====================
Testing
====================
Python / Django tests
=======================
The project is set up to run tests in CI after every push to the repository.
To run the full test suite locally:

.. code-block:: bash

   python manage.py test
Model bakery
-------------
The project uses the ``model-bakery`` package to create dummy model instances for testing.
.. important::

   Default field values can be hardcoded using :class:`MegBaker.make` in :file:`model_generators`.

An example of this:

.. code-block:: python

   if issubclass(self.model, CustomField):
       if not attrs.get('widget'):
           attrs["widget"] = self.DEFAULT_CUSTOM_FIELD_WIDGET

This sets the widget to always be ``TextInput``, so pipelines never fail when a model is populated with a widget that needs extra configuration.
Test mix-ins
-------------
The project contains some helpful mixins to easily create commonly used data for test cases.
The mix-ins typically contain configuration fields, which tests can override to customise the test data,
and data fields holding the objects created by the mixin.
.. code-block:: python
   :caption: Sample use of test mix-ins in a test case

   class AuditsListViewTest(HTMXTestMixin, ViewTestMixin, AuditFormTestMixin, TestCase):
       observation_model = HandHygieneAuditObservation
       num_observations = 2
       observation_kwargs = {
           'action': HAND_HYGIENE_EVENT_CHOICES[0][0],
       }
       auditor_perms = 'megforms.view_auditsession', 'megforms.change_auditsession',
.. automodule:: megforms.test_utils
   :members:

.. automodule:: megdocs.test_utils
   :members:

.. automodule:: utils.htmx_test
   :members:
Error log entries
----------------------------
``ERROR``-level log entries logged during tests are escalated to a test failure by raising :exc:`~utils.error_log_handler.EscalatedErrorException`.
This helps ensure that no errors go unnoticed while running tests.
This behaviour is controlled by :envvar:`ESCALATE_ERRORS`. It is enabled by default in CI, but is unlikely to be enabled when running tests locally.
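The escalation mechanism can be pictured as a logging handler that raises on ``ERROR`` records. The following is a minimal illustrative sketch, not the actual implementation in :mod:`utils.error_log_handler`; the class names here are stand-ins:

```python
import logging


class EscalatedErrorException(AssertionError):
    """Stand-in for utils.error_log_handler.EscalatedErrorException."""


class EscalatingErrorHandler(logging.Handler):
    """Sketch: turn ERROR-level log records into test failures."""

    def emit(self, record):
        if record.levelno >= logging.ERROR:
            # An exception raised in emit() propagates out of the
            # logging call, surfacing the error as a test failure.
            raise EscalatedErrorException(record.getMessage())


logger = logging.getLogger("escalation_demo")
logger.addHandler(EscalatingErrorHandler())
logger.warning("warnings pass through")  # below ERROR, so no exception
try:
    logger.error("this fails the test")
except EscalatedErrorException as exc:
    print(f"escalated: {exc}")
```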
If you expect your test to log errors, you can prevent the test failure in one of the following ways:
Add an assertion
^^^^^^^^^^^^^^^^
To ensure the expected error is logged, use ``TestCase.assertLogs``:

.. code-block:: python

   with self.assertLogs('meg_forms', level=logging.ERROR) as errors:
       ...
   error, = errors.output
   self.assertIn("Expected message to be logged", error)
Explicitly suppress the error
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can suppress all messages at ``ERROR`` (or another) level by wrapping the test code in the :class:`~megforms.test_utils.SuppressLogging` context manager.
Use this approach if you assert expected behaviour by other means, and the error log is a byproduct that is not required for the test to pass.
.. code-block:: python
   :caption: By default, ``SuppressLogging`` suppresses the "meg_forms" logger

   with SuppressLogging():
       ...
.. code-block:: python
   :caption: Suppress request errors when making requests known to return a 404 or similar response

   with SuppressLogging('django.request'):
       response = self.client.get('invalid/url')
Running tests
--------------
Tests can be run using Django's ``manage.py test`` command. Tests also run automatically in CI.
To replicate the CI environment locally, use the provided :file:`docker-compose.test.yaml` file.
Run multiple tests in parallel
-------------------------------
You can add the ``--parallel`` option to run tests concurrently.

.. important:: When running tests in :command:`docker-compose` using the development :file:`docker-compose.yaml`, celery jobs are executed asynchronously.
   This causes some tests to fail. To work around this, set :envvar:`CELERY_TASK_ALWAYS_EAGER` to ``True`` in the run configuration.

.. code-block:: shell
   :caption: Run tests in docker with one process per CPU

   TEST_ARGS="--parallel" docker-compose -f docker-compose.test.yml up --build --exit-code-from cms --abort-on-container-exit --renew-anon-volumes --force-recreate
Run a test multiple times
^^^^^^^^^^^^^^^^^^^^^^^^^
You can add the ``@repeat_test(times)`` decorator to a test to run it multiple times; ``repeat_test`` is imported from ``megforms.test_utils``.

.. important:: Make sure to remove ``repeat_test`` and its import when done: repeated runs are expensive, and flake8 would flag the unused import.
Troubleshooting test CI jobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To run the exact subset of tests run by a specific CI job, use :envvar:`NUM_TEST_GROUPS` and :envvar:`TEST_GROUP`
to break the tests into groups and run only a specific group:

.. code-block:: shell
   :caption: Split tests into 9 groups and run only the 1st group

   export TEST_GROUP=1
   export NUM_TEST_GROUPS=9
   docker-compose -f docker-compose.test.yml up --build --exit-code-from cms --abort-on-container-exit --renew-anon-volumes --force-recreate
Specify test args
~~~~~~~~~~~~~~~~~~~~~~~
You can set :envvar:`TEST_ARGS` to control the arguments passed to the test command within :command:`docker compose`, selecting the tests to run or overriding the verbosity.

.. code-block:: shell
   :caption: Set verbosity and select which test class to run

   export TEST_ARGS="--verbosity 2 dashboard_widgets.tests.test_benchmark_widgets.BenchmarkWidgetTest"
   docker-compose -f docker-compose.test.yml up --build --exit-code-from cms --abort-on-container-exit --renew-anon-volumes --force-recreate
Print all SQL queries
~~~~~~~~~~~~~~~~~~~~~~~
Add the ``--debug-sql --verbosity=2`` arguments to the test command to print all SQL queries run inside the tests.
Testing manually
=================
When testing manually, it is important to rebuild the project to ensure it contains all the latest packages::

   docker-compose up --build
You may find that the site crashes if you have applied migrations from another branch.
If that's the case, delete the test database and proceed as normal - it will take longer to bring the project up and set up the database from scratch::

   docker-compose down -v
   docker-compose up --build
Error reporting
-----------------
If you encounter any crashes during manual testing, include the error output in the report.
When the project is run locally, errors are logged to the terminal.
The staging site uses GitLab to capture errors.
.. seealso:: :ref:`Crash reports`
Test data
-----------
When the project is set up, it runs a script that populates the database with dummy data.
Additional options are available if you need to generate more test data.
Most of these options are exposed via the :command:`manage.py` command-line script.
Populate audits with observations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Command: :command:`./manage.py generate_observations`
Populates the database with :term:`observations` for the given form.
It creates the specified number of :term:`sessions`, each with the given number of observations (sessions × observations).
This action is also available in the Django admin.
Usage::

   ./manage.py generate_observations [form_id] [num_observations] [num_sessions]
Create dummy user accounts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Command: :command:`./manage.py create_dummy_users`
Create :term:`user accounts` in bulk, giving them access to all :term:`forms