====================
Testing
====================

Python / Django tests
=======================

The project is set up to run tests in CI after every push to the repository. To run the full test suite locally:

.. code-block:: bash

    python manage.py test

Model bakery
-------------

The project uses the ``model-bakery`` package to create dummy models for testing.

.. important::

    Default field values can be hardcoded using :class:`MegBaker.make` in :file:`model_generators`. An example of this:

    .. code-block:: python

        if issubclass(self.model, CustomField):
            if not attrs.get('widget'):
                attrs["widget"] = self.DEFAULT_CUSTOM_FIELD_WIDGET

    This sets the widget to always be ``TextInput``, so pipelines never fail when a field is populated with a widget that needs extra configuration.

Test mix-ins
-------------

The project contains some helpful mix-ins to easily create commonly used data for test cases. A mix-in typically contains configuration fields, which can be overwritten by a test to customize the test data, and data fields containing the objects created by the mix-in.

.. code-block:: python
    :caption: Sample use of test mix-ins in a test case

    class AuditsListViewTest(HTMXTestMixin, ViewTestMixin, AuditFormTestMixin, TestCase):
        observation_model = HandHygieneAuditObservation
        num_observations = 2
        observation_kwargs = {
            'action': HAND_HYGIENE_EVENT_CHOICES[0][0],
        }
        auditor_perms = 'megforms.view_auditsession', 'megforms.change_auditsession',

.. automodule:: megforms.test_utils
    :members:

.. automodule:: megdocs.test_utils
    :members:

.. automodule:: utils.htmx_test
    :members:

Error log entries
------------------

``ERROR``-level log entries logged during tests are escalated to a test failure by raising :exc:`~utils.error_log_handler.EscalatedErrorException`. This helps ensure that no errors go unnoticed while running tests. This behaviour is controlled by :envvar:`ESCALATE_ERRORS`. It is enabled by default in CI; if you're running tests locally, it is unlikely to be enabled by default.
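For orientation, the escalation mechanism can be pictured as a logging handler that raises on ``ERROR`` records. This is an illustrative sketch only, not the actual :mod:`utils.error_log_handler` implementation:

.. code-block:: python

    import logging

    class EscalatedErrorException(AssertionError):
        """Raised when an ERROR-level record is logged during a test (sketch)."""

    class EscalatingErrorHandler(logging.Handler):
        """Turn any ERROR-or-worse log record into an immediate exception."""

        def emit(self, record: logging.LogRecord) -> None:
            if record.levelno >= logging.ERROR:
                raise EscalatedErrorException(record.getMessage())

    logger = logging.getLogger("meg_forms")
    logger.addHandler(EscalatingErrorHandler())

With such a handler attached, any ``logger.error(...)`` call made inside a test propagates the exception up to the test method, failing it at the exact point the error was logged.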
If you expect your test to log errors, you can handle the logged error in one of two ways:

Add assertion
^^^^^^^^^^^^^^

To ensure the expected error is logged, use ``TestCase.assertLogs``:

.. code-block:: python

    with self.assertLogs('meg_forms', level=logging.ERROR) as errors:
        ...
    error, = errors.output
    self.assertIn("Expected message to be logged", error)

Explicitly suppress error
^^^^^^^^^^^^^^^^^^^^^^^^^^

You can suppress all messages at ``ERROR`` (or another level) from being logged by wrapping the test code in the :class:`~megforms.test_utils.SuppressLogging` context manager. Use this approach if you assert the expected behaviour by other means, and the error log is a byproduct that is not required for the test to pass.

.. code-block:: python
    :caption: By default, ``SuppressLogging`` suppresses the "meg_forms" logger

    with SuppressLogging():
        ...

.. code-block:: python
    :caption: Suppress request errors when making requests known to get a 404 or similar response

    with SuppressLogging('django.request'):
        response = self.client.get('invalid/url')

Running tests
--------------

Tests can be run using Django's ``manage.py test`` command. Tests also run automatically in CI. To replicate the CI environment locally, use the provided :file:`docker-compose.test.yml` file.

Run multiple tests in parallel
-------------------------------

You can add the ``--parallel`` option to run tests concurrently.

.. important::

    When running tests in :command:`docker-compose` using the development :file:`docker-compose.yaml`, celery jobs are executed asynchronously. This causes some tests to fail. To work around that, set :envvar:`CELERY_TASK_ALWAYS_EAGER` to ``True`` in the run configuration.

.. code-block:: shell
    :caption: Run tests in docker with one process per CPU

    TEST_ARGS="--parallel" docker-compose -f docker-compose.test.yml up --build --exit-code-from cms --abort-on-container-exit --renew-anon-volumes --force-recreate

Run test multiple times
^^^^^^^^^^^^^^^^^^^^^^^^

You can add the ``@repeat_test(times)`` decorator to a test to run it multiple times. ``repeat_test`` is imported from ``megforms.test_utils``.

.. important::

    Make sure to remove ``repeat_test`` and its import when done: repeating is exhaustive, and flake8 would flag the unused import.

Troubleshooting test CI jobs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To run the exact subset of tests run by a specific CI job, use :envvar:`NUM_TEST_GROUPS` and :envvar:`TEST_GROUP` to break the tests into groups and run only a specific group:

.. code-block:: shell
    :caption: Split tests into 9 groups and run only the 1st group

    export TEST_GROUP=1
    export NUM_TEST_GROUPS=9
    docker-compose -f docker-compose.test.yml up --build --exit-code-from cms --abort-on-container-exit --renew-anon-volumes --force-recreate

Specify test args
~~~~~~~~~~~~~~~~~~

You can set :envvar:`TEST_ARGS` to control the arguments passed to the test command within :command:`docker compose`, selecting a subset of tests to run, or overriding verbosity:

.. code-block:: shell
    :caption: Set verbosity and select which test class to run

    export TEST_ARGS="--verbosity 2 dashboard_widgets.tests.test_benchmark_widgets.BenchmarkWidgetTest"
    docker-compose -f docker-compose.test.yml up --build --exit-code-from cms --abort-on-container-exit --renew-anon-volumes --force-recreate

Print all SQL queries
~~~~~~~~~~~~~~~~~~~~~~~

Add the ``--debug-sql --verbosity=2`` arguments to the test command to print all SQL queries run inside the tests.
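The :envvar:`NUM_TEST_GROUPS`/:envvar:`TEST_GROUP` selection described above can be pictured as a deterministic split of the ordered test list. This is an illustrative sketch only; the project's actual splitting logic may differ:

.. code-block:: python

    def tests_for_group(all_tests, group, num_groups):
        """Return the subset of tests for one group (1-based), assigned round-robin.

        Sorting first makes the split deterministic across machines, so the same
        TEST_GROUP/NUM_TEST_GROUPS values always select the same tests.
        """
        ordered = sorted(all_tests)
        return [test for i, test in enumerate(ordered) if i % num_groups == group - 1]

    tests = [f"app.tests.test_{letter}" for letter in "abcdef"]
    tests_for_group(tests, 1, 3)  # -> ['app.tests.test_a', 'app.tests.test_d']

Because every test lands in exactly one group, running all groups covers the full suite, which is why replaying a single group locally reproduces the failing CI job.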
Testing manually
=================

When testing manually, it is important to re-build the project to ensure it contains all the latest packages::

    docker-compose up --build

You may find that the site crashes if you have applied migrations from another branch. If that's the case, delete the test database and proceed as normal - it will take longer to bring up the project and set up the database from scratch::

    docker-compose down -v
    docker-compose up --build

Error reporting
-----------------

If you encounter any crashes during manual testing, include the error output in the report. When the project is run locally, errors are logged to the terminal. The staging site uses GitLab to capture errors.

.. seealso:: :ref:`Crash reports`

Test data
-----------

When the project is set up, it runs a script that populates the database with dummy data. Additional options are available if you need to generate more test data. Most of these options are provided as :command:`manage.py` commands.

Populate audits with observations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Command: :command:`./manage.py generate_observations`

Populates the database with :term:`observations` for the given form. It creates the specified number of :term:`sessions`, each containing the given number of observations (sessions × observations in total). This action is also available in the Django admin.

Usage::

    ./manage.py generate_observations [form_id] [num_observations] [num_sessions]

Create dummy user accounts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Command: :command:`./manage.py create_dummy_users`

Creates :term:`user accounts` in bulk, giving them access to all :term:`forms` within the given :term:`institution`, along with the given set of :term:`permissions`. Users will also be assigned to the specified :term:`groups`, if given. The usernames of the new accounts are based on the given username, with a number appended.

Usage::

    ./manage.py create_dummy_users [institution_id] [num_users] [username] [--perm] [--group]

    # Example:
    ./manage.py create_dummy_users 1 100 issuehandler --perm qip.view_issue --perm qip.change_issue --group 'Admin Basic'

Generate dummy documents for :term:`MEG Docs`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Command: :command:`./manage.py generate_docs`

Adds dummy documents to an institution.

Usage::

    ./manage.py generate_docs [institution_id] [num_documents]

Create dummy :term:`forms`
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Creates empty forms, without any questions or observations.

Usage::

    ./manage.py create_dummy_forms [institution_id] [num_forms] [name] [--group-level] [--qip] [--assign-auditors] [--form-type=audit]

    # Example
    ./manage.py create_dummy_forms 1 10 testform --group-level --qip --assign-auditors --form-type=inspection

Integration Testing
====================

For integration testing the project uses Playwright to generate and run tests. Playwright provides a GUI for generating test scripts in Python, and pytest-compatible tools for running the test scripts in docker and in CI.

Installation
--------------

Install Playwright and its browsers::

    pip install playwright
    playwright install

Generating Tests
------------------

Launch the Playwright codegen test generator tool targeting localhost::

    playwright codegen http://localhost:8000

You can also target a staging site, or production, by replacing the localhost URL::

    playwright codegen https://audits.megsupporttools.com/

.. important::

    When running the Playwright inspector, ensure the target is set to "Pytest".
.. figure:: img/playwright-codegen.png

    Code generation window opened with ``playwright codegen http://localhost:8000``

When you are ready to save your script, save it to the :file:`integration_tests/tests` folder. Be sure to give the file a meaningful name.

Formatting Tests
^^^^^^^^^^^^^^^^^^

Pytest compatibility: your test scripts need to be pytest compatible. Script file names should always follow the format ``test_*.py``. Likewise, test function names should begin with ``test_``, for example::

    def test_login_failed(playwright: Playwright) -> None:
        ...

Browser host configurability: when a test script is generated by codegen, the browser host it targets is hardcoded. You'll need to manually update this so that the value is read from environment variables, which allows the test to run in different environments. You can import ``BROWSER_HOST`` from :file:`settings_playwright.py` into your test script for this purpose. For example::

    from playwright.sync_api import Playwright, sync_playwright, expect

    from integration_tests.settings_playwright import BROWSER_HOST

    def test_login_failed(playwright: Playwright) -> None:
        browser = playwright.chromium.launch()
        context = browser.new_context()
        page = context.new_page()
        page.goto(f"{BROWSER_HOST}/accounts/login?next=/")
        ...

Running Tests
----------------

First, ensure you have an instance of mat-cms running before running Playwright. You can use docker-compose to run the dev container::

    docker-compose up -d

You can then run the integration tests using the playwright docker-compose service::

    docker-compose up playwright --build

You can also target specific test scripts::

    docker-compose run playwright pytest tests/test_login.py

JavaScript Tests
====================

This project uses Jest to test JavaScript code.

Set-up
-------

Before you can run tests, you need to install ``nodejs``, then install the dependencies (including Jest):

.. code-block:: shell

    npm install

Writing tests
---------------

* Tests should be added to the :file:`js_tests/tests/{app}/` folder, where ``{app}`` is the name of the django app where the js file is located.
* The file name should match the tested file's name, but ending with ``.test.js``.
* Any third party libraries referenced in the tested code should either be:

  * mocked,
  * or included from local source (instead of being installed using ``npm``).

Running tests
--------------

By default, tests run automatically in CI, but during development you should run the tests locally using one of the available methods.

In terminal
^^^^^^^^^^^^

In the project's root directory, run the ``npm test`` command. To pass additional arguments to Jest, you need to add the ``--`` separator, e.g.::

    npm test -- js_tests/tests/megforms.test.js

In PyCharm
^^^^^^^^^^^

PyCharm supports Jest tests. You can create a run configuration for Jest pointing at the project's root directory and run the tests in the IDE. This allows you to debug tests by stepping through the js code and inspecting variables.

Using docker compose
^^^^^^^^^^^^^^^^^^^^^^

Run the ``jest`` service in docker. It mounts the project in an NPM environment and runs the tests.

.. code-block:: shell

    docker compose up jest
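The conventions above (tests under :file:`js_tests/tests/{app}/`, files named ``*.test.js``) map onto Jest configuration roughly as follows. This is an illustrative sketch only; the values are assumptions inferred from the conventions above, so check the project's actual ``package.json`` or ``jest.config.js``:

.. code-block:: javascript

    // jest.config.js - illustrative sketch, not the project's actual config.
    module.exports = {
      testEnvironment: 'jsdom',        // browser-like DOM for front-end code
      roots: ['<rootDir>/js_tests'],   // tests live under js_tests/tests/{app}/
      testMatch: ['**/*.test.js'],     // file names end with .test.js
    };

Note that ``testEnvironment: 'jsdom'`` requires the ``jest-environment-jsdom`` package in recent Jest versions.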