From d644f9595f5d669332c3753cab413d97a29b5c54 Mon Sep 17 00:00:00 2001
From: Ishaan Desai
Date: Tue, 13 Jan 2026 10:20:07 +0100
Subject: [PATCH 1/6] Rework intro slides of Chapter 5: Testing and CI

---
 05_testing_and_ci/intro_slides.md | 77 +++++++++++++++----------------
 timetable.md                      |  9 ++++
 2 files changed, 45 insertions(+), 41 deletions(-)

diff --git a/05_testing_and_ci/intro_slides.md b/05_testing_and_ci/intro_slides.md
index 7be2e9d7..78afffec 100644
--- a/05_testing_and_ci/intro_slides.md
+++ b/05_testing_and_ci/intro_slides.md
@@ -29,7 +29,7 @@ slideOptions:
 
 ## Learning Goals of the Chapter
 
-- Explain why developing tests is crucial.
+- Explain the importance of writing tests for simulation software.
 - Explain the concepts of unit testing, integration testing and regression testing from the perspective of simulation software.
 - Write tests using the Python libraries `pytest` and `unittest`.
 - Write tests in C++ using `Boost.Test`.
@@ -41,35 +41,36 @@ slideOptions:
 
 ## What is Testing?
 
 - Smelling old milk before using it!
-- A way to determine if a software is not producing reliable results and if so, what is the reason.
-- Manual testing vs. Automated testing.
+- A method to ensure that the software produces reliable results.
+- Manual testing vs. automated testing.
 
 ---
 
 ## Why Should you Test your Software?
 
-- Improve software reliability and reproducibility.
-- Make sure that changes (bugfixes, new features) do not affect other parts of software.
-- Generally all software is better off being tested regularly. Possible exceptions are very small codes with single users.
-- Ensure that a distributed/packaged software actually works.
+- Catch errors before the software is used in the real world.
+- Improve software reliability.
+- Make sure that changes (bugfixes, features) do not introduce bugs.
+- All software is better off being tested regularly. Exceptions could be very small codes with single users.
+- Ensure that the packaged version works as expected.
 ---
 
 ## Nomenclature in Software Testing
 
-- **Fixture**: preparatory set for testing.
-- **Actual result**: what the code produces when given the fixture.
-- **Expected result**: what the actual result is compared to.
-- **Test coverage**: how much of the code do tests touch in one run.
+- **Fixture**: preparatory definitions for testing.
+- **Actual result**: what the software produces with the fixture.
+- **Expected result**: the ground truth that the actual result is compared to.
+- **Test coverage**: how much of the software the tests run through.
 
 ---
 
 ## Some Ways to Test Software
 
 - Assertions
-- Unit testing
-- Integration testing
-- Regression testing
+- Unit tests
+- Integration tests
+- Regression tests
 
 ---
 
@@ -78,11 +79,11 @@
 ## Assertions
 
 - Principle of *defensive programming*.
 - Nothing happens when an assertion is true; an error is thrown when it is false.
 - Types of assertion statements:
-  - Precondition
-  - Postcondition
-  - Invariant
+  - Precondition: something that must be true at the start.
+  - Postcondition: something that is true after execution.
+  - Invariant: something that is always true.
 - A basic but powerful tool to test software on the go.
-- Assertion statement syntax in Python
+- Assertion statement syntax in Python:
 
 ```python
 assert condition, "message"
@@ -90,50 +91,48 @@ assert condition, "message"
 ```
 
 ---
 
-## Unit Testing
+## Unit Tests
 
 - Catching errors with assertions is good but preventing them is better!
 - A *unit* is a single function in one situation.
 - A situation is one amongst many possible variations of input parameters.
-- User creates the expected result manually.
-- A fixture is a set of inputs used to generate an actual result.
+- Expected result is created manually.
 - Actual result is compared to the expected result, e.g. using an assertion statement.
 
 ---
 
-## Integration Testing
+## Integration Tests
 
 - Test whether several units work in conjunction.
 - *Integrate* units and test them together in an *integration* test.
 - Often more complicated than a unit test and has more test coverage.
-- A fixture is used to generate an actual result.
 - Actual result is compared to the expected result, e.g. using an assertion statement.
 
 ---
 
-## Regression Testing
+## Regression Tests
 
-- Generating an expected result is not possible in some situations.
+- Generating an expected result is not always possible.
 - Compare the current actual result with a previous actual result.
 - No guarantee that the current actual result is correct.
-- Risk of a bug being carried over indefinitely.
-- Main purpose is to identify changes in the current state of the code with respect to a past state.
+- Does not catch long-existing bugs.
+- Compare changes in the current state of the software with respect to a past (reliable) state.
 
 ---
 
 ## Test Coverage
 
-- Coverage is the amount of code a test runs through.
+- Coverage is the amount of the software that is executed by the tests.
 - Aim for high test coverage.
-- There is a trade-off: extremely high test coverage vs. effort in test development
+- Trade-off: extremely high test coverage vs. effort in test development.
 
 ---
 
 ## Comparing Floating-point Variables
 
-- Very often quantities in simulation software are `float` / `double`.
-- Such quantities cannot be compared to exact values, an approximation is necessary.
-- Comparison of floating point variables needs to be done to a certain tolerance.
+- Very often data in simulation software is of type `float` or `double`.
+- Such data cannot be compared to exact values; an approximation is necessary.
+- Such data is compared up to a certain tolerance.
 - In `pytest` there is `pytest.approx(value, abs=tol)`.
 - In `unittest` there is `assertAlmostEqual()`.
 
@@ -141,7 +140,7 @@ assert condition, "message"
 
 ## Test-driven Development (TDD)
 
-- Principle of writing a test and then write a code to fulfill the test.
+- Idea: write a test and then write part of the software to fulfill the test.
 - Advantages:
   - Leads to a robust test along with the implementation.
   - Eliminates confirmation bias of the user.
@@ -151,19 +150,15 @@ assert condition, "message"
   - False security from tests.
   - Neglect of overall design.
 
-Source: https://en.wikipedia.org/wiki/Test-driven_development
+[TDD on Wikipedia](https://en.wikipedia.org/wiki/Test-driven_development)
 
 ---
 
 ## Verifying a Test
 
-- Test written as part of a bug-fix:
-  - Reproduce the bug in the test by ensuring that the test fails.
-  - Fix the bug.
-  - Rerun the test to ensure that it passes.
-- Test written to increase code coverage:
-  - Make sure that the first iteration of the test passes.
-  - Try introducing a small fixable bug in the code to verify if the test fails.
+- Reproduce the bug in the test by ensuring that the test fails.
+- Fix the bug.
+- Rerun the test to ensure that it passes.
 
 ---
 
diff --git a/timetable.md b/timetable.md
index 03869a62..d470caa7 100644
--- a/timetable.md
+++ b/timetable.md
@@ -112,3 +112,12 @@
 - **20** min.: [Versioning](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/06_miscellaneous/versioning_slides.md)
 - **20** min.: [Repository Layouts](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/06_miscellaneous/repository_layouts_slides.md)
 - **20** min.: [DOI, Zenodo, DaRUS](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/06_miscellaneous/doi_zenodo_darus_slides.md)
+
+## 12.1 – Wed, January 14, 2026
+
+- **20** min.: Introduction to Testing: [slides](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/intro_slides.md)
+- **70** min.: Testing Python Code: [slides](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/python_testing_slides.md), [demo](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/python_testing_demo.md)
+
+## 12.2 – Wed, January 14, 2026
+
+- 
**90** min.: [Exercise: Testing Python Code](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/python_testing_exercise.md) \ No newline at end of file From 6559119a767ea532d896475884a2fbba10e48f2e Mon Sep 17 00:00:00 2001 From: Ishaan Desai Date: Tue, 13 Jan 2026 14:06:43 +0100 Subject: [PATCH 2/6] Revamp functions used in the Python test demo, and their corresponding tests --- .../examples/python_testing/mean_data.csv | 2 - .../examples/python_testing/operations.py | 147 ++++++++++++------ .../python_testing/reordered_data.csv | 1 + .../python_testing/test_operations.py | 90 ++++++----- 05_testing_and_ci/python_testing_demo.md | 49 ++---- 05_testing_and_ci/python_testing_slides.md | 9 +- 6 files changed, 159 insertions(+), 139 deletions(-) delete mode 100644 05_testing_and_ci/examples/python_testing/mean_data.csv create mode 100644 05_testing_and_ci/examples/python_testing/reordered_data.csv diff --git a/05_testing_and_ci/examples/python_testing/mean_data.csv b/05_testing_and_ci/examples/python_testing/mean_data.csv deleted file mode 100644 index e8df09f4..00000000 --- a/05_testing_and_ci/examples/python_testing/mean_data.csv +++ /dev/null @@ -1,2 +0,0 @@ -14,38,57,88,1,18,198 -59.14 diff --git a/05_testing_and_ci/examples/python_testing/operations.py b/05_testing_and_ci/examples/python_testing/operations.py index e4721d4e..2b936f2e 100644 --- a/05_testing_and_ci/examples/python_testing/operations.py +++ b/05_testing_and_ci/examples/python_testing/operations.py @@ -2,62 +2,109 @@ A set of mathematical operations. """ - -def find_max(data): - """ - Find maximum of all elements of a given list - - Parameters - ---------- - data : list - List of data. 
Elements are numbers - - Returns - ------- - find_max : float - Maximum of list - """ - # Check that the input list has numbers - for n in data: - assert type(n) == int or type(n) == float - - max_num = data[0] # Assume the first number is the maximum - for n in data: - if n > max_num: - max_num = n - - return max_num - - -def find_mean(data): - """ - Find mean of all elements of a given list - - Parameters - ---------- - data : list - List of data. Elements are numbers - - Returns - ------- - float : float - Mean of list - """ - # Check that the input list has numbers - for n in data: - assert type(n) == int or type(n) == float - - return sum(data) / len(data) +class MathOperations: + + def __init__(self, data): + self._data = data + + def reorder_data(self): + """ + Reorder data in ascending order + """ + self._data.sort() + + def find_max(self): + """ + Find maximum of all elements of a given list + Parameters + ---------- + data : list + List of data. Elements are numbers + + Returns + ------- + find_max : float + Maximum of list + """ + # Check that the input list has numbers + for n in self._data: + assert type(n) == int or type(n) == float + + max_num = self._data[0] # Assume the first number is the maximum + for n in self._data: + if n > max_num: + max_num = n + + return max_num + + def find_median(self): + """ + Find median of all elements of a given list + + Parameters + ---------- + data : list + List of data. 
Elements are numbers + + Returns + ------- + float : float + Median of list + """ + # Check that the input list has numbers + for n in self._data: + assert type(n) == int or type(n) == float + + # Sort the data to find the median + sorted_data = sorted(self._data) + n = len(sorted_data) + + # If odd number of elements, return the middle one + if n % 2 == 1: + return sorted_data[n // 2] + # If even number of elements, return the average of the two middle ones + else: + mid1 = sorted_data[n // 2 - 1] + mid2 = sorted_data[n // 2] + return (mid1 + mid2) / 2 + + def find_mean(self): + """ + Find mean of all elements of a given list + + Parameters + ---------- + data : list + List of data. Elements are numbers + + Returns + ------- + float : float + Mean of list + """ + # Check that the input list has numbers + for n in self._data: + assert type(n) == int or type(n) == float + + total = sum(self._data) + count = len(self._data) + mean = total / count + return mean def main(): - data = [5, 3, 14, 27, 4, 9] + data = [5, 3, 14, 27, 4, 9, 53] + + math_ops = MathOperations(data) - maximum = find_max(data) + maximum = math_ops.find_max() print("Maximum of {} is {}".format(data, maximum)) - mean = find_mean(data) - print("Average of {} is {}".format(data, mean)) + median = math_ops.find_median() + print("Median of {} is {}".format(data, median)) + + mean = math_ops.find_mean() + print("Mean of {} is {}".format(data, mean)) if __name__ == "__main__": diff --git a/05_testing_and_ci/examples/python_testing/reordered_data.csv b/05_testing_and_ci/examples/python_testing/reordered_data.csv new file mode 100644 index 00000000..7cd9074b --- /dev/null +++ b/05_testing_and_ci/examples/python_testing/reordered_data.csv @@ -0,0 +1 @@ +1,17,18,32,43,167,209 diff --git a/05_testing_and_ci/examples/python_testing/test_operations.py b/05_testing_and_ci/examples/python_testing/test_operations.py index 384b15ad..3d105598 100644 --- a/05_testing_and_ci/examples/python_testing/test_operations.py 
+++ b/05_testing_and_ci/examples/python_testing/test_operations.py @@ -2,87 +2,91 @@ Tests for mathematical operations functions. """ -from operations import find_max, find_mean +from operations import MathOperations import pytest import csv +@pytest.fixture +def math_operations(): + """ + Fixture for MathOperations class + """ + data = [43, 32, 167, 18, 1, 209, 17] + return MathOperations(data) + # Unit test -def test_find_max(): +def test_find_max(math_operations): """ Test operations.find_max """ - # Fixture - data = [43, 32, 167, 18, 1, 209] - # Expected result expected_max = 209 # Actual result - actual_max = find_max(data) + actual_max = math_operations.find_max() # Test assert actual_max == expected_max +# Unit test +def test_reorder_data(math_operations): + """ + Test operations.reorder_data + """ + # Expected result + expected_data = [1, 17, 18, 32, 43, 167, 209] + + # Actual result + math_operations.reorder_data() + actual_data = math_operations._data + + # Test + assert actual_data == expected_data # Unit test -def test_find_mean(): +def test_find_mean(math_operations): """ Test operations.find_mean """ - # Fixture - data = [43, 32, 167, 18, 1, 209] - # Expected result - expected_mean = 78.33 - # expected_mean = pytest.approx(78.33, abs=0.01) - + expected_mean = 69.57 + # Actual result - actual_mean = find_mean(data) - + actual_mean = math_operations.find_mean() + # Test assert actual_mean == expected_mean - # Integration test -def test_mean_of_max(): - """ - Test operations.find_max and operations.find_mean +def test_find_median(math_operations): """ - # Fixture - data1 = [43, 32, 167, 18, 1, 209] - data2 = [3, 13, 33, 23, 498] - + Test operations.find_median + """ # Expected result - expected_mean_of_max = 353.5 - - maximum1 = find_max(data1) - maximum2 = find_max(data2) - + expected_median = 32 + # Actual result - actual_mean_of_max = find_mean([maximum1, maximum2]) + actual_median = math_operations.find_median() # Test - assert actual_mean_of_max == 
expected_mean_of_max + assert actual_median == expected_median # Regression test -def test_regression_mean(): +def test_reg_reorder_data(math_operations): """ - Test operations.find_mean on a previously generated dataset + Test operations.reorder_data with data from CSV file """ - with open("mean_data.csv") as f: + with open("reordered_data.csv") as f: rows = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC) - # Fixture - data = next(rows) - - # Expected result - reference_mean = next(rows) + + for row in rows: + expected_reordered_data = row # Actual result - actual_mean = find_mean(data) - - expected_mean = pytest.approx(reference_mean[0], abs=0.01) - + math_operations.reorder_data() + actual_reordered_data = math_operations._data + # Test - assert actual_mean == expected_mean + assert actual_reordered_data == expected_reordered_data diff --git a/05_testing_and_ci/python_testing_demo.md b/05_testing_and_ci/python_testing_demo.md index 8ac43cb7..281a3d5d 100644 --- a/05_testing_and_ci/python_testing_demo.md +++ b/05_testing_and_ci/python_testing_demo.md @@ -4,57 +4,26 @@ Example code is in [05_testing_and_ci/examples/python_testing](https://github.co ## Software Code Used -- The file `operations.py` consists of two functions `find_max` and `find_mean` which calculate the maximum and mean of all elements of a list. The `main()` routine in the file applies the functions to a list and prints the output. -- `main()` function in `operations.py` has assertion statements to check if the correct data type is passed to specific functions. -- Assertion statements are the most basic way of testing code and are also used in unit and integration testing. -- Tests are written in the file `test_operations.py`. The `test_*` prefix in the name is required so that pytest detects the file as a testing file. Suffix form `*_test.py` also works. -- In all there are two unit tests, one integration test and one regression test. 
-- The unit tests test the individual functions `find_max` and `find_mean`. -- The integration test triggers both the functions `find_max` and `find_mean` and checks that the mean is less than the maximum, something that should always be true for a set of numbers. -- The regression test first reads an old data set and a mean value from a CSV file. Then the function `find_mean` is run with the old data set and the new mean value is compared to the old one. +The file [operations.py](examples/python_testing/operations.py) consists of a class `MathOperations` that has the following functions: `reorder_data`, `find_max`, `find_median`, and `find_mean`. The `main()` routine in the file applies the functions to a list and prints the output. ## pytest - pytest is installed using pip: `pip install pytest`. - All tests can be run using the command-line tool called `pytest`. Just type `pytest` in the working directory and hit ENTER. - If pytest is installed in some other way, you might need to run it like `python -m pytest`. -- One test is expected to fail. Reading the error message we understand that the failure occurs because floating-point variable comparison is not handled correctly. -- We need to tell pytest that while comparing two floating-point variables the value needs to be correct only up to a certain tolerance limit. To do this, the expected mean value needs to be changed by uncommenting the line in the following part of the code: +- Tests are written in the file `test_operations.py`. The `test_*` prefix in the name is required so that pytest detects the file as a testing file. Suffix form `*_test.py` also works. +- There are unit tests for the functions `reorder_data`, `find_max`, and `find_mean`. +- There is an integration test for the function `find_median`, and a regression test for `reorder_data`. The regression test reads in a list from a CSV file. +- The test fixture is defined under `@pytest.fixture`. 
pytest runs this fixture for each test that requests it and passes the returned object to that test.
+- One test fails. The error message states that the failure occurs because floating-point variable comparison is not handled correctly.
+- While comparing two floating-point variables, the value needs to be correct only up to a certain tolerance limit. To do this, the expected mean value needs to be changed in the following way:
 
 ```python
 # Expected result
-    expected_mean = 78.33
-    # expected_result = pytest.approx(78.3, abs=0.01)
-```
-
-- **Comparing floating point variables** needs to be handled in functions like `find_mean` and is done using `pytest.approx(value, abs)`. The `abs` value is the tolerance up to which the floating-point value will be checked, that is `78.33 +/- 0.01`.
-- Even if one test fails, pytest runs all the tests and gives a report on the failing test. The assertion failure report generated my pytest is also more detailed than the usual Python assertion report. When the test fails, the following is observed:
-
-```bash
-========================================== FAILURES ===============================================
-_______________________________________ test_find_mean ____________________________________________
-
-    def test_find_mean():
-        """
-        Test operations.find_mean
-        """
-        # Fixture
-        data = [43, 32, 167, 18, 1, 209]
-
-        # Expected result
-        expected_mean = 78.33
-        # expected_result = pytest.approx(78.33, abs=0.01)
-
-        # Actual result
-        actual_mean = find_mean(data)
-
-        # Test
->       assert actual_mean == expected_mean
-E       assert 78.33333333333333 == 78.33
-
-test_operations.py:44: AssertionError
+expected_mean = pytest.approx(69.57, rel=1e-2)
 ```
 
+- Even if one test fails, pytest runs the rest and gives a report on the failing test.
 - pytest not only points to the assertion but also prints out the test which has failed.
- It is worth noting that pytest is also able to detect tests from other files and run them even if they are not in the conventional test formats. - pytest is able to detect tests in several forms of folder structures, and the folder structures have advantages and disadvantages. More information on this is in the [documentation](https://docs.pytest.org/en/6.2.x/goodpractices.html#choosing-a-test-layout-import-rules). In this demo we use the simplest folder structure where the source file and the test files are at the same directory level. Very often this is not the case. A more organized folder structure can be generated: diff --git a/05_testing_and_ci/python_testing_slides.md b/05_testing_and_ci/python_testing_slides.md index 8c720921..9bb2b31b 100644 --- a/05_testing_and_ci/python_testing_slides.md +++ b/05_testing_and_ci/python_testing_slides.md @@ -29,12 +29,12 @@ slideOptions: ## pytest -- Library to write and manage tests. -- Command-line tool also called `pytest`. -- Install using pip: `pip install -U pytest`. +- Package to write and manage tests. +- Includes command-line interface called `pytest`. +- Install using pip: `pip install pytest`. - All tests need to be in files named `test_*.py`. - Each test function needs to be named as `test_*`. -- pytest gives a detailed description of assertion checks. +- pytest gives a detailed description for failing tests. --- @@ -48,6 +48,7 @@ slideOptions: - Many features like test automation, sharing of setup and shutdown of tests, etc. - Use the base class `unittest.TestCase` to create a test suite. - Command-line interface: `python -m unittest test_module1 test_module2 ...`. +- Part of the Python standard library. --- From 540f0f99ce33f4fffc2ec18c3095a9cef78d8bda Mon Sep 17 00:00:00 2001 From: Ishaan Desai Date: Tue, 13 Jan 2026 15:02:53 +0100 Subject: [PATCH 3/6] Rework unittest content and tests. 
Add mock testing to the demo --- .../test_operations_unittests.py | 64 +++++++++++-------- 05_testing_and_ci/python_testing_demo.md | 2 +- 05_testing_and_ci/python_testing_slides.md | 1 + timetable.md | 2 +- 4 files changed, 42 insertions(+), 27 deletions(-) diff --git a/05_testing_and_ci/examples/python_testing/test_operations_unittests.py b/05_testing_and_ci/examples/python_testing/test_operations_unittests.py index 36c70611..ab79d81e 100644 --- a/05_testing_and_ci/examples/python_testing/test_operations_unittests.py +++ b/05_testing_and_ci/examples/python_testing/test_operations_unittests.py @@ -1,9 +1,10 @@ """ Tests for mathematical operations functions. """ -from operations import find_max, find_mean +from operations import MathOperations import unittest from unittest import TestCase +from unittest.mock import MagicMock import csv @@ -13,8 +14,8 @@ class TestOperations(TestCase): """ def setUp(self): # Fixture - self.data1 = [43, 32, 167, 18, 1, 209] - self.data2 = [3, 13, 33, 23, 498] + self._data = [43, 32, 167, 18, 1, 209, 17] + self._math_ops = MathOperations(self._data) # Unit test def test_find_max(self): @@ -25,7 +26,7 @@ def test_find_max(self): expected_max = 209 # Actual result - actual_max = find_max(self.data1) + actual_max = self._math_ops.find_max() # Test self.assertEqual(actual_max, expected_max) @@ -36,49 +37,62 @@ def test_find_mean(self): Test operations.find_mean """ # Expected result - expected_mean = 78.33 + expected_mean = 69.57 # Actual result - actual_mean = find_mean(self.data1) + actual_mean = self._math_ops.find_mean() # Test self.assertAlmostEqual(actual_mean, expected_mean, 2) - # Integration test - def test_mean_of_max(self): + # Unit test + def test_unit_find_median(self): """ - Test operations.find_max and operations.find_mean + Test operations.find_median """ # Expected result - expected_mean_of_max = 353.5 + expected_median = 32 - maximum1 = find_max(self.data1) - maximum2 = find_max(self.data2) + # Mock reorder_data to 
isolate the test
+        self._math_ops.reorder_data = MagicMock(return_value=[1, 17, 18, 32, 43, 167, 209])
 
         # Actual result
-        actual_mean_of_max = find_mean([maximum1, maximum2])
-
+        actual_median = self._math_ops.find_median()
+
+        # Test
+        self.assertEqual(actual_median, expected_median)
+
+    # Integration test
+    def test_median(self):
+        """
+        Test operations.find_median
+        """
+        # Expected result
+        expected_median = 32
+
+        # Actual result
+        actual_median = self._math_ops.find_median()
 
         # Test
-        self.assertEqual(actual_mean_of_max, expected_mean_of_max)
+        self.assertEqual(actual_median, expected_median)
 
     # Regression test
-    def test_regression_mean(self):
+    def test_reg_reorder_data(self):
         """
-        Test operations.find_mean on a previously generated dataset
+        Test operations.reorder_data with data from CSV file
         """
-        with open("mean_data.csv") as f:
+        with open("reordered_data.csv") as f:
             rows = csv.reader(f, quoting=csv.QUOTE_NONNUMERIC)
 
-            # Fixture
-            data = next(rows)
-
-            # Expected result
-            reference_mean = next(rows)
+            for row in rows:
+                expected_reordered_data = row
 
         # Actual result
-        actual_mean = find_mean(data)
+        self._math_ops.reorder_data()
+        actual_reordered_data = self._math_ops._data
 
         # Test
-        self.assertAlmostEqual(actual_mean, reference_mean[0], 2)
+        self.assertEqual(actual_reordered_data, expected_reordered_data)
 
 if __name__ == "__main__":
     # Run the tests
diff --git a/05_testing_and_ci/python_testing_demo.md b/05_testing_and_ci/python_testing_demo.md
index 281a3d5d..6abc45c9 100644
--- a/05_testing_and_ci/python_testing_demo.md
+++ b/05_testing_and_ci/python_testing_demo.md
@@ -49,8 +49,8 @@ tests/
 - `unittest.main()` provides an option to run the tests from a command-line interface and also from a file.
 - The `setUp` function is executed before each test. Similarly, a clean-up function `tearDown` exists.
 - The intention is to group together sets of similar tests in an instance of `unittest.TestCase` and have multiple such instances.
+- A unit test for the function `find_median` is written by mocking the function `reorder_data` using [MagicMock](https://docs.python.org/3/library/unittest.mock.html#magic-mock) from `unittest.mock`. The function `reorder_data` is mocked so that the function `find_median` can be tested in isolation.
 - Decorators such as `@unittest.skip`, `@unittest.skipIf`, `@unittest.expectedFailure` can be used to gain flexibility over how tests run.
-- `unittest.TestCase.subTest` can be used to distinguish parameters inside the body of a test.
 
 ## coverage
 
diff --git a/05_testing_and_ci/python_testing_slides.md b/05_testing_and_ci/python_testing_slides.md
index 9bb2b31b..a16f59aa 100644
--- a/05_testing_and_ci/python_testing_slides.md
+++ b/05_testing_and_ci/python_testing_slides.md
@@ -47,6 +47,7 @@ slideOptions:
 - Python framework specifically designed to run, monitor and automate unit tests.
 - Many features like test automation, sharing of setup and shutdown of tests, etc.
 - Use the base class `unittest.TestCase` to create a test suite.
+- `MagicMock` from `unittest.mock` used for mock testing.
 - Command-line interface: `python -m unittest test_module1 test_module2 ...`.
 - Part of the Python standard library.
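The mock-based testing mentioned in these slides can be sketched with the standard library alone. `DataSource` and `load_data` below are hypothetical names chosen only for illustration; they are not part of the lecture code:

```python
import unittest
from unittest.mock import MagicMock


class DataSource:
    """Hypothetical class whose load_data() would be expensive to run for real."""

    def load_data(self):
        raise RuntimeError("would read from disk")


class TestWithMock(unittest.TestCase):
    def setUp(self):
        # Fixture: replace the real method with a mock that returns fixed data
        self.source = DataSource()
        self.source.load_data = MagicMock(return_value=[1, 17, 18, 32, 43, 167, 209])

    def test_max_of_loaded_data(self):
        # The code under test never touches the real (expensive) implementation
        data = self.source.load_data()
        self.assertEqual(max(data), 209)
        # The mock also records how it was called
        self.source.load_data.assert_called_once()
```

Saved as a `test_*.py` file, this would be picked up by `python -m unittest` like any other test case.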
diff --git a/timetable.md b/timetable.md
index d470caa7..faab9ad1 100644
--- a/timetable.md
+++ b/timetable.md
@@ -120,4 +120,4 @@
 
 ## 12.2 – Wed, January 14, 2026
 
-- **90** min.: [Exercise: Testing Python Code](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/python_testing_exercise.md)
\ No newline at end of file
+- **90** min.: [Exercise: Testing Python Code](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/python_testing_exercise.md)

From a5d73140c3fd24a7ad4091ab88d1bf1c63ac55e4 Mon Sep 17 00:00:00 2001
From: Ishaan Desai
Date: Tue, 13 Jan 2026 16:17:31 +0100
Subject: [PATCH 4/6] Rework tox and coverage

---
 .../examples/python_testing/tox.toml     |  8 +++++--
 05_testing_and_ci/python_testing_demo.md | 24 ++++++-------------
 2 files changed, 13 insertions(+), 19 deletions(-)

diff --git a/05_testing_and_ci/examples/python_testing/tox.toml b/05_testing_and_ci/examples/python_testing/tox.toml
index 646b61d3..fea5a2b2 100644
--- a/05_testing_and_ci/examples/python_testing/tox.toml
+++ b/05_testing_and_ci/examples/python_testing/tox.toml
@@ -1,7 +1,11 @@
 requires = ["tox>=4"]
-env_list = ["testing"]
+env_list = ["pytest_testing", "unittest"]
 
-[env.testing]
+[env.pytest_testing]
 description = "Run pytest"
 deps = ["pytest>=8"]
 commands = [["pytest"]]
+
+[env.unittest]
+description = "Run unittest"
+commands = [["python", "-m", "unittest", "test_operations_unittests"]]

diff --git a/05_testing_and_ci/python_testing_demo.md b/05_testing_and_ci/python_testing_demo.md
index 6abc45c9..90168a9b 100644
--- a/05_testing_and_ci/python_testing_demo.md
+++ b/05_testing_and_ci/python_testing_demo.md
@@ -1,6 +1,6 @@
 # Notes for Demos of Python Testing Frameworks
 
-Example code is in [05_testing_and_ci/examples/python_testing](https://github.com/Simulation-Software-Engineering/Lecture-Material/blob/main/05_testing_and_ci/examples/python_testing)
+Example code is in 
[05_testing_and_ci/examples/python_testing](examples/python_testing/).
 
 ## Software Code Used
 
@@ -44,6 +44,7 @@ tests/
 - Base class `unittest.TestCase` is used to create a test suite.
 - Each test is now a function of a class which is derived from the class `unittest.TestCase`.
 - The same tests as for `pytest` are implemented using `unittest` in the file `test_operations_unittests.py`. The tests are functions of a class named `TestOperations` which tests our mathematical operations. The class `TestOperations` is derived from `unittest.TestCase`.
+- unittest discovers tests based on identifiers. A [test discovery](https://docs.python.org/3/library/unittest.html#test-discovery) mechanism is followed.
 - unittest can be run as a Python module: `python -m unittest`.
 - unittest.TestCase offers functions like `assertEqual`, `assertAlmostEqual`, `assertTrue`, and more ([see unittest.TestCase documentation](https://docs.python.org/3/library/unittest.html#unittest.TestCase)) for use instead of the usual assertion statements. These statements allow the test runner to accumulate all test results and generate a test report.
 - `unittest.main()` provides an option to run the tests from a command-line interface and also from a file.
@@ -54,13 +55,14 @@ tests/
 
 ## coverage
 
-- Installing coverage using pip: `pip install coverage`.
-- Testing frameworks can be run via coverage. Lets take our first example and run pytest via coverage:
+- Install coverage using pip: `pip install coverage`.
+- Testing frameworks can be run via coverage. Run pytest via coverage:
 
 ```bash
 coverage run -m pytest
 ```
 
+- The `-m` flag tells coverage to run the `pytest` module and measure test coverage. The flag is not needed when a Python file is run directly.
 - coverage does not generate any output immediately as it would interfere with the test output.
 - Code coverage information is stored in a file `.coverage` in the working directory. 
This information can be viewed using:
@@ -80,19 +82,7 @@ coverage html
 
 - Environment orchestrator to set up and execute various tools for a project.
 - `tox` creates virtual environments to run each tool in.
-- `tox.toml` file:
-
-```toml
-requires = ["tox>=4"]
-env_list = ["testing"]
-
-[env.testing]
-description = "Run pytest"
-deps = ["pytest>=8"]
-commands = [["pytest"]]
-```
-
+- The `tox.toml` file consists of two environments: one to run pytest and one to run unittest.
 - Global settings are defined in the section at the top of the `tox.toml` file.
 - Start tox by running the command `tox` in the directory where the `tox.toml` exists.
-- tox takes more time the first time it is run as it creates the necessary virtual environments. Virtual environment setup can be found in the `.tox` repository.
-- Observe that tox starts a virtual environment, installs the dependency (here `pytest`) and runs `pytest`.
+- First execution of tox is slow because it creates the necessary virtual environments. Virtual environment setups are in the `.tox` directory.

From 308b52fefe090bf0ee35b956fa1ab14ccbd58051 Mon Sep 17 00:00:00 2001
From: Ishaan Desai
Date: Tue, 13 Jan 2026 16:57:49 +0100
Subject: [PATCH 5/6] Refer to Git exercise in the intro to testing

---
 05_testing_and_ci/intro_slides.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/05_testing_and_ci/intro_slides.md b/05_testing_and_ci/intro_slides.md
index 78afffec..42eb800e 100644
--- a/05_testing_and_ci/intro_slides.md
+++ b/05_testing_and_ci/intro_slides.md
@@ -43,6 +43,7 @@ slideOptions:
 - Smelling old milk before using it!
 - A method to ensure that the software produces reliable results.
 - Manual testing vs. automated testing.
+- You already saw tests in the [Git exercise](https://github.com/Simulation-Software-Engineering/git-exercise).
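An automated test of the kind this intro slide refers to can be as small as a single function. `fahrenheit_to_celsius` below is a hypothetical function invented for illustration; it is not part of the lecture code:

```python
# Hypothetical function under test, for illustration only
def fahrenheit_to_celsius(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0


# pytest would collect any function named test_* from a file named test_*.py
def test_fahrenheit_to_celsius():
    # Known reference points serve as the expected results
    assert fahrenheit_to_celsius(32.0) == 0.0
    assert fahrenheit_to_celsius(212.0) == 100.0
```

Saved in a file named `test_*.py`, a test runner such as pytest would discover and run `test_fahrenheit_to_celsius` automatically.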
 ---
 
From 6473626f9cf6130b574578be0aa5e74729062c50 Mon Sep 17 00:00:00 2001
From: Ishaan Desai
Date: Tue, 13 Jan 2026 17:21:39 +0100
Subject: [PATCH 6/6] Cut unnecessary things

---
 05_testing_and_ci/python_testing_slides.md | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/05_testing_and_ci/python_testing_slides.md b/05_testing_and_ci/python_testing_slides.md
index a16f59aa..102eed9e 100644
--- a/05_testing_and_ci/python_testing_slides.md
+++ b/05_testing_and_ci/python_testing_slides.md
@@ -31,7 +31,6 @@ slideOptions:
 
 - Package to write and manage tests.
 - Includes command-line interface called `pytest`.
-- Install using pip: `pip install pytest`.
 - All tests need to be in files named `test_*.py`.
 - Each test function needs to be named as `test_*`.
 - pytest gives a detailed description for failing tests.
@@ -48,7 +47,7 @@ slideOptions:
 - Many features like test automation, sharing of setup and shutdown of tests, etc.
 - Use the base class `unittest.TestCase` to create a test suite.
 - `MagicMock` from `unittest.mock` used for mock testing.
-- Command-line interface: `python -m unittest test_module1 test_module2 ...`.
+- Command-line interface: `python -m unittest`.
 - Part of the Python standard library.
 
 ---
 
@@ -59,7 +58,7 @@ slideOptions:
 
 ## coverage
 
-- Python library to check code coverage. Installation: `pip install coverage`.
+- Python library to check code coverage.
 - Testing frameworks can be run via coverage to generate code coverage data while tests run.
 - Code coverage information can be viewed on the terminal using: `coverage report -m`.
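The tolerance-based floating-point comparison that `pytest.approx` and `assertAlmostEqual` provide can be illustrated with the standard library alone; `math.isclose` stands in for those helpers here, using the same data as the reworked demo:

```python
import math

# Same fixture data as in the reworked demo
data = [43, 32, 167, 18, 1, 209, 17]
mean = sum(data) / len(data)  # 69.5714...

# An exact comparison against the truncated expected value fails ...
assert mean != 69.57

# ... while a comparison up to a relative tolerance succeeds; math.isclose
# plays the role of pytest.approx(69.57, rel=1e-2) or
# assertAlmostEqual(mean, 69.57, 2) from the slides
assert math.isclose(mean, 69.57, rel_tol=1e-2)
```

This is why the demo replaces the exact expected mean with a tolerance-based one: the stored reference value is only correct to two decimal places.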