Parametrized tests - Pytest
introduction
I was working on a personal project where I used parametrized test functions and I decided to make a small blog post about it because, why not?
To begin with, testing is an essential part of building systems: it ensures that a given system behaves as expected when subjected to different conditions. As much as possible, every system you build, no matter how small, should have some tests to verify its behaviour. In this post we will be exploring parametrized testing using Pytest.
pytest
Pytest is a mature full-featured Python testing tool that helps you write better programs.1 It is not restricted to a specific framework and hence can be used with almost all - if not all - Python frameworks, including vanilla Python programs. Several configuration file formats allow you to configure the behaviour of pytest, including pytest.ini, pyproject.toml, tox.ini and setup.cfg, among others. Pytest is a third-party package, requires Python 3.7+ or PyPy3, and can be installed with the command below:
pip install pytest
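For example, if your project keeps its tooling configuration in pyproject.toml rather than pytest.ini, the same settings we will define later in this post can live under the [tool.pytest.ini_options] table; a minimal sketch:
# pyproject.toml
[tool.pytest.ini_options]
python_files = ["conftest.py", "tests.py", "test_*.py", "*_tests.py", "*_test.py"]
filterwarnings = ["ignore::DeprecationWarning"]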
Now, down to the specific aspect of Pytest we will be discussing: parametrized tests. Pytest provides a wide range of testing features, but here we will focus on the parametrized approach. In my opinion, parametrized tests follow the Given-When-Then style of representing tests, which comes from Behaviour-Driven Development (BDD).
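Before diving into a real service, here is a tiny, standalone illustration of the idea (a toy test, not part of the project below): each tuple supplies the Given (input) and the Then (expected outcome), and the test body is the When.
# a minimal parametrized test, unrelated to the service below
import pytest


@pytest.mark.parametrize("given, expected", [
    (2, 4),    # Given 2, When we square it, Then we expect 4
    (-3, 9),   # Given -3, When we square it, Then we expect 9
])
def test_square(given, expected):
    assert given ** 2 == expected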
a fastAPI service
Assuming we have a FastAPI service with a function to retrieve a user object by id as below:
# app/main.py
import threading
import time

from fastapi import FastAPI, HTTPException, status
from fastapi.responses import JSONResponse

app = FastAPI(title="User Service")

toggle_on = False
lock = threading.Lock()


@app.get("/users/{id}/")
async def retrieve_user_by_id(id: str):
    """
    Returns a user by id.
    """
    global toggle_on
    if not id:
        return JSONResponse(
            content={"message": "Please provide a user id"}
        )
    if id == "7c11e1ce2741":
        time.sleep(0.3)
        return JSONResponse(
            status_code=status.HTTP_200_OK,
            content={
                "id": "7c11e1ce2741",
                "first_name": "John",
                "last_name": "Doe"
            }
        )
    elif id == "e6f24d7d1c7e":
        # simulate an intermittent failure: every other request flips toggle_on
        time.sleep(0.3)
        with lock:
            toggle_on = not toggle_on
        if toggle_on:
            return JSONResponse(
                status_code=status.HTTP_200_OK,
                content={
                    "id": "e6f24d7d1c7e",
                    "first_name": "Jane",
                    "last_name": "Doe"
                }
            )
        else:
            raise HTTPException(
                status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
                detail="Internal Server Error"
            )
    else:
        raise HTTPException(
            status_code=status.HTTP_404_NOT_FOUND, detail="User not found"
        )
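If you want to poke at the endpoint manually before writing any tests, the service can be served with uvicorn (installed separately), assuming the module lives at app/main.py:
uvicorn app.main:app --reload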
testing approach
Although this function appears simple, we need to write tests for all the different scenarios to ensure it behaves as expected. To achieve this, we will use the parametrized testing approach in Pytest. First of all, we will create our pytest config file to define how the pytest command should discover and run our tests.
; pytest.ini
[pytest]
python_files = conftest.py tests.py test_*.py *_tests.py *_test.py
filterwarnings =
    ignore::DeprecationWarning
In the config file, the [pytest] header indicates that the settings that follow apply to Pytest. The python_files option specifies the naming patterns of files that Pytest should collect as test files, and the filterwarnings option instructs Pytest to ignore the listed warning categories (here, DeprecationWarning).
We will start by defining a pytest fixture, client, to enable us to use the FastAPI test client in our tests.
# conftest.py
import pytest
from fastapi.testclient import TestClient

from app.main import app


@pytest.fixture()
def client():
    # yield a test client bound to the FastAPI app
    yield TestClient(app)
Now that we have the test client in place, let’s define the test cases.
# tests/main_test.py
import pytest

test_data = [
    (
        "7c11e1ce2741",
        200,
        {"first_name": "John", "id": "7c11e1ce2741", "last_name": "Doe"},
    ),
    ("unknownID", 404, {"detail": "User not found"}),
]


@pytest.mark.parametrize("id, status_code, expected_res_data", test_data)
@pytest.mark.asyncio  # requires the pytest-asyncio plugin
async def test_retrieve_user_by_id(client, id, status_code, expected_res_data):
    res = client.get(f"/users/{id}/")
    assert res.status_code == status_code
    assert res.json() == expected_res_data
Because the test above is annotated as a parametrized test, we can define variables that represent the input and expected output for each scenario. For example:
Given the first scenario, When I retrieve by user id 7c11e1ce2741, Then the expected status code should be 200 and the response data should be {"first_name": "John", "id": "7c11e1ce2741", "last_name": "Doe"}.
Hence, for a given test, you can define a set of expected behaviours with a list (test_data) that contains the inputs and their corresponding expected outputs.2
The test function accepts four parameters: the client fixture plus the three values (id, status_code, expected_res_data) unpacked from each entry in test_data. A GET request is sent to the user service endpoint using the provided user ID, and assertions then verify that, for each ID, the response returned by the API matches the expected results specified in the test data list. When the test function above is run, two test cases are executed as expected, with output as shown below:
tests/main_test.py::test_retrieve_user_by_id[7c11e1ce2741-200-res_data0] PASSED [ 50%]
tests/main_test.py::test_retrieve_user_by_id[unknownID-404-res_data1] PASSED [100%]
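Pytest derives the bracketed test IDs shown above from the parameter values. If you would like more descriptive names, parametrize also accepts an ids argument; the same test, with explicit IDs, could look like this (the ID strings are just illustrative):
# optional: human-readable test IDs instead of the generated ones
@pytest.mark.parametrize(
    "id, status_code, expected_res_data",
    test_data,
    ids=["existing-user", "unknown-user"],
)
@pytest.mark.asyncio
async def test_retrieve_user_by_id(client, id, status_code, expected_res_data):
    res = client.get(f"/users/{id}/")
    assert res.status_code == status_code
    assert res.json() == expected_res_data
With that in place, the report would list the cases as test_retrieve_user_by_id[existing-user] and test_retrieve_user_by_id[unknown-user].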
I simulated a toggle effect for user id e6f24d7d1c7e to return a 200 status code if toggle_on is True and a 500 if not. This can also be tested as shown below:
# conftest.py
...


@pytest.fixture()
def toggle():
    # a workaround to alternate between True/False on successive uses
    if not hasattr(toggle, "state"):
        toggle.state = False
    toggle.state = not toggle.state
    yield toggle.state
# tests/main_test.py
...


@pytest.mark.parametrize("toggle, status, response", [
    (
        True,
        200,
        {"id": "e6f24d7d1c7e", "first_name": "Jane", "last_name": "Doe"}
    ),
    (False, 500, {"detail": "Internal Server Error"}),
])
@pytest.mark.asyncio
async def test_retrieve_user_by_id_toggle(
    client, toggle, status, response
):
    res = client.get("/users/e6f24d7d1c7e/")
    assert res.status_code == status
    assert res.json() == response
# output
tests/main_test.py::test_retrieve_user_by_id_toggle[True-200-expected_response0] PASSED [ 50%]
tests/main_test.py::test_retrieve_user_by_id_toggle[False-500-expected_response1] PASSED [100%]
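Note that the toggle test above only passes because its two cases run in order: each request flips the server-side toggle_on flag, so the first case lands on True and the second on False. If you would rather make each case independent of ordering, one option is to pin the flag explicitly with pytest's built-in monkeypatch fixture. A minimal sketch, assuming the module layout used in this post (app/main.py); the test name is just illustrative:
# tests/main_test.py (alternative) - pin the server-side flag per case
import pytest

from app import main


@pytest.mark.parametrize("toggle, status, response", [
    (True, 200, {"id": "e6f24d7d1c7e", "first_name": "Jane", "last_name": "Doe"}),
    (False, 500, {"detail": "Internal Server Error"}),
])
def test_retrieve_user_by_id_toggle_pinned(client, monkeypatch, toggle, status, response):
    # the endpoint flips toggle_on before checking it, so start from the opposite value
    monkeypatch.setattr(main, "toggle_on", not toggle)
    res = client.get("/users/e6f24d7d1c7e/")
    assert res.status_code == status
    assert res.json() == response
monkeypatch undoes the change after each test, so the global flag is left as it was found.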
conclusion
Instead of writing separate test functions for each scenario, we can condense them into a couple of test functions by passing the inputs and expected outcomes as a list. This approach reduces the number of test functions and, consequently, the time taken to write tests. Of course, there are situations where parametrized tests might not be ideal, but if it works for your use case, try it out!
-
Pytest’s official documentation describes it as a mature full-featured Python testing tool that helps you write better programs. ↩︎
-
The tests can be run from the terminal with pytest <file_name.py>. However, having the config file allows you to run just pytest, and it will automatically collect all test files that match the python_files patterns defined in pytest.ini. Adding the -vv flag runs it in verbose mode. ↩︎