Pytest notes
Pytest wants to find tests itself.
This is an inversion-of-control thing: no matter how complex the tests get, pytest can discover and run them for us without us having to hook them in explicitly.
What files does pytest pick up to be tested?
- the filenames specified
- if none specified:
- files named like test_*.py or *_test.py in the directory tree under the current dir (verify)
- You can control this discovery via a few ini options (sketch below), see e.g. https://docs.pytest.org/en/6.2.x/example/pythoncollection.html
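For example, a sketch of a pytest.ini changing the collection patterns (the option names are the documented ones; the values here are just illustrative):
[pytest]
testpaths = tests
python_files = test_*.py check_*.py
python_classes = Test* Check*
python_functions = test_*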
How does pytest decide what code to run as tests?
- functions prefixed test at module scope (verify)
- classes prefixed Test, and then functions prefixed test inside them
- ...but only if that class does not have its own __init__ constructor (pytest will warn and skip it otherwise).
- These classes are not meant to carry state - pytest actually creates a fresh instance of the class for each test method -
- they are mostly there to group functions (and pollute a namespace less).
- classes subclassed from unittest.TestCase (see unittest)
- by marker, e.g. pytest -m slow picks up things decorated with @pytest.mark.slow
- useful to define groups of tests, and run specific subsets
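A sketch of that (the marker name 'slow' and the test are made up):
import pytest

@pytest.mark.slow
def test_full_reindex():
    ...

# run only these:        pytest -m slow
# run everything else:   pytest -m "not slow"
(Recent pytest versions warn about unknown markers unless you register them, e.g. via the markers option in pytest.ini.)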
What does pytest actually consider success/failure?
Roughly: each test function counts as a
- success:
- if it returns normally, AND
- if all asserts it contains (if any) pass
- failure: on the first failing assert
- failure: on the first uncaught exception
There are "assert for me" functions, including:
- unittest.assertEqual(a, b)
- unittest.assertNotEqual(a, b)
- unittest.assertTrue(x)
- unittest.assertFalse(x)
- unittest.assertIs(a, b)
- unittest.assertIsNot(a, b)
- unittest.assertIsNone(x)
- unittest.assertIsNotNone(x)
- unittest.assertIn(a, b)
- unittest.assertNotIn(a, b)
- unittest.assertIsInstance(a, b)
- unittest.assertNotIsInstance(a, b)
...but many of those are shorter to write in your own assert
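For example, a sketch with made-up values, pairing each plain assert with the unittest method it replaces:
def test_plain_asserts():
    result = {'name': 'example'}
    assert result == {'name': 'example'}     # ~ assertEqual
    assert result is not None                # ~ assertIsNotNone
    assert 'name' in result                  # ~ assertIn
    assert isinstance(result, dict)          # ~ assertIsInstance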
How do I test that something should raise an exception (or warning)?
There are a few alternatives, varying a little with whether you're testing that it should raise or that it shouldn't.
The context manager form is a brief and fairly flexible way to check for a specific exception type and specific error text:
with pytest.raises(ValueError, match=r'.*found after.*'):
    ...  # code that whines about some value parsing
You could also catch the specific exception as you normally would, and then use pytest.fail() to signal failure with a more useful message:
try:
    calculate_average( data )
except ZeroDivisionError:
    pytest.fail("a division by zero in this code suggests preprocessing removed all cases")
If you were testing that it does raise:
try:
    calculate_average( data )
    pytest.fail("test should have borked out with a division by zero")
except ZeroDivisionError:
    pass  # this is what we want
...but consider the context manager form mentioned above; its intent is often clearer.
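A complete, minimal test using that form (the ValueError from int() is standard library behaviour; the test name is made up):
import pytest

def test_int_rejects_garbage():
    with pytest.raises(ValueError, match=r'invalid literal'):
        int('not a number')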
For warnings, pytest.warns can be used as a context manager that works much the same as the exception one above
- and the context manager variant is probably the easiest here
- note: warnings.warn() by default emits a UserWarning - see https://docs.python.org/3/library/warnings.html#warning-categories
with pytest.warns(UserWarning, match=r'.*deprecated.*'):
    ...  # code expected to warnings.warn() something matching that
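As a complete sketch (old_average is a made-up function standing in for your own deprecated code):
import warnings
import pytest

def old_average(values):
    # hypothetical function: still works, but complains
    warnings.warn("old_average() is deprecated, use statistics.mean()", DeprecationWarning)
    return sum(values) / len(values)

def test_old_average_still_works_but_warns():
    with pytest.warns(DeprecationWarning, match=r'deprecated'):
        assert old_average([1, 2, 3]) == 2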
Showing details
For example, asserts on simple comparisons will lead pytest to show the values that didn't compare the way you wanted:
- comparing long strings: a context diff is shown
- comparing long sequences: first failing index is shown
- comparing dicts: different entries are shown
This can be customized, which is sometimes worth it to get more useful output from pytest runs.
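The usual way to customize that is the pytest_assertrepr_compare hook in a conftest.py. A sketch, where Point is a made-up class standing in for your own types (in a real project you would import it rather than define it here):
# conftest.py
from dataclasses import dataclass

@dataclass
class Point:
    x: int
    y: int

def pytest_assertrepr_compare(config, op, left, right):
    # return a list of lines for pytest to print instead of its default comparison output;
    # returning None (the implicit fall-through) keeps the default behaviour
    if isinstance(left, Point) and isinstance(right, Point) and op == "==":
        return [
            "comparing Point instances:",
            f"   left:  ({left.x}, {left.y})",
            f"   right: ({right.x}, {right.y})",
        ]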
On fixtures/mocking
See also Benchmarking,_performance_testing,_load_testing,_stress_testing,_etc.#Mocking.2C_monkey_patching.2C_fixtures
Fixtures create reusable state/helpers for tests, and are great when multiple tests use the same data/objects.
In pytest, they are functions that run before your test function, and whose result is handed to it as an argument.
pytest has a few different things you could call fixtures.
Some given fixtures
If a test function takes a parameter with one of a handful of specific names, you get some extra behaviour when pytest runs that test. Consider:
def test_download_to_file( tmp_path ):
    tofile_path = tmp_path / "testfile"   # this syntax works because tmp_path is a pathlib.Path object
    download('https://www.example.com', tofile_path=tofile_path)
    assert os.path.exists( tofile_path )
tmp_path means "we create a directory for you, hand it in for you to use, and clean it up afterwards", which is a great helper you would otherwise have to write yourself (and test in itself).
For some other given fixtures, see e.g.
- https://docs.pytest.org/en/6.2.x/fixture.html
- https://levelup.gitconnected.com/a-comprehensive-guide-to-pytest-3676f05df5a0
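Two more of those built-in fixtures, as a quick sketch (capsys captures stdout/stderr, monkeypatch undoes its changes after the test; APP_MODE is a made-up variable name):
import os

def test_greeting_output(capsys):
    print("hello")
    captured = capsys.readouterr()
    assert captured.out == "hello\n"

def test_reads_env(monkeypatch):
    monkeypatch.setenv("APP_MODE", "test")
    assert os.environ["APP_MODE"] == "test"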
There is also the @pytest.fixture decorator, which marks a function as a fixture. To steal an example from [1], consider:
import pytest

@pytest.fixture
def hello():
    return 'hello'

@pytest.fixture
def world():
    return 'world'

def test_hello(hello, world):
    assert "hello world" == hello + ' ' + world
To keep this first example short, these fixtures only remember some values for us and hand them into the test function, which is fairly pointless by itself.
In the real world this is probably most useful for setup and teardown.
Consider an example from [2]:
@pytest.fixture
def app_without_notes():
    app = NotesApp()
    return app

@pytest.fixture
def app_with_notes():
    app = NotesApp()
    app.add_note("Test note 1")
    app.add_note("Test note 2")
    return app
...which comes from a basic "soooo I spend the first lines of every test just instantiating my application, can't I move that out?"
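A test then just asks for a fixture by parameter name. A sketch, assuming the NotesApp from that example keeps its notes in a notes_list attribute:
def test_starts_empty(app_without_notes):
    assert len(app_without_notes.notes_list) == 0

def test_prefilled_notes(app_with_notes):
    assert len(app_with_notes.notes_list) == 2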
Teardown is typically done by making the fixture a generator: yield the value to hand it to the test, then clean up after the yield (a little creative syntax-wise, but it lets pytest do most of the work for you):
@pytest.fixture
def app_with_notes(app):        # builds on an 'app' fixture, e.g. one that just returns NotesApp()
    app.add_note("Test note 1")
    app.add_note("Test note 2")
    yield app                   # state handed to the test
    app.notes_list = []         # runs after the test: clean up the test's data
See also https://docs.pytest.org/en/7.1.x/how-to/fixtures.html
On coverage
To do coverage checking at all (this comes from the pytest-cov plugin), add something like
--cov=dirname/
By itself this mostly gets you a terminal summary.
To see which lines aren't covered, read https://pytest-cov.readthedocs.io/en/latest/reporting.html
If you want it to generate a browsable set of HTML pages, try:
--cov-report html:coverage-report
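A complete invocation might then look something like (mypackage and tests/ are placeholders; term-missing adds uncovered line numbers to the terminal summary):
pytest --cov=mypackage --cov-report=term-missing --cov-report=html:coverage-report tests/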
See also https://docs.pytest.org/en/latest/how-to/usage.html
On parallel tests
Tests take a while, so it would be nice if you could run them in parallel.
They should be isolated things anyway, right? Right?
It's not a standard feature, presumably so that you don't blame pytest
for bad decisions around threading and nondeterminism, whether your own or a library's (consider e.g. that selenium isn't thread-safe).
That said, there are plugins:
- pytest-xdist
- pytest-parallel
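For example, with pytest-xdist installed, its -n option distributes tests over worker processes:
pytest -n auto    # 'auto' means one worker per CPU core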