Pytest notes
Pytest wants to find tests itself.
This is an inversion of control thing, so no matter how complex tests get, pytest can do everything for us without us having to hook it in specifically.
What files does pytest pick up to be tested?
- the filenames specified
- if none specified:
- filenames named like test_*.py or *_test.py anywhere in the directory tree under the current dir (verify)
- You can control this discovery, see e.g. https://docs.pytest.org/en/6.2.x/example/pythoncollection.html
How does pytest decide what code to run as tests?
- functions prefixed test at module scope (verify)
- classes prefixed Test, and then functions prefixed test inside them
- ...but only if that class does not have a constructor (__init__).
- These classes are not intended to be classes with state (and it does not actually instantiate the class(verify))
- just to collect functions (and potentially pollute a namespace less).
- classes subclassed from unittest.TestCase (see unittest)
- by marker, e.g. decorating with @pytest.mark.slow will be picked up by pytest -m slow
- useful to define groups of tests, and run specific subsets only when you want them to
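As a sketch of those naming rules (the function bodies here are just illustrative):

```python
# contents of a file pytest would discover, e.g. test_example.py
import pytest

def test_addition():               # module-scope function prefixed test: collected
    assert 1 + 1 == 2

class TestStrings:                 # class prefixed Test, no __init__: collected
    def test_upper(self):
        assert 'abc'.upper() == 'ABC'

@pytest.mark.slow                  # selected with: pytest -m slow
def test_something_slow():
    assert sorted([3, 1, 2]) == [1, 2, 3]
```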
What does pytest actually consider success/failure?
Roughly: each test function counts as a
- success:
- if it returns, AND
- if all asserts contained (if any) are successful
- failure: on the first failing assert
- failure: on the first exception
If you subclass unittest.TestCase, there are "assert for me" methods, called on the test instance, including:
- self.assertEqual(a, b)
- self.assertNotEqual(a, b)
- self.assertTrue(x)
- self.assertFalse(x)
- self.assertIs(a, b)
- self.assertIsNot(a, b)
- self.assertIsNone(x)
- self.assertIsNotNone(x)
- self.assertIn(a, b)
- self.assertNotIn(a, b)
- self.assertIsInstance(a, b)
- self.assertNotIsInstance(a, b)
...but many of those are shorter to write as a plain assert, which pytest introspects just as well
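For comparison, plain-assert equivalents of a few of those:

```python
def test_plain_asserts():
    a = 'hello'
    b = 'hello'
    assert a == b                 # ~ assertEqual(a, b)
    assert a is not None          # ~ assertIsNotNone(a)
    assert 'ell' in a             # ~ assertIn('ell', a)
    assert isinstance(a, str)     # ~ assertIsInstance(a, str)
```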
How do I test that something should throw an exception(/warning)?
There are a few alternatives, varying a little with whether you're testing that code should raise an error or that it shouldn't.
The context manager form seems a brief-and-more-flexible way to filter for a specific error type and specific error text:
with pytest.raises(ValueError, match=r'.*found after.*'):
    ...  # code that whines about some value parsing
You could also catch the specific error as you normally would; if you want to add a more useful message, you can then use pytest.fail() to signal the failure.
try:
    calculate_average( data )
except ZeroDivisionError:
    pytest.fail("a division by zero in this code suggests preprocessing removed all cases")
If you were testing that it does raise:
try:
    calculate_average( data )
    pytest.fail("test should have borked out with a division by zero")
except ZeroDivisionError:
    pass # this is what we want
...but consider the context manager form mentioned above; its signaled intent is often clearer.
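A self-contained sketch of that context manager form (parse_positive is an invented example function; the error text is what match= filters on):

```python
import pytest

def parse_positive(s):
    value = int(s)
    if value <= 0:
        raise ValueError('non-positive value found after parsing: %r' % value)
    return value

def test_rejects_nonpositive():
    # passes only if the block raises ValueError with a matching message
    with pytest.raises(ValueError, match=r'found after'):
        parse_positive('-3')
```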
For warnings, pytest.warns can be used as a context manager that works much the same as that exception one above
- and the context object variant is probably easiest here
- note: warnings.warn() by default emits a UserWarning - see https://docs.python.org/3/library/warnings.html#warning-categories
import warnings
with pytest.warns(UserWarning, match=r'.*deprecated.*'):
    warnings.warn('this function is deprecated')   # warn() defaults to UserWarning
How do I fake that something isn't installed
That is, if you wrote both
- "test use of module A" and
- "if module A isn't installed, test the fallback code"
code, it makes sense to test for both.
Knowing that import looks at sys.modules first,
and won't re-import anything already there (it only binds the referenced names), you might think to remove the module from sys.modules.
Good thought, but not enough: whether via a direct import or importing code in someone else's module, a lot of things will just lead to trying to load it from disk again, and there's no particular reason that that import would fail.
So arguably the most correct way is to temporarily monkey patch python's importing, to fail selectively, before any of that happens, e.g.:
import builtins
import pytest

def test_fallback(monkeypatch):
    " Pretend tqdm isn't installed, see if it falls back "
    ## monkey patch importing to pretend something cannot be imported
    ## (works even if it is already in sys.modules, because the import
    ##  statement calls __import__ before that cache is consulted)
    real_import = builtins.__import__
    def filtering_import(name, *args, **kwargs):
        if name in ('tqdm',):
            raise ModuleNotFoundError(name)
        return real_import(name, *args, **kwargs)
    monkeypatch.setattr(builtins, "__import__", filtering_import)
    with pytest.raises(ImportError):
        import tqdm
    # and e.g. test code that falls back to something else
Note that this uses pytest's own monkeypatch, preferred in tests because it's reversible.
How do I fake that something is installed
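One hedged sketch of the inverse trick (the module and attribute names here are just illustrative): build a stub module object and inject it into sys.modules via monkeypatch.setitem, so that import finds it immediately:

```python
import sys
import types

def test_with_fake_tqdm(monkeypatch):
    fake = types.ModuleType('tqdm')
    fake.tqdm = lambda iterable, **kw: iterable   # stub: no progress bar, just pass through
    monkeypatch.setitem(sys.modules, 'tqdm', fake)
    import tqdm                                   # binds our fake from sys.modules
    assert list(tqdm.tqdm(range(3))) == [0, 1, 2]
```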
Showing details
For example, on failing asserts over simple comparisons, pytest will show the values that didn't compare as you wanted:
- comparing long strings: a context diff is shown
- comparing long sequences: first failing index is shown
- comparing dicts: different entries are shown
This can be customized, which is sometimes worth it to get more useful output from pytest runs.
On fixtures/mocking
As a concept, fixtures create reusable state / helper code for tests, and are great if you use the same data/objects.
In pytest,
- they are code that wraps your function,
- and can be hooked in just by mentioning some specific argument names.
pytest has a few different things you could call fixtures.
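A minimal sketch of defining your own fixture (the data here is invented); mentioning the fixture's name as a test argument is what hooks it in:

```python
import pytest

def make_sample_records():    # plain helper, so the data is also buildable outside pytest
    return [{'id': 1, 'name': 'a'}, {'id': 2, 'name': 'b'}]

@pytest.fixture
def sample_records():
    return make_sample_records()

def test_ids_unique(sample_records):   # pytest sees the argument name, runs the fixture
    ids = [rec['id'] for rec in sample_records]
    assert len(ids) == len(set(ids))
```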
tmp_path
Give a test function a specifically named parameter, and you get some extra behaviour when pytest runs that test. Consider:
import os

def test_download_to_file( tmp_path ):
    tofile_path = tmp_path / "testfile"   # this syntax works because tmp_path is a pathlib.Path object
    download('https://www.example.com', tofile_path=tofile_path)
    assert os.path.exists( tofile_path )
tmp_path means "create a directory for you, hand it in for you to use, and we will clean it up afterwards", which is a great helper you would otherwise have to write yourself (and have to test in itself).
For some other given fixtures, see e.g.
- https://docs.pytest.org/en/6.2.x/fixture.html
- https://levelup.gitconnected.com/a-comprehensive-guide-to-pytest-3676f05df5a0
monkeypatch
As a concept, monkey patching is about editing behaviour live.
pytest has a monkeypatch fixture that lets you
- override object behaviour for a specific test function, by taking an argument called monkeypatch in that test function
- override object behaviour within a code block (with monkeypatch.context())
This fixture seems to aim for changes that can be reliably undone,
which is why it only lets you override certain things,
and even then there are footnotes.
You mainly get
- monkeypatch.setenv, monkeypatch.delenv
- e.g. monkeypatch.setenv('USER', 'TestUser')
- object attribute: monkeypatch.setattr, monkeypatch.delattr
- e.g. monkeypatch.delattr("requests.sessions.Session.request")
- dictionary item: monkeypatch.setitem, monkeypatch.delitem
- monkeypatch.syspath_prepend: prepend a path to sys.path, the module search path
- monkeypatch.chdir: change the current working directory for this test
Notes:
- There are certain things that pytest itself also uses, so patching them would break pytest, even if just for that one test
- so don't try to monkey-patch the standard library wholesale, pytest itself, open, compile, and such.
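A short sketch of setenv and setattr in use (the patched target here is just an example):

```python
import os

def test_env(monkeypatch):
    monkeypatch.setenv('USER', 'TestUser')
    assert os.environ['USER'] == 'TestUser'

def test_everything_exists(monkeypatch):
    # pretend every path exists, e.g. to exercise a code path without real files
    monkeypatch.setattr(os.path, 'exists', lambda p: True)
    assert os.path.exists('/no/such/path')
```

The changes are undone automatically when the test function finishes, which is the main reason to prefer this over patching by hand.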
See also:
fixture
On coverage
To do coverage checking at all (this requires the pytest-cov plugin), add
--cov=dirname/
This mostly gets you a summary.
To see which lines aren't covered, read https://pytest-cov.readthedocs.io/en/latest/reporting.html
If you want it to generate a browsable set of HTML pages, try:
--cov-report html:coverage-report
See also: https://docs.pytest.org/en/latest/how-to/usage.html