Integrate pytest-subtests #13738
Conversation
I agree that this feature should be in pytest core because it's a feature in unittest, and pytest should aim to be a drop-in replacement for unittest (plus the original issue has 40 👍 and no 👎 at the time of writing; overwhelmingly popular support in my book).
I recall we need to fix marking the owning test case as failed if one subtest fails.
But yeah, I want to see this in.
Sounds reasonable.
Ready for an initial review, folks.
Nice to see this happening.
I ran out of time for the review today, so I didn't really get to the implementation parts, but I already have some comments, so I'm submitting a partial review.
---------------------------

While :ref:`traditional pytest parametrization <parametrize>` and ``subtests`` are similar, they have important differences and use cases.
Continuing from the comment above, I can see two ways we can approach this:
1 - Subtests are for "runtime parametrization"
Per the comment above, subtests are useful when you have some data dynamically fetched in the test and want individualized reporting for each data value. The idea here is that parametrization should be the go-to tool, but we offer this subtest tool for this particular scenario (see the sketch below).
2 - Subtests are for "sub-testing"
By which I mean, subtests are for when you have one conceptual test, i.e. you consider it a complete whole, but just want to break down its reporting into parts.
How do you see it? The reason I'm asking is that it can affect how we document the feature, what we recommend, etc.
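For illustration, a minimal, runnable sketch of case 1, where the data only exists at runtime (the JSON payload and helper name are made up for the example):

import json


def load_records():
    # stand-in for data fetched at runtime (e.g. from a file or a service);
    # with parametrize this data would have to exist at collection time
    return json.loads('[{"id": 1, "value": 3}, {"id": 2, "value": 0}]')


def test_records(subtests):
    for record in load_records():
        # each record gets its own individually reported result
        with subtests.test(record_id=record["id"]):
            assert record["value"] >= 0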
I think of subtests especially for the first case, but the 2nd case is also useful. Say you test 3 different objects in the test and would like to see the test results for all the objects, even if the very first one fails. A similar use case exists for pytest-check.
How do you suggest we approach those cases here?
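A minimal sketch of that "several objects" scenario (the objects here are arbitrary placeholders):

def test_all_empty(subtests):
    # one conceptual test over three objects; a failure on the first
    # object does not prevent the other two from being checked
    for obj in ("", [], {}):
        with subtests.test(obj_type=type(obj).__name__):
            assert len(obj) == 0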
pytest-check is deferred assertions with a worse API.
It would be neat to have subtests.deferred_assertions() and allow people to use the assert statement there.
As for the real case of action grouping: when doing something like a larger acceptance test (as for example done in the MoinMoin wiki), the goal is to report sections.
A way for a section failure to directly bubble through to fail the full test would be helpful.
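To make the idea concrete, a sketch of what that hypothetical API could look like - subtests.deferred_assertions() does not exist in this PR or in pytest-subtests; the name is only the proposal from the comment above:

def test_deferred(subtests):
    # hypothetical: every failing assert inside the block would be recorded
    # and reported at the end, instead of the first failure aborting the test
    with subtests.deferred_assertions():
        assert 1 + 1 == 2
        assert "a".upper() == "A"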
I actually didn't think of the "want to see all failures" use case. So it would be nice to document :)
I think my preferred approach to documenting subtests is not as an alternative to parametrization, but as useful for a bunch of use cases which the reader can read and add to their "tool set". That's just my idea.
In addition, enable the plugin in `pytest/__init__.py` and `config/__init__.py`.
We might want to bikeshed an API around sub-sections, subtests and parameter loops a little - not necessarily for doing right now, but for setting up a roadmap.
I was hoping to get this feature into 9.0, if possible.
We absolutely want this in 9.0. We shouldn't change the API, to ensure compat with existing users. We should try to ensure lastfailed reruns tests with failed subtests before releasing 9.0.
I will add a test, good call.
Unfortunately I ran out of time to do a full review again, but I left some comments.
I also noticed two things while I was testing it:
If some or all subtests fail, the parent test is still reported as PASSED - is this intentional?
For some reason the errors are shown under "Captured log call", which seems wrong as there are no log calls in my test.
Regarding the name SUBPASS etc.: I saw that a regular pass is PASSED, so I wonder whether it shouldn't be SUBPASSED etc.? (I can check the code later.)
.. code-block:: python

    def test(subtests):
        for i in range(5):
range(5) is better as a regular parametrized test; I wonder if we can use an example where we would actually recommend using subtests.
Here is the best I could come up with after thinking a bit; you could probably come up with something better:

import pytest
from pathlib import Path


def test_py_paths_are_files(subtests: pytest.Subtests) -> None:
    for path in Path.cwd().glob('*.py'):
        with subtests.test(path=str(path)):
            assert path.is_file()
            with subtests.test(msg="custom message", i=i):
                assert i % 2 == 0

Each assertion failure or error is caught by the context manager and reported individually:
I think it would also be good to explain the reporting a bit; it took me a while to understand:
- the "," report char
- that the subtests are reported first and the "top-level" test is reported at the end on its own
- that failures are shown as SUBFAIL
from typing_extensions import Self


def pytest_addoption(parser: Parser) -> None:
- Can you explain the rationale for these options -- did someone ask for them in pytest-subtests?
- I admit the difference between the options is a bit obscure; I wonder if they can't be folded into a single option?
- I wonder if this shouldn't be an ini option rather than a CLI option (see the sketch below)? I imagine someone would want to set this if it's too noisy, in which case it's more of a project setting. But maybe it's more of a user-preference thing, in which case it's a CLI flag...
- Maybe it should be configurable at the test level, using the fixture itself? (Probably not.)
- Should we document the options in the tutorial?
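For comparison, registering this as an ini option would look roughly like this (the option name and help text are placeholders, not what the PR uses):

import pytest


def pytest_addoption(parser: pytest.Parser) -> None:
    # project-level setting, read back via config.getini("subtests_report_passed")
    parser.addini(
        "subtests_report_passed",  # placeholder name
        help="Whether to report passed subtests individually.",
        type="bool",
        default=False,
    )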
def test(
    self,
    msg: str | None = None,
In all of the examples we use subtest.test(msg="...") with msg as a named parameter. Do we want to enforce it, or allow subtest.test("...")?
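If we did want to enforce it, a sketch of a keyword-only signature (not the signature currently in the PR):

def test(self, *, msg: str | None = None, **kwargs: object):
    # msg can now only be passed as subtests.test(msg="..."),
    # while subtests.test("...") raises a TypeError
    ...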
We'd have to deprecate.
It's supported in subtests as of now.
parts.append(f"[{self.context.msg}]")
if self.context.kwargs:
    params_desc = ", ".join(
        f"{k}={v!r}" for (k, v) in sorted(self.context.kwargs.items())
Why sorted? I reckon it would be good to keep the user-provided order.
This might have been necessary in the far past (keyword argument order is only guaranteed since Python 3.6, per PEP 468).
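Concretely, the suggestion would presumably just drop the sorted() call, keeping the user-provided order:

params_desc = ", ".join(f"{k}={v!r}" for k, v in self.context.kwargs.items())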
return Subtests(request.node.ihook, suspend_capture_ctx, request, _ispytest=True)


# Note: cannot use a dataclass here because Sphinx insists on showing up the __init__ method in the documentation,
TBH I don't think this should be a dataclass anyway; it's not really "data". So I'd just remove the comment.
Context manager for subtests, capturing exceptions raised inside the subtest scope and handling
them through the pytest machinery.

Note: initially this logic was implemented directly in Subtests.test() as a @contextmanager, however
This should be a regular comment, not a docstring comment.
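I.e. something like this, keeping the user-facing description in the docstring and the historical note as a plain comment (class name shortened here for illustration):

class _SubtestContextManager:
    """Context manager for subtests, capturing exceptions raised inside the
    subtest scope and handling them through the pytest machinery."""

    # Note: initially this logic was implemented directly in Subtests.test()
    # as a @contextmanager, however ...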
sub_report = SubtestReport._from_test_report(report)
sub_report.context = SubtestContext(msg=self.msg, kwargs=self.kwargs.copy())
Maybe pass context as a parameter to _from_test_report and do the assignment internally, to keep it encapsulated?
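I.e. the call site would become something like this (a sketch of the proposed signature, not current code):

sub_report = SubtestReport._from_test_report(
    report,
    context=SubtestContext(msg=self.msg, kwargs=self.kwargs.copy()),
)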
return True


def make_call_info(
I'd inline this function; it doesn't seem very useful.
This PR copies the files from pytest-subtests and performs minimal integration. I'm opening this to gauge whether everyone is on board with integrating this feature into the core.

Why?

Pros
- subtests is a standard unittest feature, so it makes sense for pytest to support it as well.

Cons

TODO ✅
If everyone is on board, I will take the time this week to polish it and get it ready to merge ASAP:

Related