
Conversation

nicoddemus
Member

@nicoddemus nicoddemus commented Sep 20, 2025

This PR copies the files from pytest-subtests and performs minimal integration.
I'm opening this to gauge whether everyone is on board with integrating this feature into the core.

Why?

Pros

  • subtests is a standard unittest feature, so it makes sense for pytest to support it as well.
  • Provides a simple alternative to parametrization.
  • Adds the ability to generate new test cases at runtime during the test execution, which is not possible with parametrization.
  • While it can exist as an external plugin, it requires many hacks, and better report integration is not easily achievable without core integration (off the top of my head: issues with terminal reporting, last failed and stepwise support, among others).

Cons

  • Adds another maintenance burden to the core.

TODO ✅

If everyone is on board, I will take the time this week to polish it and get it ready to merge ASAP:

  • Clean up the implementation: it currently relies on monkey-patching, which should no longer be needed since we can modify the core code directly.
  • Documentation: The feature should be documented as experimental -- meaning that while the feature itself is solid, we are still working out details such as how to best report subtest failures, integration with other plugins, etc.
  • Fix lint and testing failures (obviously).

Related

@nicoddemus nicoddemus changed the title [DRAFT] Integrate pytest subtests [DRAFT] Integrate pytest-subtests Sep 20, 2025
@Pierre-Sassoulas
Member

I agree that this feature should be in pytest core because it's a feature in unittest, and pytest should aim to be a drop-in replacement for unittest (plus the original issue has 40 👍 and no 👎 at the time of writing, which is overwhelming popular support in my book).

@RonnyPfannschmidt
Member

I recall we need to fix marking the owning test case as failed if one subtest fails

@RonnyPfannschmidt
Member

But yeah, I want to see this in

@webknjaz
Member

Sounds reasonable

@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from b43ab38 to 97ee032 Compare September 22, 2025 23:07
@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from 97ee032 to 6b5831f Compare September 26, 2025 13:34
@psf-chronographer psf-chronographer bot added the bot:chronographer:provided (automation) changelog entry is part of PR label Sep 26, 2025
@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch 2 times, most recently from 5f56d81 to c93c0e0 Compare September 26, 2025 14:00
@nicoddemus nicoddemus marked this pull request as ready for review September 26, 2025 14:00
@nicoddemus nicoddemus changed the title [DRAFT] Integrate pytest-subtests Integrate pytest-subtests Sep 26, 2025
@nicoddemus
Member Author

Ready for an initial review folks.

@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from c93c0e0 to b569c93 Compare September 29, 2025 22:17
Member

@bluetech bluetech left a comment


Nice to see this happening.

I ran out of time for the review today, so I didn't really get to the implementation parts, but I already have some comments, so I'm submitting a partial review.

---------------------------

While :ref:`traditional pytest parametrization <parametrize>` and ``subtests`` are similar, they have important differences and use cases.

Member


Continuing from the comment above, I can see two ways we can approach this:

1 - Subtests are for "runtime parametrization"

Per comment above, subtests are useful when you have some data dynamically fetched in the test and want individualized reporting for each data value.

The idea here is that parametrization should be the go-to tool, but we offer this subtest tool for this particular scenario.

2 - Subtests are for "sub-testing"

By which I mean, subtests are for when you have one conceptual test, i.e. you consider it a complete whole, but just want to break its reporting down into parts.


How do you see it? The reason I'm asking is that it can affect how we document the feature, what we recommend, etc.

Member Author


I think of subtests especially for the first case, but the 2nd case is also useful. Say you test 3 different objects in the test and would like to see the test results for all the objects, even if the very first one fails. Similar to the use case for pytest-check.

How do you suggest we approach those cases here?

Member


pytest-check is deferred assertions with a worse API.
It would be neat to have subtests.deferred_assertions() and allow people to use the assert statement there.

As for the real case of action grouping: when doing something like a larger acceptance test (as for example done in MoinMoin wiki), the goal is to report sections.

A way for a section failure to directly bubble through to fail the full test would be helpful.
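The deferred-assertions idea above could look roughly like this (a hypothetical sketch; `DeferredAssertions`, `check`, and `raise_if_failed` are invented names, not an existing pytest or pytest-subtests API):

```python
from contextlib import contextmanager


class DeferredAssertions:
    """Hypothetical helper: collect assertion failures instead of
    stopping the test at the first one."""

    def __init__(self) -> None:
        self.failures: list[tuple[str, AssertionError]] = []

    @contextmanager
    def check(self, label: str):
        try:
            yield
        except AssertionError as exc:
            # Record the failure and keep going with the test.
            self.failures.append((label, exc))

    def raise_if_failed(self) -> None:
        # A core integration would route each failure through pytest's
        # reporting instead of raising one combined AssertionError.
        if self.failures:
            labels = ", ".join(label for label, _ in self.failures)
            raise AssertionError(f"deferred assertions failed: {labels}")
```

A core version could report each recorded failure as its own SUBFAIL rather than collapsing them into a single error at the end.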

Member


I actually didn't think of the "want to see all failures" use case. So it would be nice to document :)

I think my preferred approach to documenting subtests is not as an alternative to parametrization, but as something useful for a bunch of use cases which the reader can read and add to their "tool set". That's just my idea.

@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from b569c93 to 506c75b Compare October 11, 2025 21:23
@nicoddemus nicoddemus requested a review from bluetech October 11, 2025 21:23
@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from 506c75b to 42f910d Compare October 11, 2025 21:27
@nicoddemus nicoddemus force-pushed the integrate-pytest-subtests branch from 42f910d to ec3144f Compare October 11, 2025 21:32
@RonnyPfannschmidt
Member

we might want to bikeshed an API around sub-sections, subtests and parameter loops a little - not necessarily for doing right now - but for setting up a roadmap

@nicoddemus
Member Author

I was hoping to get this feature into 9.0, if possible.

@RonnyPfannschmidt
Member

we absolutely want this in 9.0

we shouldn't change the API, to ensure compat with existing users
we should plan for making some things nicer

we should try to ensure lastfailed does rerun tests with failed subtests before releasing 9.0

@RonnyPfannschmidt RonnyPfannschmidt added this to the 9.0 milestone Oct 12, 2025
@nicoddemus
Member Author

we should try to ensure lastfailed does rerun tests with failed subtests before releasing 9.0

I will add a test, good call.

Member

@bluetech bluetech left a comment


Unfortunately I ran out of time to do a full review again, but I left some comments.

I also noticed two things while I was testing it:


If some or all subtests fail, the parent test is still PASSED, is this intentional?

[screenshot]

For some reason the errors are shown as "Captured log call", which seems wrong as there are no log calls in my test.

[screenshot]

Regarding the name SUBPASS etc., I saw that a regular pass is PASSED, so I wonder if it shouldn't be SUBPASSED etc.? (I can check the code later.)

.. code-block:: python

    def test(subtests):
        for i in range(5):
Member


range(5) is better as a regular parametrized test, so I wonder if we can use an example where we would actually recommend using subtests.

Here is the best I could come up with after thinking a bit, you could probably come up with something better:

import pytest
from pathlib import Path


def test_py_paths_are_files(subtests: pytest.Subtests) -> None:
    for path in Path.cwd().glob('*.py'):
        with subtests.test(path=str(path)):
            assert path.is_file()

            with subtests.test(msg="custom message", i=i):
                assert i % 2 == 0

Each assertion failure or error is caught by the context manager and reported individually:
Member


I think it would also be good to explain the reporting a bit, it took me a bit to understand:

  • The , report char
  • That the subtests are reported first and "top-level" test is reported at the end on its own
  • That failures are shown as SUBFAIL


from typing_extensions import Self


def pytest_addoption(parser: Parser) -> None:
Member


  • Can you explain the rationale for these options -- did someone ask for them in pytest-subtests?

  • I admit the difference between the options is a bit obscure, I wonder if they can't be folded into a single option?

  • I wonder if this shouldn't be an ini option rather than a cli option? I imagine someone would want to set this if it's too noisy, in which case it more of a project setting. But maybe it's more of a user-preference thing, then it's a CLI flag...

  • Maybe it should be configurable at the test level, using the fixture itself? (Probably not)

  • Should we document the options in the tutorial?


def test(
self,
msg: str | None = None,
Member


In all of the examples we use subtests.test(msg="...") with msg as a named parameter. Do we want to enforce it, or allow subtests.test("...")?

Member


We'd have to deprecate

It's supported in subtests as of now

parts.append(f"[{self.context.msg}]")
if self.context.kwargs:
    params_desc = ", ".join(
        f"{k}={v!r}" for (k, v) in sorted(self.context.kwargs.items())
Member


Why sorted? I reckon it would be good to keep the user-provided order.

Member


This might have been necessary in the far past
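Indeed: since PEP 468 (Python 3.6), `**kwargs` preserves the call-site order, so dropping `sorted()` would still be deterministic. A small sketch (the helper name is invented):

```python
def format_params(**kwargs: object) -> str:
    # **kwargs preserves the order the arguments were passed in
    # (guaranteed by PEP 468 since Python 3.6), so no sorting is
    # needed for stable, user-controlled output.
    return ", ".join(f"{k}={v!r}" for k, v in kwargs.items())
```

For example, `format_params(b=1, a="x")` keeps the caller's order and yields `"b=1, a='x'"`.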

return Subtests(request.node.ihook, suspend_capture_ctx, request, _ispytest=True)


# Note: cannot use a dataclass here because Sphinx insists on showing up the __init__ method in the documentation,
Member


TBH I don't think this should be a dataclass anyway, it's not really "data". So I'd just remove the comment.

Context manager for subtests, capturing exceptions raised inside the subtest scope and handling
them through the pytest machinery.

Note: initially this logic was implemented directly in Subtests.test() as a @contextmanager, however
Member


This should be a regular comment, not a docstring comment.

Comment on lines +233 to +234
sub_report = SubtestReport._from_test_report(report)
sub_report.context = SubtestContext(msg=self.msg, kwargs=self.kwargs.copy())
Member


Maybe pass context as a parameter to _from_test_report and do the assignment internally, to keep it encapsulated?
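A sketch of that suggestion (class names follow the diff above, but the bodies are invented for illustration):

```python
from __future__ import annotations

import dataclasses
from typing import Any


@dataclasses.dataclass
class SubtestContext:
    msg: str | None
    kwargs: dict[str, Any]


class SubtestReport:
    def __init__(self, context: SubtestContext | None = None) -> None:
        self.context = context

    @classmethod
    def _from_test_report(cls, report: Any, context: SubtestContext) -> SubtestReport:
        # Accepting the context here keeps the attribute assignment
        # inside the class instead of at the call site.
        new = cls(context=context)
        # ... copy the relevant fields from `report` (elided) ...
        return new
```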

return True


def make_call_info(
Member


I'd inline this function, it doesn't seem very useful.
