## 🚀 feature request

### Relevant Rules
This would be adding a new `pytest_test` rule or macro.

### Description
Lots of people use `pytest`, and `pytest` offers a lot of features that can be made to work nicely with Bazel, including test sharding and XML unit test report generation. However, it's a bit awkward to use with a `py_test` rule. The basic approach is to add something like this to your `foo_test.py`:
```python
import os

import pytest

if __name__ == "__main__":
    raise SystemExit(
        pytest.main(
            (
                # pylint: disable=used-before-assignment
                ["--junitxml=" + xml, "--color=yes"]
                if (xml := os.getenv("XML_OUTPUT_FILE"))
                else []
            )
            + [
                "--strict-markers",
                __file__,
            ],
        )
    )
```

There are, however, a couple of problems with this approach. It requires copy/pasting a relatively large blob of code into every test file, and it results in silently skipping tests if you omit it. Furthermore, if your code under test does any messing around with import machinery (which is probably not a good idea, but that's a separate issue), or otherwise has non-trivial side-effecting imports that may behave differently when running under test (also probably not great practice, but sometimes helpful for e.g. injecting mocks), then this may not work as intended, because the decision to execute under `pytest` happens only after imports have already been processed.
### Describe the solution you'd like
EngFlow has published what I'd consider to be a decent starting point for how to do this, which shows off a few useful features including support for sharding. It doesn't add the XML output, but that's easy to fix. There are, however, two deeper problems with it, in my mind:
- Relying purely on a macro injecting the sources to test as `args` is a bit problematic, because it prevents one from `bazel build`ing the target and then running it outside of `bazel`, e.g. under a debugger.
- Ideally, it would depend on `pytest` via a toolchain configuration. Note that while most of the time the test code in question will also depend on `pytest`, so it should be there in the dependency set anyway, it wouldn't necessarily have such a dependency - `pytest` is perfectly happy to run some test code that doesn't import anything special and just has a few `assert` statements scattered around.
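To make the idea concrete, here is a minimal sketch of what such a runner template might look like, covering the sharding and XML-output features mentioned above. All names here (`TEST_FILES`, the substitution marker, the file-slicing shard scheme) are illustrative assumptions, not part of EngFlow's runner or any published rule:

```python
"""Hypothetical pytest_runner.py template sketch; not an existing API."""
import os
import sys

# The generating rule would substitute the test sources in here at build time.
TEST_FILES = ["foo_test.py"]  # template placeholder


def build_pytest_args(env, test_files):
    """Translate Bazel's test environment variables into pytest arguments."""
    args = ["--strict-markers", "--color=yes"]
    # Bazel sets XML_OUTPUT_FILE when it expects a JUnit-style XML report.
    xml = env.get("XML_OUTPUT_FILE")
    if xml:
        args.append("--junitxml=" + xml)
    # Bazel's sharding protocol: touch TEST_SHARD_STATUS_FILE to signal that
    # the test honours sharding, then run shard TEST_SHARD_INDEX of
    # TEST_TOTAL_SHARDS.
    status_file = env.get("TEST_SHARD_STATUS_FILE")
    if status_file:
        open(status_file, "w").close()
    shard_count = int(env.get("TEST_TOTAL_SHARDS", "1"))
    if shard_count > 1:
        shard_index = int(env.get("TEST_SHARD_INDEX", "0"))
        # One simple (assumed) scheme: partition whole test files per shard.
        test_files = test_files[shard_index::shard_count]
    return args + list(test_files)


# The real template would end with something like:
#   if __name__ == "__main__":
#       import pytest
#       sys.exit(pytest.main(build_pytest_args(os.environ, TEST_FILES)))
```

Partitioning whole files per shard is the simplest possible scheme; a fancier runner could shard at test-case granularity via a `pytest` plugin instead.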
A better implementation would therefore take that `pytest_runner.py` and make it a template, with a rule to expand the `srcs` into it and to bring in the dependency on `pytest` via the toolchain. It would probably make the most sense to still use a macro to then wrap that in a `py_test` rule, to avoid replicating all of that machinery.
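One possible shape for that rule-plus-macro arrangement, sketched in Starlark; every name here (the rule, the template label, the `pytest` dependency label) is an illustrative assumption rather than an existing `rules_python` API:

```starlark
def _pytest_main_impl(ctx):
    out = ctx.actions.declare_file(ctx.label.name + ".py")
    ctx.actions.expand_template(
        template = ctx.file._template,
        output = out,
        substitutions = {
            # Assumed marker inside the template; see the runner sketch above.
            "%TEST_FILES%": repr([f.short_path for f in ctx.files.srcs]),
        },
    )
    return [DefaultInfo(files = depset([out]))]

_pytest_main = rule(
    implementation = _pytest_main_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = [".py"]),
        "_template": attr.label(
            # Hypothetical location for the runner template.
            default = "//tools:pytest_runner.py.tpl",
            allow_single_file = True,
        ),
    },
)

def pytest_test(name, srcs, deps = [], **kwargs):
    """Macro wrapping the generated runner in an ordinary py_test."""
    _pytest_main(name = name + "_main", srcs = srcs)
    native.py_test(
        name = name,
        srcs = srcs + [":" + name + "_main"],
        main = name + "_main.py",
        # A toolchain could supply this instead of a hard-coded label.
        deps = deps + ["@pypi//pytest"],
        **kwargs
    )
```

Because the generated `_main` file embeds the list of test sources rather than receiving them via `args`, the resulting binary can still be run directly outside of `bazel`, e.g. under a debugger.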
We've actually mostly implemented this internally already, sans the toolchain bits (for now at least; we just hard-coded `pytest` because we know the target name in our hermetic Python environment). We'd be happy to contribute this upstream if people are happy with the general design.
Bonus points for getting the `gazelle` plugin to detect the use of `pytest` and generate targets appropriately. This would include detecting the implicit dependency on `conftest.py` and/or `pytest.ini`.

### Describe alternatives you've considered
We could use a naked rule rather than a macro wrapping the generator and a `py_test`. That would have the benefit of not generating "hidden" targets which need non-colliding names, but has the downside that it would require replicating much of the work done by the `py_test` rule. It's also an ergonomic issue if someone wanted to do something like `bazel query 'kind(py_test, //...)'` to find all of the Python test code - I'd expect them to want those targets to actually appear in the results.
We could just keep EngFlow's macro-based implementation. That keeps things simple, but has the downside of not working with toolchains, as well as the aforementioned issue of `args` not taking effect when running outside of `bazel test`/`bazel run`.
A first-pass implementation could require the user to add the `pytest` dependency explicitly, rather than going to the trouble of setting up a toolchain configuration. That would probably be fine in most cases.
`conftest.py` and/or `pytest.ini` could be part of the toolchain configuration, rather than being added as explicit dependencies for each `pytest_test` target. That would be convenient, but problematic for larger projects which might have multiple such files covering different subtrees of their code base.