Refactor test_models to use pytest #3697
Conversation
  return expected_file

- def assertExpected(self, output, subname=None, prec=None, strip_suffix=None):
+ def assertExpected(self, output, name, prec=None):
I had to tweak this a little because in the previous version the expect-file name was based on self.id(), which changes now that the tests are parametrized with pytest.
In fact I simplified it a bit, because all that assertExpected needs to know is the name of the model. So I removed strip_suffix (which we don't need anymore) and also subname, which was never actually used.
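To illustrate the change, here is a minimal sketch (not the actual torchvision code; the directory layout, file prefix, and suffix are assumptions) of an expected-file lookup keyed on an explicit model name rather than on unittest's self.id():

```python
import os

# Hypothetical sketch: the caller passes the model name explicitly instead of
# deriving it from unittest's self.id(), so no strip_suffix/subname handling
# is needed. The "expected" directory and the file naming are assumed.
def expected_file_for(name, expected_dir="expected", suffix="_expect.pkl"):
    # e.g. expected/ModelTester.test_resnet18_expect.pkl
    return os.path.join(expected_dir, f"ModelTester.test_{name}{suffix}")
```

With pytest parametrization, the parametrized test id no longer maps cleanly to a file name, which is why passing the model name directly is simpler.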
As a general note, I would like us to re-use the testing utils from PyTorch as much as possible, where it makes sense. This code was originally taken from the PyTorch tests, and I believe it is now exposed via torch.testing.
Thanks a ton for working on this!
  f"No expect file exists for {os.path.basename(expected_file)} in {expected_file}; "
  "to accept the current output, run:\n"
  f"python {__main__.__file__} {munged_id} --accept")

The munged_id variable in the line just above should also have been removed.
I'll submit a fix
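As a rough sketch of what the cleaned-up message could look like once the unused munged_id is dropped (the helper name is hypothetical, and test_script stands in for __main__.__file__; the wording is taken from the hunk above):

```python
import os

def no_expect_file_message(expected_file, test_script="test_models.py"):
    # Hypothetical helper: the same error message as in the hunk above, minus
    # the unused munged_id interpolation. test_script is an assumed stand-in
    # for __main__.__file__.
    return (
        f"No expect file exists for {os.path.basename(expected_file)} "
        f"in {expected_file}; "
        "to accept the current output, run:\n"
        f"python {test_script} --accept"
    )
```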
  # where we need to create a new expected result.
- self.assertExpected(output, prec=prec, strip_suffix=strip_suffix)
+ self.assertExpected(output, name, prec=prec)
+ raise AssertionError
@NicolasHug I think this was left here by accident. This will cause the tests to be considered partially validated and marked as skipped.
Summary:
* refactor test_models to use pytest
* Also xfail the detection models
* Remove xfail and just comment out expected failing parts
* Comment out some more
* put back commented checks
* cleaning + comment
* docs
* void unnecessary changes
* r2plus1d_18 seems to segfault on linux gpu??
* put back test, failure is unrelated

Reviewed By: NicolasHug
Differential Revision: D28169138
fbshipit-source-id: dc1332bf48a0fdf51158efc401df9c82a83e76f4
Co-authored-by: Francisco Massa <[email protected]>
Pytest is now supported in the FB internal repo. We can start parametrizing tests!
This PR cleans up the test_models tests by relying on pytest.
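As a rough sketch of the parametrization pattern this PR adopts (the model names and bodies below are illustrative stand-ins, not torchvision's actual models or checks):

```python
import pytest

# Hypothetical stand-ins for real model constructors.
def tiny_model(x):
    return x * 2

def other_model(x):
    return x + 1

# With pytest, one test function covers every model; each entry shows up as a
# separate test id (e.g. test_model[tiny_model]) instead of a unittest method
# generated per model, and the name is available to the test body directly.
@pytest.mark.parametrize("name,model", [("tiny_model", tiny_model),
                                        ("other_model", other_model)])
def test_model(name, model):
    assert model(1) is not None
```

Because the name is an explicit parameter, helpers like assertExpected no longer need to recover it from the unittest test id.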