Split TestCase.run() into run() and validate() methods #102
Conversation
This is used to validate variables and timers.
Testing: I ran the …
@vanroekel and @mark-petersen, this was one of @matthewhoffman's suggestions during the review of #28. Especially after implementing it, I'm quite convinced that it's a good change. I'm not looking for a particularly detailed review from any of you; I mostly want to get your thumbs up about this split and make sure you're aware of it going forward. I also want your thumbs up on the change to the test suite output described above. It's a bit more verbose, but I think it will be helpful to know if a failure is in the test itself, in internal validation (e.g. between outputs in different steps, as in restart or decomposition testing), or against the baseline. I've also found the single-line output introduced in #63 hard to read, so I think this multiline format works better for me. Here's an example where the additional info is potentially helpful:
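The pasted example did not survive extraction; the following is an illustrative mock-up based on the three lines of output described in this thread. The test name, spacing, and exact wording are placeholders, not the real tool's output:

```
ocean/global_ocean/QU240/RK4/restart_test
  test execution:      SUCCESS
  test validation:     FAIL
  baseline comparison: PASS
```

Here the run itself and the baseline comparison succeeded, but the internal comparison between steps did not, which the old single-line PASS/FAIL could not distinguish.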
@xylar, this looks really nice to me. I also like the multiline output format and separating the 3 pieces of information. I'm a little fuzzy on the meaning of the …
@matthewhoffman, yes, your interpretation is exactly right and I agree that your wording is an improvement. I'll make that change.
Also, switch to SUCCESS/ERROR in test execution
I do think that's an improvement; see the main description above.
Ooh, ooh, fun! I just figured out how to make …
Ha! I was almost going to suggest that but decided not to go there.
@xylar, I'm struggling a little bit with what each line means. If …
@vanroekel, I think we are open to suggestions for how to clarify further. Some test cases only supply one file (…).

Other test cases, like restart, thread and decomposition tests, compare output between two different steps. This is what is meant by …
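As a rough sketch of the distinction, this is roughly what internal validation between two steps looks like. This is not code from this PR; the `compare_variables` signature, import paths, and file names here are assumptions:

```python
from compass.testcase import TestCase
from compass.validate import compare_variables  # import path assumed


class RestartTest(TestCase):
    # Hypothetical restart test illustrating internal validation

    def validate(self):
        # Internal validation: compare the same variables between the
        # outputs of two steps of this test case (full run vs. restart
        # run). A baseline comparison, if a baseline is supplied, is a
        # separate check handled by the framework, not by this method.
        compare_variables(test_case=self,
                          variables=['temperature', 'salinity'],
                          filename1='full_run/output.nc',
                          filename2='restart_run/output.nc')
```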
Previously, even if no internal validation was performed, the "test validation" message was showing up. We only want a message when a test case includes validation between internal steps (as opposed to against a baseline).
@vanroekel, in the process of looking into your question, I realized I made a mistake in how … With my last commit, the output of the nightly suite looks like this:
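The pasted output was lost in extraction; below is an illustrative mock-up of the multiline format as described in this thread, assuming one test without internal step-to-step validation (first) and one with it (second). Test names and spacing are placeholders:

```
ocean/baroclinic_channel/10km/default
  test execution:      SUCCESS
  baseline comparison: PASS
ocean/baroclinic_channel/10km/restart_test
  test execution:      SUCCESS
  test validation:     PASS
  baseline comparison: PASS
```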
Ah, I see now, @xylar. Now that I understand, I think what you have for each step is perfect. I was just confused because, in my interpretation, we in effect overload things like … If no baseline is provided, the nightly output would have a pass/fail for …
If no baseline is provided, … To illustrate this: currently, the only test that is failing is the RK4 restart test. It fails because of test validation (the full run is not bit-for-bit with the restart run). If you compare it with a baseline, …
Thanks again @xylar for taking the time to explain this, and sorry for the obvious questions. This makes complete sense to me now.
I think this is a great change. I really like the new, more detailed output in the nightly suite for seeing whether a test fails in setup/execution or in validation. Thanks @xylar!
This is great! I love the colors!
I tested the nightly regression suite with and without baseline comparisons, and made it fail in various ways. It all worked as expected. This is a big help in understanding what PASS meant in previous testing.
@mark-petersen, thanks for taking the time to review this (and the others) on a weekend! @vanroekel and @matthewhoffman, thank you as well for your prompt reviews!
The `run()` method previously included validation of variables and timers, but things are cleaner in a few ways if running steps and validation are kept separate:

- Many test cases no longer need to override the `run()` method at all, saving a (somewhat confusing) call to `super().run()`. Since `TestCase.validate()` does nothing, there is no need to call `super().validate()`.
- For test cases that still override `run()`, it is now clearer that `super().run()` should be called at the end of the new `run()` method to run the steps (e.g. after altering the number of cores that each step should run with based on config options).

All test cases have been updated with this split; a sketch of the resulting pattern is shown below.
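A minimal sketch of the new pattern, assuming the split described above. The class, the step names, the config options, and the `cores` attribute are illustrative assumptions, not code from this PR:

```python
from compass.testcase import TestCase
from compass.validate import compare_variables  # import path assumed


class DecompTest(TestCase):
    # Hypothetical decomposition test using the run()/validate() split

    def run(self):
        # Alter step resources based on config options first...
        cores = self.config.getint('decomp_test', 'max_cores')
        self.steps['8proc'].cores = cores  # attribute name assumed
        # ...then call super().run() at the end to actually run the steps
        super().run()

    def validate(self):
        # Internal validation between the two decompositions; no call to
        # super().validate() is needed because the base method does nothing
        compare_variables(test_case=self,
                          variables=['temperature', 'layerThickness'],
                          filename1='4proc/output.nc',
                          filename2='8proc/output.nc')
```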
The output from test suites has been altered to indicate whether the test execution, any internal validation, and the comparison with a baseline each passed (see the example output earlier in this thread).
The documentation has been updated to reflect these changes.
closes #74