Move predictions to CPU before accumulating #9085
Conversation
tchaton
left a comment
LGTM !
Note: I believe we shouldn't do this before `on_predict_batch_end`, as some users might do post-processing on device.
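To make the ordering concrete, here is a minimal sketch (not the actual Lightning loop code; the hook callable and the accumulation list below are illustrative) of accumulating only after the batch-end hook has run on the original device:

```python
import torch


def accumulate_batch_predictions(predictions: list, batch_preds: torch.Tensor, on_predict_batch_end) -> None:
    # Run the hook first: users may still post-process `batch_preds` on the accelerator here.
    on_predict_batch_end(batch_preds)
    # Only afterwards detach and move to CPU, so the accumulated predictions
    # do not keep growing accelerator memory.
    predictions.append(batch_preds.detach().cpu())
```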
Codecov Report
@@            Coverage Diff            @@
##           master   #9085     +/-   ##
========================================
- Coverage      92%     88%       -4%
========================================
  Files         178     178
  Lines       14692   14693       +1
========================================
- Hits        13524   12918     -606
- Misses       1168    1775     +607
Co-authored-by: Adrian Wälchli <[email protected]>
carmocca
left a comment
LGTM
Missing the torch import
What does this PR do?
Moves predictions to CPU before accumulating.
See discussion: #7485
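A minimal sketch of the idea (the real change lives in Lightning's prediction loop; the function below and its call site are assumptions, though `move_data_to_device` is an existing Lightning utility):

```python
import torch
from pytorch_lightning.utilities import move_data_to_device


def accumulate(predictions: list, batch_preds) -> None:
    # `batch_preds` may be a tensor or a nested list/dict of tensors;
    # `move_data_to_device` walks the collection and moves each tensor to the target device.
    predictions.append(move_data_to_device(batch_preds, torch.device("cpu")))
```

Note that `torch.device("cpu")` requires `import torch`, which is presumably what the "Missing the torch import" review note above refers to.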
Does your PR introduce any breaking changes? If yes, please list them.
Predictions are stored on the CPU instead of on the device.
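For illustration, a hedged end-to-end example of the user-visible behavior (the model, dataloader, and Trainer arguments below are placeholders from the Lightning 1.4-era API, not part of this PR):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(4, 2)

    def predict_step(self, batch, batch_idx, dataloader_idx=0):
        (x,) = batch
        return self.layer(x)


dl = DataLoader(TensorDataset(torch.randn(8, 4)), batch_size=4)
trainer = pl.Trainer(gpus=1)
preds = trainer.predict(TinyModel(), dataloaders=dl)

# With this change, each accumulated batch of predictions lives on CPU,
# even though predict_step ran on the GPU.
assert all(p.device.type == "cpu" for p in preds)

# Callers who need the predictions back on an accelerator must move them explicitly:
preds_on_gpu = [p.cuda() for p in preds]
```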
Before submitting
PR review
Anyone in the community is welcome to review the PR.
Before you start reviewing, make sure you have read the Review guidelines. In short, see the following bullet list:
Did you have fun?
Make sure you had fun coding 🙃