
Conversation

@fmassa (Member) commented Sep 16, 2019

Fixes #1335

@Anjum48 can you try this out and let me know if it works for you?

@codecov-io commented Sep 16, 2019

Codecov Report

Merging #1341 into master will increase coverage by 0.08%.
The diff coverage is 75%.


@@            Coverage Diff             @@
##           master    #1341      +/-   ##
==========================================
+ Coverage    65.5%   65.59%   +0.08%     
==========================================
  Files          75       75              
  Lines        5819     5821       +2     
  Branches      892      892              
==========================================
+ Hits         3812     3818       +6     
+ Misses       1737     1736       -1     
+ Partials      270      267       -3
Impacted Files                               Coverage Δ
torchvision/models/detection/roi_heads.py    55.77% <0%>   (-0.16%) ⬇️
torchvision/models/detection/rpn.py          79.32% <100%> (+0.09%) ⬆️
torchvision/transforms/transforms.py         80.98% <0%>   (+0.98%) ⬆️

Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update d7e88fb...8534bf2.

@fmassa (Member, Author) commented Sep 18, 2019

@Anjum48 this now works with mixed precision.

I tried to minimize the number of casts; note that not all tensors are fp16, since this is mixed precision.
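The kind of fix discussed here can be illustrated with a minimal sketch. The helper `cast_anchors_like` below is hypothetical (not the actual torchvision code): it shows casting fp32 anchors to the dtype and device of the backbone features at a single boundary, so fp16 features produced under Apex mixed precision don't hit a type mismatch in ops such as `MultiScaleRoIAlign`.

```python
import torch

def cast_anchors_like(anchors, features):
    """Hypothetical helper: cast each anchor tensor to the feature map's
    dtype and device. Under mixed precision the features may be fp16 while
    anchors are created in fp32; casting once at this boundary keeps the
    number of casts small."""
    return [a.to(dtype=features.dtype, device=features.device) for a in anchors]

# Simulated mixed-precision setting: fp16 features, fp32 anchors.
features = torch.randn(1, 256, 8, 8).half()          # fp16 feature map
anchors = [torch.tensor([[0.0, 0.0, 16.0, 16.0]])]   # fp32 by default
anchors = cast_anchors_like(anchors, features)
print(anchors[0].dtype)  # torch.float16
```

Casting the (small) anchor tensors toward the feature dtype, rather than upcasting the feature maps, avoids defeating the memory savings that motivate mixed precision in the first place.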

@fmassa fmassa merged commit 5d5d425 into pytorch:master Sep 18, 2019
@fmassa fmassa deleted the fix-anchor-device branch September 18, 2019 17:38


Development

Successfully merging this pull request may close these issues.

MultiScaleRoIAlign creates a type mismatch when using mixed precision training with Apex
