docs/source/advanced/mixed_precision.rst (2 additions & 2 deletions)
@@ -50,14 +50,14 @@ BFloat16 Mixed precision is similar to FP16 mixed precision, however we maintain
 Since BFloat16 is more stable than FP16 during training, we do not need to worry about any gradient scaling or nan gradient values that come with using FP16 mixed precision.

 .. testcode::
-    :skipif: not _TORCH_GREATER_EQUAL_DEV_1_10 or not torch.cuda.is_available()
+    :skipif: not _TORCH_GREATER_EQUAL_1_10 or not torch.cuda.is_available()

     Trainer(gpus=1, precision="bf16")

 It is also possible to use BFloat16 mixed precision on the CPU, relying on MKLDNN under the hood.
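For readers of this hunk, a minimal sketch of the CPU path mentioned in the last context line, assuming the same ``precision="bf16"`` argument is accepted when no GPU is requested (this snippet is not part of the change)::

    from pytorch_lightning import Trainer

    # BFloat16 mixed precision on the CPU, backed by MKLDNN under the hood;
    # assumes the precision flag behaves the same without requesting any GPUs.
    trainer = Trainer(gpus=0, precision="bf16")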