
Commit fc95f00

SeanNaren authored and Borda committed

Disable CPU Offload as default for DeepSpeed (#6262)

* Change default for CPU offload to false for best throughput/memory efficiency
* Add changelog
* default

Co-authored-by: Jirka Borovec <[email protected]>

1 parent ad61624 · commit fc95f00

File tree

2 files changed: +5 −2 lines changed

* CHANGELOG.md
* pytorch_lightning/plugins/training_type/deepspeed.py

CHANGELOG.md

Lines changed: 3 additions & 0 deletions

@@ -18,6 +18,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Changed the order of `backward`, `step`, `zero_grad` to `zero_grad`, `backward`, `step` ([#6147](https://github.com/PyTorchLightning/pytorch-lightning/pull/6147))
 
 
+- Changed default for DeepSpeed CPU Offload to False, due to prohibitively slow speeds at smaller scale ([#6262](https://github.com/PyTorchLightning/pytorch-lightning/pull/6262))
+
+
 ### Deprecated
 
 

pytorch_lightning/plugins/training_type/deepspeed.py

Lines changed: 2 additions & 2 deletions

@@ -66,7 +66,7 @@ def __init__(
         self,
         zero_optimization: bool = True,
         stage: int = 2,
-        cpu_offload: bool = True,
+        cpu_offload: bool = False,
         contiguous_gradients: bool = True,
         overlap_comm: bool = True,
         allgather_partitions: bool = True,
@@ -99,7 +99,7 @@ def __init__(
             stage: Different stages of the ZeRO Optimizer. 0 is disabled,
                 1 is optimizer state partitioning, 2 is optimizer+gradient state partitioning (default: 2)
 
-            cpu_offload: Enable offloading optimizer memory and computation to CPU (default: True)
+            cpu_offload: Enable offloading optimizer memory and computation to CPU
 
             contiguous_gradients: Copies gradients to a continuous buffer as they are produced.
                 Avoids memory fragmentation during backwards. Useful when training large models. (default: True)
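After this change, optimizer CPU offload is no longer on by default and must be requested explicitly. Below is a minimal sketch of opting back in, assuming the plugin API of this release; the toy model and the specific `Trainer` arguments (`gpus`, `precision`) are illustrative assumptions, not part of this commit.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

import pytorch_lightning as pl
from pytorch_lightning.plugins import DeepSpeedPlugin


class ToyModel(pl.LightningModule):
    """Tiny illustrative model; any LightningModule works the same way."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, = batch
        return self.layer(x).sum()

    def train_dataloader(self):
        return DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8)

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


# cpu_offload now defaults to False; pass True explicitly to opt back in
# to ZeRO-Offload when optimizer states do not fit in GPU memory.
trainer = pl.Trainer(
    gpus=2,
    precision=16,
    plugins=DeepSpeedPlugin(stage=2, cpu_offload=True),
)
trainer.fit(ToyModel())
```

Leaving `cpu_offload` at its new default keeps optimizer states and computation on the GPU, which, per the commit message, gives the best throughput and memory efficiency at smaller scale.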
