Commit 7b18702

Add link to existing documentation (#17931)
1 parent a045cbd

1 file changed: docs/source/en/big_models.mdx (2 additions, 11 deletions)

````diff
@@ -114,15 +114,6 @@ If you want to directly load such a sharded checkpoint inside a model without us
 
 ## Low memory loading
 
-Sharded checkpoints reduce the memory usage during step 2 of the worflow mentioned above, but when loadin a pretrained model, why keep the random weights in memory? The option `low_cpu_mem_usage` will destroy the weights of the randomly initialized model, then progressively load the weights inside, then perform a random initialization for potential missing weights (if you are loadding a model with a newly initialized head for a fine-tuning task for instance).
-
-It's very easy to use, just add `low_cpu_mem_usage=True` to your call to [`~PreTrainedModel.from_pretrained`]:
-
-```py
-from transformers import AutoModelForSequenceClas
-
-model = AutoModel.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
-```
-
-This can be used in conjunction with a sharded checkpoint.
+Sharded checkpoints reduce the memory usage during step 2 of the workflow mentioned above, but in order to use that model in a low memory setting, we recommend leveraging our tools based on the Accelerate library.
 
+Please read the following guide for more information: [Large model loading using Accelerate](./main_classes/model#large-model-loading)
````
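For reference, the snippet this commit removes was broken as written: the import is truncated (`AutoModelForSequenceClas`) and unused, while the call itself uses `AutoModel`. A minimal corrected sketch of the same idea, using the `low_cpu_mem_usage` argument that `from_pretrained` accepts, would be:

```py
from transformers import AutoModel

# low_cpu_mem_usage=True avoids keeping the randomly initialized weights
# in memory and instead loads the pretrained weights in progressively.
model = AutoModel.from_pretrained("bert-base-cased", low_cpu_mem_usage=True)
```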

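The guide the new text links to covers loading large models with the Accelerate library. As a rough sketch of one pattern that guide describes, assuming the `accelerate` package is installed, weights can be dispatched automatically at load time:

```py
from transformers import AutoModel

# device_map="auto" asks Accelerate to place the weights across the
# available devices (GPUs, then CPU) rather than loading the full model
# into CPU memory first; it requires the accelerate package.
model = AutoModel.from_pretrained("bert-base-cased", device_map="auto")
```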