Commit b741e3e

update general optim sections
1 parent a3dbe0f commit b741e3e

5 files changed, +104 -118 lines changed

docs/source/en/optimization/fp16.md

Lines changed: 12 additions & 6 deletions
@@ -12,9 +12,15 @@ specific language governing permissions and limitations under the License.
# Speed up inference

-We present some techniques and ideas to optimize 🤗 Diffusers for inference speed. As a general rule, we recommend the use of [xFormers](https://github.com/facebookresearch/xformers) for memory efficient attention, please see the recommended [installation instructions](xformers).
+There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either [xFormers](xformers) or `torch.nn.functional.scaled_dot_product_attention` in PyTorch 2.0 for their memory-efficient attention.

-We'll discuss how the following settings impact performance and memory. The results below are obtained from generating a single 512x512 image on the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on a Nvidia Titan RTX.
+<Tip>
+
+In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the [Reduce memory usage](memory) guide.
+
+</Tip>
+
+The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed up you can expect.

| | Latency | Speedup |
| ---------------- | ------- | ------- |
@@ -24,21 +30,21 @@ We'll discuss how the following settings impact performance and memory. The resu
| traced UNet | 3.21s | x2.96 |
| memory efficient attention | 2.63s | x3.61 |

-## Use tf32 instead of fp32
+## Use TensorFloat-32

-On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (TF32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling this setting for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. Enable TF32 by:
+On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (TF32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy.

```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True
```

-Learn more about TF32 in [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32).
+You can learn more about TF32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide.

## Half-precision weights

-To save GPU memory and get more speed, you can load and run the model weights directly in half-precision. This involves loading the float16 version of the weights:
+To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16:

```Python
import torch
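As a rough sketch of running a pipeline in half-precision end to end (the `runwayml/stable-diffusion-v1-5` checkpoint is an assumption for illustration, not taken from this diff):

```python
import torch
from diffusers import DiffusionPipeline

# Load the weights directly in float16 and move the pipeline to the GPU.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```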

docs/source/en/optimization/memory.md

Lines changed: 36 additions & 31 deletions
@@ -1,26 +1,31 @@
# Reduce memory usage

-A major challenge in using diffusion models is the large amount of memory required. To overcome this barrier, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier Colabs or consumer GPUs. Some of these techniques can even be combined together to further reduce memory usage!
+A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage.

+<Tip>
+
+In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to [Speed up inference](fp16).
+
+</Tip>
+
+The results below are obtained from generating a single 512x512 image from the prompt a photo of an astronaut riding a horse on mars with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption.

-| | Latency | Speedup |
+| | latency | speed-up |
| ---------------- | ------- | ------- |
| original | 9.50s | x1 |
| fp16 | 3.61s | x2.63 |
| channels last | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
-| memory efficient attention | 2.63s | x3.61 |
+| memory-efficient attention | 2.63s | x3.61 |


## Sliced VAE

-To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE to decode the batches of latents one image at a time.
+Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You'll likely want to couple this with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.

-You likely want to couple this with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.
+To use sliced VAE, call [`~StableDiffusionPipeline.enable_vae_slicing`] on your pipeline before inference:

-To use sliced VAE to decode one image at a time, call [`~StableDiffusionPipeline.enable_vae_slicing`] in your pipeline before inference
-
-```Python
+```python
import torch
from diffusers import StableDiffusionPipeline

@@ -40,11 +45,9 @@ You may see a small performance boost in VAE decoding on multi-image batches, an
## Tiled VAE

-Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images in 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image.
+Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also use tiled VAE with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.

-You should also used tiled VAE with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.
-
-To use tiled VAE processing, call [`~StableDiffusionPipeline.enable_vae_tiling`] in your pipeline before inference.
+To use tiled VAE processing, call [`~StableDiffusionPipeline.enable_vae_tiling`] on your pipeline before inference:

```python
import torch
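A minimal sketch of tiled VAE decoding for a large image (the checkpoint and prompt are assumptions for illustration; the 3840x2224 size comes from the snippet this hunk belongs to):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")
prompt = "a beautiful landscape photograph"
pipe.enable_vae_tiling()
pipe.enable_xformers_memory_efficient_attention()

# The large output is decoded tile by tile instead of in a single pass.
image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0]
```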
@@ -64,7 +67,7 @@ pipe.enable_xformers_memory_efficient_attention()
image = pipe([prompt], width=3840, height=2224, num_inference_steps=20).images[0]
```

-The output image will have some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn't see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller.
+The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn't see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller.

## CPU offloading

@@ -87,11 +90,11 @@ pipe.enable_sequential_cpu_offload()
image = pipe(prompt).images[0]
```

-CPU offloading works at the submodule level, and not on whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different submodules of the UNet are sequentially onloaded and then offloaded as they are needed, resulting in a large number of memory transfers.
+CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers.

<Tip>

-Consider using [model offloading](#model-offloading) if you need more speed because it is much faster, but the memory savings won't be as large.
+Consider using [model offloading](#model-offloading) if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won't be as large.

</Tip>
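A minimal sketch of sequential CPU offloading as described above (the checkpoint is an assumption); note that the pipeline is not moved to `cuda` up front, since offloading handles device placement:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
# Don't call pipe.to("cuda") here; submodules are moved to the GPU only as they run.
pipe.enable_sequential_cpu_offload()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```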

@@ -129,7 +132,7 @@ Model offloading requires 🤗 Accelerate version 0.17.0 or higher.
</Tip>

-[Sequential CPU offloading](#cpu-offloading) preserves a lot of memory but it makes inference slower, because submodules are moved to GPU as needed, and they're immediately returned to the CPU when a new module runs.
+[Sequential CPU offloading](#cpu-offloading) preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they're immediately returned to the CPU when a new module runs.

Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model's constituent *submodules*. There is a negligible impact on inference time (compared with moving the pipeline to `cuda`), and it still provides some memory savings.
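A sketch of full-model offloading with [`~StableDiffusionPipeline.enable_model_cpu_offload`] (the checkpoint is an assumption; 🤗 Accelerate 0.17.0 or higher is required, as noted above):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
)
# Whole models (text encoder, UNet, VAE) move to the GPU only when they're needed.
pipe.enable_model_cpu_offload()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]
```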

@@ -173,7 +176,7 @@ image = pipe(prompt).images[0]
<Tip warning={true}>

-In order to properly offload models after they're called, it is required that the entire pipeline is run and models are called in the pipeline's expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module)
+In order to properly offload models after they're called, it is required to run the entire pipeline and models are called in the pipeline's expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module)
for more information.

[`~StableDiffusionPipeline.enable_model_cpu_offload`] is a stateful operation that installs hooks on the models and state on the pipeline.
@@ -182,9 +185,9 @@ for more information.
## Channels-last memory format

-The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance. But you should still try and see if it works for your model!
+The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance but you should still try and see if it works for your model.

-For example, in order to set the UNet in the pipeline to use the channels-last format:
+For example, to set the pipeline's UNet to use the channels-last format:

```python
print(pipe.unet.conv_out.state_dict()["weight"].stride()) # (2880, 9, 3, 1)
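A self-contained sketch of the conversion (the checkpoint is an assumption): convert the UNet in place and check a convolution weight's stride before and after to confirm the layout changed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

print(pipe.unet.conv_out.state_dict()["weight"].stride())  # contiguous NCHW, e.g. (2880, 9, 3, 1)

# Convert the UNet to the channels-last memory format in place.
pipe.unet.to(memory_format=torch.channels_last)

# The channels dimension is now the densest, so the stride pattern changes.
print(pipe.unet.conv_out.state_dict()["weight"].stride())
```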
@@ -196,7 +199,7 @@ print(
## Tracing

-Tracing runs an example input tensor through your model and captures the operations that are performed on it as that input makes its way through the model's layers. The executable or `ScriptFunction` that is returned is optimized with just-in-time compilation.
+Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model's layers. The executable or `ScriptFunction` that is returned is optimized with just-in-time compilation.

To trace a UNet:
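A condensed sketch of what tracing a UNet can look like, assuming the Stable Diffusion v1 UNet shapes (4 latent channels, 64x64 latents, 77x768 text embeddings) in float16 on CUDA; the checkpoint and output path are assumptions:

```python
import functools
import torch
from diffusers import StableDiffusionPipeline

torch.set_grad_enabled(False)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")
unet = pipe.unet
unet.eval()
# Return plain tensors instead of an output dataclass so torch.jit.trace can handle the outputs.
unet.forward = functools.partial(unet.forward, return_dict=False)

# Dummy inputs shaped like the real UNet inputs: latents, timestep, text embeddings.
sample = torch.randn(2, 4, 64, 64, dtype=torch.float16, device="cuda")
timestep = torch.rand(1, dtype=torch.float16, device="cuda") * 999
encoder_hidden_states = torch.randn(2, 77, 768, dtype=torch.float16, device="cuda")

unet_traced = torch.jit.trace(unet, (sample, timestep, encoder_hidden_states))
unet_traced.save("unet_traced.pt")
```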

@@ -314,21 +317,21 @@ with torch.inference_mode():
    image = pipe([prompt] * 1, num_inference_steps=50).images[0]
```

-## Memory efficient attention
+## Memory-efficient attention

-Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and gains in GPU memory usage. The most recent type of memory efficient attention is [Flash Attention](https://arxiv.org/pdf/2205.14135.pdf) (you can check out the original code at [HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)).
+Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is [Flash Attention](https://arxiv.org/pdf/2205.14135.pdf) (you can check out the original code at [HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)).

The table below details the speed-ups from a few different Nvidia GPUs when running inference on image sizes of 512x512 and a batch size of 1 (one prompt):

-| GPU | Base Attention FP16 | Memory Efficient Attention FP16 |
-|------------------ |--------------------- |--------------------------------- |
-| NVIDIA Tesla T4 | 3.5it/s | 5.5it/s |
-| NVIDIA 3060 RTX | 4.6it/s | 7.8it/s |
-| NVIDIA A10G | 8.88it/s | 15.6it/s |
-| NVIDIA RTX A6000 | 11.7it/s | 21.09it/s |
-| NVIDIA TITAN RTX | 12.51it/s | 18.22it/s |
-| A100-SXM4-40GB | 18.6it/s | 29.it/s |
-| A100-SXM-80GB | 18.7it/s | 29.5it/s |
+| GPU | base attention (fp16) | memory-efficient attention (fp16) |
+|------------------|-----------------------|-----------------------------------|
+| NVIDIA Tesla T4 | 3.5it/s | 5.5it/s |
+| NVIDIA 3060 RTX | 4.6it/s | 7.8it/s |
+| NVIDIA A10G | 8.88it/s | 15.6it/s |
+| NVIDIA RTX A6000 | 11.7it/s | 21.09it/s |
+| NVIDIA TITAN RTX | 12.51it/s | 18.22it/s |
+| A100-SXM4-40GB | 18.6it/s | 29.it/s |
+| A100-SXM-80GB | 18.7it/s | 29.5it/s |

To use Flash Attention, install the following:

@@ -342,6 +345,8 @@ If you have PyTorch 2.0 installed, you shouldn't use xFormers!
- CUDA available
- [xFormers](xformers)

+Then call [`~ModelMixin.enable_xformers_memory_efficient_attention`] on the pipeline:
+
```python
from diffusers import DiffusionPipeline
import torch
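A sketch of the whole call sequence (the checkpoint is an assumption; as noted above, skip xFormers on PyTorch 2.0, which ships its own memory-efficient attention):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# Enable xFormers memory-efficient attention once, before running inference.
pipe.enable_xformers_memory_efficient_attention()

prompt = "a photo of an astronaut riding a horse on mars"
image = pipe(prompt).images[0]

# To switch back to the default attention implementation:
# pipe.disable_xformers_memory_efficient_attention()
```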
