docs/source/en/optimization/fp16.md
+12 −6 lines changed: 12 additions & 6 deletions
@@ -12,9 +12,15 @@ specific language governing permissions and limitations under the License.

# Speed up inference

-We present some techniques and ideas to optimize 🤗 Diffusers for inference speed. As a general rule, we recommend the use of [xFormers](https://github.com/facebookresearch/xformers) for memory efficient attention, please see the recommended [installation instructions](xformers).
+There are several ways to optimize 🤗 Diffusers for inference speed. As a general rule of thumb, we recommend using either [xFormers](xformers) or `torch.nn.functional.scaled_dot_product_attention` in PyTorch 2.0 for their memory-efficient attention.

-We'll discuss how the following settings impact performance and memory. The results below are obtained from generating a single 512x512 image on the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on a Nvidia Titan RTX.
+<Tip>
+
+In many cases, optimizing for speed or memory leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on inference speed, but you can learn more about preserving memory in the [Reduce memory usage](memory) guide.
+
+</Tip>
+
+The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed up you can expect.

|| Latency | Speedup |
| ---------------- | ------- | ------- |
@@ -24,21 +30,21 @@ We'll discuss how the following settings impact performance and memory.
| traced UNet | 3.21s | x2.96 |
| memory efficient attention | 2.63s | x3.61 |

-## Use tf32 instead of fp32
+## Use TensorFloat-32

-On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (TF32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling this setting for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy. Enable TF32 by:
+On Ampere and later CUDA devices, matrix multiplications and convolutions can use the [TensorFloat-32 (TF32)](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) mode for faster, but slightly less accurate computations. By default, PyTorch enables TF32 mode for convolutions but not matrix multiplications. Unless your network requires full float32 precision, we recommend enabling TF32 for matrix multiplications. It can significantly speed up computations with typically negligible loss in numerical accuracy.

```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True
```

-Learn more about TF32 in [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32).
+You can learn more about TF32 in the [Mixed precision training](https://huggingface.co/docs/transformers/en/perf_train_gpu_one#tf32) guide.

## Half-precision weights

-To save GPU memory and get more speed, you can load and run the model weights directly in half-precision. This involves loading the float16 version of the weights:
+To save GPU memory and get more speed, try loading and running the model weights directly in half-precision or float16:
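The snippet itself falls outside this hunk. For illustration only (an assumption, not taken from the diff): loading in half-precision typically amounts to passing `torch_dtype=torch.float16` to `from_pretrained`; the checkpoint name below is just an example.

```python
import torch
from diffusers import DiffusionPipeline

# Load the float16 weights and move the pipeline to the GPU.
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example checkpoint; swap in your own
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```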
docs/source/en/optimization/memory.md
+36 −31 lines changed: 36 additions & 31 deletions
@@ -1,26 +1,31 @@
# Reduce memory usage

-A major challenge in using diffusion models is the large amount of memory required. To overcome this barrier, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier Colabs or consumer GPUs. Some of these techniques can even be combined together to further reduce memory usage!
+A barrier to using diffusion models is the large amount of memory required. To overcome this challenge, there are several memory-reducing techniques you can use to run even some of the largest models on free-tier or consumer GPUs. Some of these techniques can even be combined to further reduce memory usage.

+<Tip>
+
+In many cases, optimizing for memory or speed leads to improved performance in the other, so you should try to optimize for both whenever you can. This guide focuses on minimizing memory usage, but you can also learn more about how to [Speed up inference](fp16).
+
+</Tip>
+
+The results below are obtained from generating a single 512x512 image from the prompt `a photo of an astronaut riding a horse on mars` with 50 DDIM steps on a Nvidia Titan RTX, demonstrating the speed-up you can expect as a result of reduced memory consumption.

-||Latency|Speedup|
+||latency|speed-up|
| ---------------- | ------- | ------- |
| original | 9.50s | x1 |
| fp16 | 3.61s | x2.63 |
| channels last | 3.30s | x2.88 |
| traced UNet | 3.21s | x2.96 |
-| memory efficient attention | 2.63s | x3.61 |
+| memory-efficient attention | 2.63s | x3.61 |


## Sliced VAE

-To decode large batches of images with limited VRAM, or to enable batches with 32 images or more, you can use sliced VAE to decode the batches of latents one image at a time.
+Sliced VAE enables decoding large batches of images with limited VRAM or batches with 32 images or more by decoding the batches of latents one image at a time. You'll likely want to couple this with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.

-You likely want to couple this with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.
+To use sliced VAE, call [`~StableDiffusionPipeline.enable_vae_slicing`] on your pipeline before inference:

-To use sliced VAE to decode one image at a time, call [`~StableDiffusionPipeline.enable_vae_slicing`] in your pipeline before inference
-
-```Python
+```python
import torch
from diffusers import StableDiffusionPipeline

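# Illustrative continuation (an assumption, not part of the diff): the rest of this
# code block is elided by the hunk. With an example checkpoint, it might look roughly
# like this -- load the pipeline in float16, move it to the GPU, then enable VAE
# slicing before decoding a large batch of latents one image at a time.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")
pipe.enable_vae_slicing()
images = pipe(["a photo of an astronaut riding a horse on mars"] * 32).images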
@@ -40,11 +45,9 @@ You may see a small performance boost in VAE decoding on multi-image batches

## Tiled VAE

-Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images in 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image.
+Tiled VAE processing also enables working with large images on limited VRAM (for example, generating 4k images on 8GB of VRAM) by splitting the image into overlapping tiles, decoding the tiles, and then blending the outputs together to compose the final image. You should also use tiled VAE with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.

-You should also used tiled VAE with [`~ModelMixin.enable_xformers_memory_efficient_attention`] to further reduce memory use.
-
-To use tiled VAE processing, call [`~StableDiffusionPipeline.enable_vae_tiling`] in your pipeline before inference.
+To use tiled VAE processing, call [`~StableDiffusionPipeline.enable_vae_tiling`] on your pipeline before inference:

-The output image will have some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn't see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller.
+The output image has some tile-to-tile tone variation because the tiles are decoded separately, but you shouldn't see any sharp and obvious seams between the tiles. Tiling is turned off for images that are 512x512 or smaller.
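The call being described is elided by the diff; for illustration, a minimal sketch of what it might look like (the checkpoint name and image size are only examples):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Decode the latents in overlapping tiles so large images fit in limited VRAM.
pipe.enable_vae_tiling()
image = pipe("a photo of an astronaut riding a horse on mars", height=1024, width=1024).images[0]
```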
-CPU offloading works at the submodule level, and not on whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different submodules of the UNet are sequentially onloaded and then offloaded as they are needed, resulting in a large number of memory transfers.
+CPU offloading works on submodules rather than whole models. This is the best way to minimize memory consumption, but inference is much slower due to the iterative nature of the diffusion process. The UNet component of the pipeline runs several times (as many as `num_inference_steps`); each time, the different UNet submodules are sequentially onloaded and offloaded as needed, resulting in a large number of memory transfers.

<Tip>

-Consider using [model offloading](#model-offloading) if you need more speed because it is much faster, but the memory savings won't be as large.
+Consider using [model offloading](#model-offloading) if you want to optimize for speed because it is much faster. The tradeoff is your memory savings won't be as large.

</Tip>

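For illustration (an assumption, not part of the diff): sequential CPU offloading is typically enabled with a single call on the pipeline; the checkpoint name is only an example and 🤗 Accelerate must be installed.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Submodules are moved to the GPU only when needed, then returned to the CPU.
# Note: don't call pipe.to("cuda") yourself when using sequential offloading.
pipe.enable_sequential_cpu_offload()
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```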
@@ -129,7 +132,7 @@ Model offloading requires 🤗 Accelerate version 0.17.0 or higher.

</Tip>

-[Sequential CPU offloading](#cpu-offloading) preserves a lot of memory but it makes inference slower, because submodules are moved to GPU as needed, and they're immediately returned to the CPU when a new module runs.
+[Sequential CPU offloading](#cpu-offloading) preserves a lot of memory but it makes inference slower because submodules are moved to GPU as needed, and they're immediately returned to the CPU when a new module runs.

Full-model offloading is an alternative that moves whole models to the GPU, instead of handling each model's constituent *submodules*. There is a negligible impact on inference time (compared with moving the pipeline to `cuda`), and it still provides some memory savings.

-In order to properly offload models after they're called, it is required that the entire pipeline is run and models are called in the pipeline's expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module)
+In order to properly offload models after they're called, it is required to run the entire pipeline and call the models in the pipeline's expected order. Exercise caution if models are reused outside the context of the pipeline after hooks have been installed. See [Removing Hooks](https://huggingface.co/docs/accelerate/en/package_reference/big_modeling#accelerate.hooks.remove_hook_from_module)
for more information.

[`~StableDiffusionPipeline.enable_model_cpu_offload`] is a stateful operation that installs hooks on the models and state on the pipeline.
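As a hedged sketch of model offloading (not taken from the diff; the checkpoint name is only an example and 🤗 Accelerate 0.17.0 or higher is required):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
)

# Whole models (UNet, VAE, text encoder) are moved to the GPU only while in use.
pipe.enable_model_cpu_offload()
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```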
@@ -182,9 +185,9 @@ for more information.

## Channels-last memory format

-The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worst performance. But you should still try and see if it works for your model!
+The channels-last memory format is an alternative way of ordering NCHW tensors in memory to preserve dimension ordering. Channels-last tensors are ordered in such a way that the channels become the densest dimension (storing images pixel-per-pixel). Since not all operators currently support the channels-last format, it may result in worse performance, but you should still try and see if it works for your model.

-For example, in order to set the UNet in the pipeline to use the channels-last format:
+For example, to set the pipeline's UNet to use the channels-last format:
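The snippet itself is elided by the diff; a minimal sketch of the call being described (the checkpoint name is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reorder the UNet's weights so channels become the densest (last) dimension.
pipe.unet.to(memory_format=torch.channels_last)

# Sanity check: the channel stride of a conv weight should now be 1.
print(pipe.unet.conv_out.state_dict()["weight"].stride())
```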
-Tracing runs an example input tensor through your model and captures the operations that are performed on it as that input makes its way through the model's layers. The executable or `ScriptFunction` that is returned is optimized with just-in-time compilation.
+Tracing runs an example input tensor through the model and captures the operations that are performed on it as that input makes its way through the model's layers. The executable or `ScriptFunction` that is returned is optimized with just-in-time compilation.

To trace a UNet:

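The traced-UNet example is elided by the diff. The following is a condensed sketch of the general approach only; the example shapes and checkpoint are assumptions, and the full recipe in the guide is longer (warmup runs, benchmarking, saving the traced module):

```python
import functools
import torch
from diffusers import StableDiffusionPipeline

torch.set_grad_enabled(False)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
unet = pipe.unet
unet.eval()
# Return plain tensors instead of an output dataclass so the graph can be traced.
unet.forward = functools.partial(unet.forward, return_dict=False)

# Example inputs for a 512x512 Stable Diffusion v1 UNet (batch of 2 for classifier-free guidance).
sample = torch.randn(2, 4, 64, 64, device="cuda", dtype=torch.float16)
timestep = torch.rand(1, device="cuda", dtype=torch.float16) * 999
encoder_hidden_states = torch.randn(2, 77, 768, device="cuda", dtype=torch.float16)

unet_traced = torch.jit.trace(unet, (sample, timestep, encoder_hidden_states))
unet_traced.eval()
```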
@@ -314,21 +317,21 @@ with torch.inference_mode():
-Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and gains in GPU memory usage. The most recent type of memory efficient attention is [Flash Attention](https://arxiv.org/pdf/2205.14135.pdf) (you can check out the original code at [HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)).
+Recent work on optimizing bandwidth in the attention block has generated huge speed-ups and reductions in GPU memory usage. The most recent type of memory-efficient attention is [Flash Attention](https://arxiv.org/pdf/2205.14135.pdf) (you can check out the original code at [HazyResearch/flash-attention](https://github.com/HazyResearch/flash-attention)).

The table below details the speed-ups from a few different Nvidia GPUs when running inference on image sizes of 512x512 and a batch size of 1 (one prompt):
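For illustration, enabling xFormers memory-efficient attention on a pipeline generally looks like the following (this sketch is an assumption, not part of the diff; the `xformers` package must be installed and the checkpoint name is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Replace the default attention with xFormers' memory-efficient implementation.
pipe.enable_xformers_memory_efficient_attention()
image = pipe("a photo of an astronaut riding a horse on mars").images[0]
```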