
Commit 8dc93ae

Add a note about xformers in README

1 parent f83a023 · commit 8dc93ae

File tree: 3 files changed (10 additions & 1 deletion)


examples/dreambooth/README.md

Lines changed: 4 additions & 1 deletion
@@ -317,4 +317,7 @@ python train_dreambooth_flax.py \
   --max_train_steps=800
 ```
 
-You can also use Dreambooth to train the specialized in-painting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint).
+### Training with xformers:
+You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
+
+You can also use Dreambooth to train the specialized in-painting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint).
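The note above points readers at the PyTorch script, since the flag does not exist in the Flax/JAX version. As a rough illustration, here is a hedged sketch of what that looks like in practice; the `train_dreambooth.py` script name, `accelerate launch`, and every argument except the new flag are assumptions drawn from the rest of this README, not part of this commit:

```bash
# Sketch only: enable xFormers memory efficient attention in the PyTorch
# DreamBooth script by appending the flag described in the new note.
# All arguments other than the flag are assumed/illustrative.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir="path-to-instance-images" \
  --instance_prompt="a photo of sks dog" \
  --output_dir="dreambooth-model" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_train_steps=800 \
  --enable_xformers_memory_efficient_attention
```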

examples/text_to_image/README.md

Lines changed: 3 additions & 0 deletions
@@ -160,3 +160,6 @@ python train_text_to_image_flax.py \
   --max_grad_norm=1 \
   --output_dir="sd-pokemon-model"
 ```
+
+### Training with xformers:
+You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
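The same pattern applies here; a hedged sketch against the PyTorch text-to-image script (the script name and all arguments other than the flag are assumptions based on the rest of this README, not part of this commit):

```bash
# Sketch only: the xFormers flag appended to the PyTorch fine-tuning command.
# All arguments other than the flag are assumed/illustrative.
accelerate launch train_text_to_image.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --dataset_name="lambdalabs/pokemon-blip-captions" \
  --resolution=512 \
  --train_batch_size=1 \
  --max_grad_norm=1 \
  --output_dir="sd-pokemon-model" \
  --enable_xformers_memory_efficient_attention
```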

examples/textual_inversion/README.md

Lines changed: 3 additions & 0 deletions
@@ -124,3 +124,6 @@ python textual_inversion_flax.py \
   --output_dir="textual_inversion_cat"
 ```
 It should be at least 70% faster than the PyTorch script with the same configuration.
+
+### Training with xformers:
+You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and passing the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation.
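And likewise for textual inversion; a hedged sketch against the PyTorch script (the script name and all arguments other than the flag are assumptions, not part of this commit):

```bash
# Sketch only: the xFormers flag appended to the PyTorch textual inversion command.
# All arguments other than the flag are assumed/illustrative.
accelerate launch textual_inversion.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --train_data_dir="path-to-cat-images" \
  --placeholder_token="<cat-toy>" \
  --initializer_token="toy" \
  --resolution=512 \
  --output_dir="textual_inversion_cat" \
  --enable_xformers_memory_efficient_attention
```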
