
Commit 6b185b6

pcuenca and anton-l authored
Update training and fine-tuning docs (#1020)
* Update training and fine-tuning docs.
* Update examples README.
* Update README.
* Add Flax fine-tuning section.
* Accept suggestion (Co-authored-by: Anton Lozhkov <[email protected]>)
* Accept suggestion (Co-authored-by: Anton Lozhkov <[email protected]>)

Co-authored-by: Anton Lozhkov <[email protected]>
1 parent 81b6fbf commit 6b185b6

File tree

8 files changed: +402 −15 lines changed


README.md

Lines changed: 21 additions & 4 deletions
@@ -182,9 +182,9 @@ image.save("astronaut_rides_horse.png")

### JAX/Flax

-To use StableDiffusion on TPUs and GPUs for faster inference you can leverage JAX/Flax.
+Diffusers offers a JAX / Flax implementation of Stable Diffusion for very fast inference. JAX shines especially on TPU hardware, because each TPU server has 8 accelerators working in parallel, but it runs great on GPUs too.

-Running the pipeline with default PNDMScheduler
+Running the pipeline with the default PNDMScheduler:

```python
import jax
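
# The diff only shows the first line of this snippet; the rest below is a sketch of a
# typical Flax Stable Diffusion inference flow, added so the example is self-contained.
# Details such as revision="bf16" are assumptions and may differ from the actual README.
import numpy as np
from flax.jax_utils import replicate
from flax.training.common_utils import shard
from diffusers import FlaxStableDiffusionPipeline

pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
)

prompt = "a photo of an astronaut riding a horse on mars"

# Replicate the parameters and shard the inputs so each device generates one image.
num_samples = jax.device_count()
prompt_ids = pipeline.prepare_inputs([prompt] * num_samples)
params = replicate(params)
prng_seed = jax.random.split(jax.random.PRNGKey(0), num_samples)
prompt_ids = shard(prompt_ids)

# jit=True compiles the generation loop once and runs it in parallel on all devices.
images = pipeline(prompt_ids, params, prng_seed, num_inference_steps=50, jit=True).images
images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```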

@@ -331,8 +331,25 @@ You can generate your own latents to reproduce results, or tweak your prompt on

For more details, check out [the Stable Diffusion notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb)
and have a look into the [release notes](https://github.com/huggingface/diffusers/releases/tag/v0.2.0).
-
-## Examples
+
+## Fine-Tuning Stable Diffusion
+
+Fine-tuning techniques make it possible to adapt Stable Diffusion to your own dataset, or add new subjects to it. These are some of the techniques supported in `diffusers`:
+
+- Textual Inversion. Capture novel concepts from a small number of example images and associate them with new "words" in the embedding space of the pipeline's text encoder. These special words can then be used within text prompts to achieve very fine-grained control of the resulting images. Please refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/textual_inversion) or [documentation](https://huggingface.co/docs/diffusers/training/text_inversion) to try it for yourself.
+
+- Dreambooth. Another technique to capture new concepts in Stable Diffusion. This method fine-tunes the UNet (and, optionally, also the text encoder) of the pipeline to achieve impressive results. Please refer to [our training examples](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) and [training report](https://wandb.ai/psuraj/dreambooth/reports/Dreambooth-Training-Analysis--VmlldzoyNzk0NDc3) for additional details and training recommendations.
+
+- Full Stable Diffusion fine-tuning. If you have a more sizable dataset with a specific look or style, you can fine-tune Stable Diffusion so that it outputs images following those examples. This was the approach taken to create [a Pokémon Stable Diffusion model](https://huggingface.co/justinpinkney/pokemon-stable-diffusion) (by Justin Pinkney / Lambda Labs) and [a Japanese-specific version of Stable Diffusion](https://huggingface.co/spaces/rinna/japanese-stable-diffusion) (by [Rinna Co.](https://github.com/rinnakk/japanese-stable-diffusion/) and others). You can start at [our text-to-image fine-tuning example](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) and go from there.
+
+## Stable Diffusion Community Pipelines
+
+The release of Stable Diffusion as an open source model has fostered a lot of interesting ideas and experimentation. Our [Community Examples folder](https://github.com/huggingface/diffusers/tree/main/examples/community) contains many ideas worth exploring, like interpolating to create animated videos, using CLIP Guidance for additional prompt fidelity, term weighting, and much more! Take a look and [contribute your own](https://huggingface.co/docs/diffusers/using-diffusers/custom_pipelines).
+
+## Other Examples

There are many ways to try running Diffusers! Here we outline code-focused tools (primarily using `DiffusionPipeline`s and Google Colab) and interactive web-tools.

docs/source/_toctree.yml

Lines changed: 4 additions & 2 deletions
@@ -46,9 +46,11 @@
   - local: training/unconditional_training
     title: "Unconditional Image Generation"
   - local: training/text_inversion
-    title: "Text Inversion"
+    title: "Textual Inversion"
+  - local: training/dreambooth
+    title: "Dreambooth"
   - local: training/text2image
-    title: "Text-to-image"
+    title: "Text-to-image fine-tuning"
   title: "Training"
 - sections:
   - local: conceptual/stable_diffusion
docs/source/training/dreambooth.mdx

Lines changed: 240 additions & 0 deletions

@@ -0,0 +1,240 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# DreamBooth fine-tuning example

[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject.

![Dreambooth examples from the project's blog](https://dreambooth.github.io/DreamBooth_files/teaser_static.jpg)
_Dreambooth examples from the [project's blog](https://dreambooth.github.io)._

The [Dreambooth training script](https://github.com/huggingface/diffusers/tree/main/examples/dreambooth) shows how to implement this training procedure on a pre-trained Stable Diffusion model.

<Tip warning={true}>

<!-- TODO: replace with our blog when it's done -->

Dreambooth fine-tuning is very sensitive to hyperparameters and easy to overfit. We recommend you take a look at our [in-depth analysis](https://wandb.ai/psuraj/dreambooth/reports/Dreambooth-Training-Analysis--VmlldzoyNzk0NDc3) with recommended settings for different subjects, and go from there.

</Tip>

## Training locally

### Installing the dependencies

Before running the scripts, make sure to install the library's training dependencies. We also recommend installing `diffusers` from the `main` GitHub branch.

```bash
pip install git+https://github.com/huggingface/diffusers
pip install -U -r diffusers/examples/dreambooth/requirements.txt
```

Then initialize and configure a [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with:

```bash
accelerate config
```

You need to accept the model license before downloading or using the weights. In this example we'll use model version `v1-4`, so you'll need to visit [its card](https://huggingface.co/CompVis/stable-diffusion-v1-4), read the license and tick the checkbox if you agree.

You have to be a registered user on the 🤗 Hugging Face Hub, and you'll also need an access token for the code to work. For more information on access tokens, please refer to [this section of the documentation](https://huggingface.co/docs/hub/security-tokens).

Run the following command to authenticate your token:

```bash
huggingface-cli login
```
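
If you prefer working in a notebook (in Colab, for example), the same authentication can be done in Python with `notebook_login` from `huggingface_hub`; a minimal sketch:

```python
# Equivalent to `huggingface-cli login`, but usable from a notebook cell.
from huggingface_hub import notebook_login

notebook_login()
```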

If you have already cloned the model repo locally, you won't need to go through these steps. Instead, you can pass the path of your local checkout to the training script and the weights will be loaded from there.
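
For example, assuming the weights were cloned to a local folder called `./stable-diffusion-v1-4` (a hypothetical path), you could point the script at it directly:

```bash
# Hypothetical local path; use wherever you cloned the stable-diffusion-v1-4 weights.
export MODEL_NAME="./stable-diffusion-v1-4"

# The training commands below stay exactly the same:
# --pretrained_model_name_or_path=$MODEL_NAME now resolves to the local folder,
# so no Hub download or authentication is needed.
```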

### Dog toy example

In this example we'll use [these images](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) to add a new concept to Stable Diffusion using the Dreambooth process. They will be our training data. Please download them and place them somewhere in your system.

Then you can launch the training script using:

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --max_train_steps=400
```

### Training with a prior-preserving loss

Prior preservation is used to avoid overfitting and language drift. Please refer to the paper to learn more about it if you are interested. For prior preservation, we use other images of the same class as part of the training process. The nice thing is that we can generate those images using the Stable Diffusion model itself! The training script will save the generated images to a local path we specify.

According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior preservation. 200-300 images work well for most cases.

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```

### Training on a 16GB GPU

With the help of gradient checkpointing and the 8-bit optimizer from [bitsandbytes](https://github.com/TimDettmers/bitsandbytes), it's possible to train DreamBooth on a 16GB GPU.

```bash
pip install bitsandbytes
```

Then pass the `--use_8bit_adam` option to the training script.

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 --gradient_checkpointing \
  --use_8bit_adam \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```

### Fine-tune the text encoder in addition to the UNet

The script also allows fine-tuning the `text_encoder` along with the `unet`. It has been observed experimentally that this gives much better results, especially on faces. Please refer to [our report](https://wandb.ai/psuraj/dreambooth/reports/Dreambooth-Training-Analysis--VmlldzoyNzk0NDc3) for more details.

To enable this option, pass the `--train_text_encoder` argument to the training script.

<Tip>
Training the text encoder requires additional memory, so training won't fit on a 16GB GPU. You'll need at least 24GB VRAM to use this option.
</Tip>

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --train_text_encoder \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --use_8bit_adam \
  --gradient_checkpointing \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800
```

### Training on an 8 GB GPU

Using [DeepSpeed](https://www.deepspeed.ai/) it's even possible to offload some tensors from VRAM to either CPU or NVMe, allowing training to proceed with less GPU memory.

DeepSpeed needs to be enabled with `accelerate config`. During configuration, answer yes to "Do you want to use DeepSpeed?". Combining DeepSpeed stage 2, fp16 mixed precision, and offloading both the model parameters and the optimizer state to CPU, it's possible to train on under 8 GB VRAM. The drawback is that this requires more system RAM (about 25 GB). See [the DeepSpeed documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more configuration options.
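
As a rough illustration, an Accelerate configuration along these lines combines ZeRO stage 2, fp16 and CPU offloading. This is only a sketch: the exact keys and defaults depend on your 🤗 Accelerate version, so prefer answering the interactive `accelerate config` questions.

```yaml
# Sketch of ~/.cache/huggingface/accelerate/default_config.yaml after `accelerate config`.
# Key names follow 🤗 Accelerate's DeepSpeed plugin and may differ between versions.
compute_environment: LOCAL_MACHINE
distributed_type: DEEPSPEED
mixed_precision: fp16
num_processes: 1
deepspeed_config:
  zero_stage: 2
  offload_optimizer_device: cpu
  offload_param_device: cpu
  gradient_accumulation_steps: 1
```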

Changing the default Adam optimizer to DeepSpeed's special version of Adam, `deepspeed.ops.adam.DeepSpeedCPUAdam`, gives a substantial speedup, but enabling it requires the system's CUDA toolchain version to be the same as the one installed with PyTorch. 8-bit optimizers don't seem to be compatible with DeepSpeed at the moment.

```bash
export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="path_to_training_images"
export CLASS_DIR="path_to_class_images"
export OUTPUT_DIR="path_to_saved_model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="a photo of sks dog" \
  --class_prompt="a photo of dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --sample_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --max_train_steps=800 \
  --mixed_precision=fp16
```

## Inference

Once you have trained a model, inference can be done using the `StableDiffusionPipeline`, by simply indicating the path where the model was saved. Make sure that your prompts include the special `identifier` used during training (`sks` in the previous examples).

```python
from diffusers import StableDiffusionPipeline
import torch

model_id = "path_to_saved_model"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A photo of sks dog in a bucket"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]

image.save("dog-bucket.png")
```
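
To get a quick sense of how well the new concept was learned, it can help to generate a few images at once and view them side by side. A minimal sketch that reuses the `pipe` object from the snippet above (the `image_grid` helper is illustrative, not part of `diffusers`):

```python
from PIL import Image

def image_grid(imgs, rows, cols):
    # Paste the individual PIL images into one rows x cols grid.
    w, h = imgs[0].size
    grid = Image.new("RGB", size=(cols * w, rows * h))
    for i, img in enumerate(imgs):
        grid.paste(img, box=(i % cols * w, i // cols * h))
    return grid

# The pipeline accepts a list of prompts and returns one image per prompt.
images = pipe(["A photo of sks dog in a bucket"] * 4, num_inference_steps=50, guidance_scale=7.5).images
image_grid(images, rows=1, cols=4).save("dog-bucket-grid.png")
```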

docs/source/training/overview.mdx

Lines changed: 5 additions & 3 deletions
@@ -12,7 +12,7 @@ specific language governing permissions and limitations under the License.

# 🧨 Diffusers Training Examples

-Diffusers examples are a collection of scripts to demonstrate how to effectively use the `diffusers` library
+Diffusers training examples are a collection of scripts to demonstrate how to effectively use the `diffusers` library
for a variety of use cases.

**Note**: If you are looking for **official** examples on how to use `diffusers` for inference,
@@ -36,13 +36,15 @@ Training examples show how to pretrain or fine-tune diffusion models for a varie
- [Unconditional Training](./unconditional_training)
- [Text-to-Image Training](./text2image)
- [Text Inversion](./text_inversion)
+- [Dreambooth](./dreambooth)

| Task | 🤗 Accelerate | 🤗 Datasets | Colab
|---|---|:---:|:---:|
| [**Unconditional Image Generation**](./unconditional_training) | | | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
-| [**Text-to-Image**](./text2image) | - | - |
-| [**Text-Inversion**](./text_inversion) | | | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)
+| [**Text-to-Image fine-tuning**](./text2image) | | |
+| [**Textual Inversion**](./text_inversion) | | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb)
+| [**Dreambooth**](./dreambooth) | | - | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb)

## Community