
Commit f96b760

[docs] Fix Colab notebook cells (#3777)
fix colab notebook cells
1 parent 7761b89 commit f96b760

17 files changed: +76 −58 lines

docs/source/en/quicktour.mdx

Lines changed: 3 additions & 2 deletions
@@ -32,8 +32,9 @@ The quicktour is a simplified version of the introductory 🧨 Diffusers [notebo
 
 Before you begin, make sure you have all the necessary libraries installed:
 
-```bash
-!pip install --upgrade diffusers accelerate transformers
+```py
+# uncomment to install the necessary libraries in Colab
+#!pip install --upgrade diffusers accelerate transformers
 ```
 
 - [🤗 Accelerate](https://huggingface.co/docs/accelerate/index) speeds up model loading for inference and training.
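The pages touched by this commit now rely on readers manually uncommenting the install line when they are in Colab. An alternative (not part of this commit) is to detect the Colab runtime programmatically and install only there; the `IN_COLAB` flag and `install_if_colab` helper below are hypothetical names, a sketch rather than the docs' actual approach:

```python
import importlib.util
import subprocess
import sys

# Detect Google Colab by probing for the google.colab module,
# which only exists inside that runtime.
IN_COLAB = importlib.util.find_spec("google.colab") is not None

def install_if_colab(*packages: str) -> bool:
    """Upgrade-install the given packages, but only when running in Colab."""
    if not IN_COLAB:
        return False
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--upgrade", *packages]
    )
    return True

install_if_colab("diffusers", "accelerate", "transformers")
```

This avoids the uncomment step entirely, at the cost of a few extra lines in every notebook cell, which is presumably why the docs keep the simpler commented-out form.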

docs/source/en/training/dreambooth.mdx

Lines changed: 0 additions & 2 deletions
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # DreamBooth
 
-[[open-in-colab]]
-
 [DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text-to-image models like Stable Diffusion given just a few (3-5) images of a subject. It allows the model to generate contextualized images of the subject in different scenes, poses, and views.
 
 ![Dreambooth examples from the project's blog](https://dreambooth.github.io/DreamBooth_files/teaser_static.jpg)

docs/source/en/training/lora.mdx

Lines changed: 0 additions & 2 deletions
@@ -12,8 +12,6 @@ specific language governing permissions and limitations under the License.
 
 # Low-Rank Adaptation of Large Language Models (LoRA)
 
-[[open-in-colab]]
-
 <Tip warning={true}>
 
 Currently, LoRA is only supported for the attention layers of the [`UNet2DConditionalModel`]. We also

docs/source/en/training/text_inversion.mdx

Lines changed: 0 additions & 2 deletions
@@ -14,8 +14,6 @@ specific language governing permissions and limitations under the License.
 
 # Textual Inversion
 
-[[open-in-colab]]
-
 [Textual Inversion](https://arxiv.org/abs/2208.01618) is a technique for capturing novel concepts from a small number of example images. While the technique was originally demonstrated with a [latent diffusion model](https://github.com/CompVis/latent-diffusion), it has since been applied to other model variants like [Stable Diffusion](https://huggingface.co/docs/diffusers/main/en/conceptual/stable_diffusion). The learned concepts can be used to better control the images generated from text-to-image pipelines. It learns new "words" in the text encoder's embedding space, which are used within text prompts for personalized image generation.
 
 ![Textual Inversion example](https://textual-inversion.github.io/static/images/editing/colorful_teapot.JPG)

docs/source/en/tutorials/basic_training.mdx

Lines changed: 3 additions & 2 deletions
@@ -26,8 +26,9 @@ This tutorial will teach you how to train a [`UNet2DModel`] from scratch on a su
 
 Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install [TensorBoard](https://www.tensorflow.org/tensorboard) to visualize training metrics (you can also use [Weights & Biases](https://docs.wandb.ai/) to track your training).
 
-```bash
-!pip install diffusers[training]
+```py
+# uncomment to install the necessary libraries in Colab
+#!pip install diffusers[training]
 ```
 
 We encourage you to share your model with the community, and in order to do that, you'll need to login to your Hugging Face account (create one [here](https://hf.co/join) if you don't already have one!). You can login from a notebook and enter your token when prompted:
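The login step the context line refers to is provided by `huggingface_hub` (which the `diffusers[training]` extra pulls in). A minimal sketch of that cell follows; the actual call is commented out here because it prompts interactively for a token, and the token-settings URL is an assumption, not taken from the diff:

```python
# Mirrors the notebook login cell that follows in the tutorial.
# The call is commented out because it prompts interactively:
#
# from huggingface_hub import notebook_login
# notebook_login()  # paste a Hugging Face access token when prompted

# Assumed location for creating an access token (not stated in the diff):
HF_TOKEN_URL = "https://hf.co/settings/tokens"
print(HF_TOKEN_URL)
```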

docs/source/en/using-diffusers/custom_pipeline_examples.mdx

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.
 
 # Community pipelines
 
+[[open-in-colab]]
+
 > **For more information about community pipelines, please have a look at [this issue](https://github.com/huggingface/diffusers/issues/841).**
 
 **Community** examples consist of both inference and training examples that have been added by the community.

docs/source/en/using-diffusers/custom_pipeline_overview.mdx

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.
 
 # Load community pipelines
 
+[[open-in-colab]]
+
 Community pipelines are any [`DiffusionPipeline`] class that are different from the original implementation as specified in their paper (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://arxiv.org/abs/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.
 
 There are many cool community pipelines like [Speech to Image](https://github.com/huggingface/diffusers/tree/main/examples/community#speech-to-image) or [Composable Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#composable-stable-diffusion), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community).

docs/source/en/using-diffusers/img2img.mdx

Lines changed: 3 additions & 2 deletions
@@ -18,8 +18,9 @@ The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initia
 
 Before you begin, make sure you have all the necessary libraries installed:
 
-```bash
-!pip install diffusers transformers ftfy accelerate
+```py
+# uncomment to install the necessary libraries in Colab
+#!pip install diffusers transformers ftfy accelerate
 ```
 
 Get started by creating a [`StableDiffusionImg2ImgPipeline`] with a pretrained Stable Diffusion model like [`nitrosocke/Ghibli-Diffusion`](https://huggingface.co/nitrosocke/Ghibli-Diffusion).

docs/source/en/using-diffusers/loading.mdx

Lines changed: 2 additions & 0 deletions
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.
 
 # Load pipelines, models, and schedulers
 
+[[open-in-colab]]
+
 Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the [`DiffusionPipeline`] to wrap the complexity of the entire diffusion system into an easy-to-use API, while remaining flexible enough to be adapted for other use cases, such as loading each component individually as building blocks to assemble your own diffusion system.
 
 Everything you need for inference or training is accessible with the `from_pretrained()` method.

docs/source/en/using-diffusers/other-formats.mdx

Lines changed: 5 additions & 2 deletions
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.
 
 # Load different Stable Diffusion formats
 
+[[open-in-colab]]
+
 Stable Diffusion models are available in different formats depending on the framework they're trained and saved with, and where you download them from. Converting these formats for use in 🤗 Diffusers allows you to use all the features supported by the library, such as [using different schedulers](schedulers) for inference, [building your custom pipeline](write_own_pipeline), and a variety of techniques and methods for [optimizing inference speed](./optimization/opt_overview).
 
 <Tip>
@@ -141,8 +143,9 @@ pipeline.scheduler = UniPCMultistepScheduler.from_config(pipeline.scheduler.conf
 
 Download a LoRA checkpoint from Civitai; this example uses the [Howls Moving Castle,Interior/Scenery LoRA (Ghibli Stlye)](https://civitai.com/models/14605?modelVersionId=19998) checkpoint, but feel free to try out any LoRA checkpoint!
 
-```bash
-!wget https://civitai.com/api/download/models/19998 -O howls_moving_castle.safetensors
+```py
+# uncomment to download the safetensor weights
+#!wget https://civitai.com/api/download/models/19998 -O howls_moving_castle.safetensors
 ```
 
 Load the LoRA checkpoint into the pipeline with the [`~loaders.LoraLoaderMixin.load_lora_weights`] method:
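The `wget` cell in that hunk depends on the shell being available, which is one reason the commit comments it out. The same download can be sketched in pure Python with the standard library; the URL and output filename below come from the diff, and the download call is left commented just like the notebook cell so nothing is fetched by accident:

```python
from urllib.request import urlretrieve

# URL and output name taken from the cell in the diff above
url = "https://civitai.com/api/download/models/19998"
output = "howls_moving_castle.safetensors"

# uncomment to actually download the safetensor weights
# urlretrieve(url, output)
print(f"would save {url} to {output}")
```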

0 commit comments