@@ -29,10 +29,11 @@ This model was contributed by the community contributor [HimariO](https://github
 | Pipeline | Tasks | Demo
 |---|---|:---:|
 | [StableDiffusionAdapterPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_adapter.py) | *Text-to-Image Generation with T2I-Adapter Conditioning* | -
+| [StableDiffusionXLAdapterPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_xl_adapter.py) | *Text-to-Image Generation with T2I-Adapter Conditioning on StableDiffusion-XL* | -

-## Usage example
+## Usage example with the base model of StableDiffusion-1.4/1.5

-In the following we give a simple example of how to use a *T2IAdapter* checkpoint with Diffusers for inference.
+In the following we give a simple example of how to use a *T2IAdapter* checkpoint with Diffusers for inference based on StableDiffusion-1.4/1.5.
 All adapters use the same pipeline.

 1. Images are first converted into the appropriate *control image* format.
@@ -93,6 +94,62 @@ out_image = pipe(

 ![img](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/t2i-adapter/color_output.png)

+## Usage example with the base model of StableDiffusion-XL
+
+In the following we give a simple example of how to use a *T2IAdapter* checkpoint with Diffusers for inference based on StableDiffusion-XL.
+All adapters use the same pipeline.
+
+1. Images are first converted into the appropriate *control image* format.
+2. The *control image* and *prompt* are passed to the [`StableDiffusionXLAdapterPipeline`].
+
+Let's have a look at a simple example using the [Sketch Adapter](https://huggingface.co/Adapter/t2iadapter/tree/main/sketch_sdxl_1.0).
+
+```python
+from diffusers.utils import load_image
+
+# download the example sketch and convert it to a single-channel grayscale control image
+sketch_image = load_image("https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png").convert("L")
+```
+
+![img](https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch.png)
+
+Then, create the adapter pipeline:
+
+```py
+import torch
+from diffusers import (
+    T2IAdapter,
+    StableDiffusionXLAdapterPipeline,
+    DDPMScheduler,
+)
+
+model_id = "stabilityai/stable-diffusion-xl-base-1.0"
+# load the sketch adapter weights from their subfolder of the adapter repository
+adapter = T2IAdapter.from_pretrained(
+    "Adapter/t2iadapter", subfolder="sketch_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
+)
+scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")
+
+pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
+    model_id, adapter=adapter, safety_checker=None, torch_dtype=torch.float16, variant="fp16", scheduler=scheduler
+)
+
+pipe.to("cuda")
+```
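+
+If GPU memory is tight, the memory helpers listed in the API reference below can optionally be enabled on the pipeline before inference. A minimal sketch (assumes the `xformers` package is installed):
+
+```py
+# optional: reduce GPU memory usage before running inference
+pipe.enable_xformers_memory_efficient_attention()  # requires xformers to be installed
+pipe.enable_vae_slicing()
+```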
+
+Finally, pass the prompt and control image to the pipeline:
+
+```py
+# fix the random seed, so you will get the same result as the example
+generator = torch.Generator().manual_seed(42)
+
+sketch_image_out = pipe(
+    prompt="a photo of a dog in real world, high quality",
+    negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
+    image=sketch_image,
+    generator=generator,
+    guidance_scale=7.5,
+).images[0]
+```
+
+![img](https://huggingface.co/Adapter/t2iadapter/resolve/main/sketch_output.png)
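+
+How strongly the adapter steers the result can be tuned with the pipeline's `adapter_conditioning_scale` argument (defaults to 1.0). A quick sketch reusing `pipe` and `sketch_image` from above; the value 0.8 is just an illustrative choice:
+
+```py
+# lower adapter_conditioning_scale to weaken the sketch's influence on the output
+softer_image_out = pipe(
+    prompt="a photo of a dog in real world, high quality",
+    negative_prompt="extra digit, fewer digits, cropped, worst quality, low quality",
+    image=sketch_image,
+    generator=torch.Generator().manual_seed(42),
+    guidance_scale=7.5,
+    adapter_conditioning_scale=0.8,
+).images[0]
+```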

 ## Available checkpoints

@@ -113,6 +170,9 @@ Non-diffusers checkpoints can be found under [TencentARC/T2I-Adapter](https://hu
 | [TencentARC/t2iadapter_depth_sd15v2](https://huggingface.co/TencentARC/t2iadapter_depth_sd15v2) ||
 | [TencentARC/t2iadapter_sketch_sd15v2](https://huggingface.co/TencentARC/t2iadapter_sketch_sd15v2) ||
 | [TencentARC/t2iadapter_zoedepth_sd15v1](https://huggingface.co/TencentARC/t2iadapter_zoedepth_sd15v1) ||
+| [Adapter/t2iadapter, subfolder='sketch_sdxl_1.0'](https://huggingface.co/Adapter/t2iadapter/tree/main/sketch_sdxl_1.0) ||
+| [Adapter/t2iadapter, subfolder='canny_sdxl_1.0'](https://huggingface.co/Adapter/t2iadapter/tree/main/canny_sdxl_1.0) ||
+| [Adapter/t2iadapter, subfolder='openpose_sdxl_1.0'](https://huggingface.co/Adapter/t2iadapter/tree/main/openpose_sdxl_1.0) ||
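+
+Unlike the per-model TencentARC repositories, the SDXL checkpoints above live in subfolders of a single [Adapter/t2iadapter](https://huggingface.co/Adapter/t2iadapter) repository, so each is loaded with the `subfolder` argument. A sketch mirroring the pipeline example above:
+
+```py
+# e.g. load the canny SDXL adapter listed above via its subfolder
+canny_adapter = T2IAdapter.from_pretrained(
+    "Adapter/t2iadapter", subfolder="canny_sdxl_1.0", torch_dtype=torch.float16, adapter_type="full_adapter_xl"
+)
+```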

 ## Combining multiple adapters

@@ -185,3 +245,14 @@ However, T2I-Adapter performs slightly worse than ControlNet.
   - disable_vae_slicing
   - enable_xformers_memory_efficient_attention
   - disable_xformers_memory_efficient_attention
+
+## StableDiffusionXLAdapterPipeline
+[[autodoc]] StableDiffusionXLAdapterPipeline
+  - all
+  - __call__
+  - enable_attention_slicing
+  - disable_attention_slicing
+  - enable_vae_slicing
+  - disable_vae_slicing
+  - enable_xformers_memory_efficient_attention
+  - disable_xformers_memory_efficient_attention