```py
image = pipeline(
    prompt='best quality, high quality, wearing sunglasses',
    ip_adapter_image=image,
    negative_prompt="monochrome, lowres, bad anatomy, worst quality, low quality",
    num_inference_steps=50,
    generator=generator,
).images[0]
```
### IP-Adapter Plus
IP-Adapter relies on an image encoder to generate image features. If the IP-Adapter repository contains an `image_encoder` subfolder, the image encoder is automatically loaded and registered to the pipeline. Otherwise, you'll need to explicitly load the image encoder with a [`~transformers.CLIPVisionModelWithProjection`] model and pass it to the pipeline.
This is the case for *IP-Adapter Plus* checkpoints which use the ViT-H image encoder.
```py
from transformers import CLIPVisionModelWithProjection
```
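To illustrate the full pattern the paragraph above describes, here is a minimal sketch of loading the ViT-H image encoder explicitly and handing it to the pipeline. The repository `h94/IP-Adapter`, the subfolder names, the weight file `ip-adapter-plus_sdxl_vit-h.safetensors`, and the SDXL base model are assumptions for illustration; adjust them to match the checkpoint you're using.

```py
import torch
from diffusers import AutoPipelineForText2Image
from transformers import CLIPVisionModelWithProjection

# Explicitly load the ViT-H image encoder, since the adapter repository
# may not ship an `image_encoder` subfolder next to the adapter weights
# (repo and subfolder names here are assumed for illustration).
image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16,
)

# Pass the encoder to the pipeline so IP-Adapter can use it.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    torch_dtype=torch.float16,
).to("cuda")

# Load an IP-Adapter Plus checkpoint that expects the ViT-H encoder.
pipeline.load_ip_adapter(
    "h94/IP-Adapter",
    subfolder="sdxl_models",
    weight_name="ip-adapter-plus_sdxl_vit-h.safetensors",
)
```

Once the adapter is loaded, generation works the same as in the earlier example: pass `ip_adapter_image` alongside your prompt when calling the pipeline.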