From f8619d96ed79ce105573875422475cdd37bd5a32 Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Tue, 28 Feb 2023 21:15:27 +0530
Subject: [PATCH 01/16] add a documentation page for evaluating diffusion models.

---
 docs/source/en/_toctree.yml              |   2 +
 docs/source/en/conceptual/evaluation.mdx | 505 +++++++++++++++++++++++
 2 files changed, 507 insertions(+)
 create mode 100644 docs/source/en/conceptual/evaluation.mdx

diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
index cfbdac08a3fb..dfdeae0e2197 100644
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -91,6 +91,8 @@
     title: How to contribute?
   - local: conceptual/ethical_guidelines
     title: Diffusers' Ethical Guidelines
+  - local: conceptual/evaluation
+    title: Evaluating Diffusion Models
   title: Conceptual Guides
- sections:
  - sections:

diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx
new file mode 100644
index 000000000000..47eb2f1bad73
--- /dev/null
+++ b/docs/source/en/conceptual/evaluation.mdx
@@ -0,0 +1,505 @@

# Evaluating Diffusion Models

Evaluation of generative models like [Stable Diffusion](https://huggingface.co/docs/diffusers/stable_diffusion) is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other?

Qualitative evaluation of such models can often be error-prone and might incorrectly influence a decision. However, solely relying on quantitative metrics doesn't give a full picture either. For example, a generative model might provide a lower FID score, but the generated images might still lack quality. So, usually, a combination of both qualitative and quantitative evaluations provides a stronger signal when choosing one model over the other.

In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside `diffusers`.

The methods shown in this document can also be used to evaluate different [noise schedulers](https://huggingface.co/docs/diffusers/main/en/api/schedulers/overview) keeping the underlying generation model fixed.

## Scenarios

We cover Diffusion models with the following pipelines:

- Text-guided image generation (such as the [`StableDiffusionPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/text2img)).
- Text-guided image generation, additionally conditioned on an input image (such as the [`StableDiffusionImg2ImgPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/img2img), and [`StableDiffusionInstructPix2PixPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix)).
- Class-conditioned image generation models (such as the [`DiTPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/dit)).

## Qualitative

Qualitative evaluation typically involves humans that assess the quality of the generated images across various aspects such as compositionality, image-text alignment, spatial relations, etc. To have a uniform ground to assess different models on such aspects, we need to start with some common prompts that allow for varied coverage.
Two notable benchmarks in this area are DrawBench and PartiPrompts introduced by [Imagen](https://imagen.research.google/) and [Parti](https://parti.research.google/) respectively. + +From the [official Parti website](https://parti.research.google/): + +> PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects. + +![parti-prompts](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts.png) + +PartiPrompts has the following columns: + +- Prompt +- Category of the prompt (such as “Abstract”, “World Knowledge”, etc.) +- Challenge reflecting the difficulty (such as “Basic”, “Complex”, “Writing & Symbols”, etc.) + +These benchmarks allow for side-by-side human evaluation of different image generation models. Let’s see how we can use `diffusers` on a couple of PartiPrompts. + +Here are some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols: + +```python +from datasets import load_dataset + +prompts = load_dataset("nateraw/parti-prompts", split="train") +prompts = prompts.shuffle() +samples_prompts = [prompts[i]["Prompt"] for i in range(5)] + +samples_prompts = [ + "a corgi", + "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", + "a car with no windows", + "a cube made of porcupine", + 'The saying "BE EXCELLENT TO EACH OTHER" written on a red brick wall with a graffiti image of a green alien wearing a tuxedo. A yellow fire hydrant is on a sidewalk in the foreground.', +] +``` + +Now we can use these prompts to generate some images using Stable Diffusion ([v1-4 checkpoint](https://huggingface.co/CompVis/stable-diffusion-v1-4)): + +```python +import torch + +seed = 0 +generator = torch.Generator(device).manual_seed(seed) + +images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images +``` + +![parti-prompts-14](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-14.png) + +We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://www.notion.so/Evaluating-Diffusion-Models-1bda120305cb43ddba2fcebaff5497fc)), yields: + +![parti-prompts-15](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-15.png) + +Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For +more details on these benchmarks, refer to their respective papers. + +## Quantitative + +In this section, we focus on: + +- CLIP score +- Clip directional similarity +- FID + +## Text-guided image generation + +One commonly used metric here is the [CLIP score](https://arxiv.org/abs/2104.08718). It measures how well-aligned a pair of image and caption is. 
The higher the CLIP score, the better it is 🔼 + +Let's first load a `StableDiffusionPipeline`: + +```python +from diffusers import StableDiffusionPipeline +import torch + +model_ckpt = "CompVis/stable-diffusion-v1-4" +device = "cuda" +weight_dtype = torch.float16 +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=weight_dtype).to(device) +``` + +Generate some images with multiple prompts: + +```python +prompts = [ + "a photo of an astronaut riding a horse on mars", + "A high tech solarpunk utopia in the Amazon rainforest", + "A pikachu fine dining with a view to the Eiffel Tower", + "A mecha robot in a favela in expressionist style", + "an insect robot preparing a delicious meal", + "A small cabin on top of a snowy mountain in the style of Disney, artstation", +] + +images = sd_pipeline(prompts, num_images_per_prompt=1, output_type="numpy").images + +print(images.shape) +# (6, 512, 512, 3) +``` + +And then, we calculate the CLIP score. + +```python +from torchmetrics.functional.multimodal import clip_score +from functools import partial + +clip_score_fn = partial(clip_score, model_name_or_path="openai/clip-vit-base-patch16") + + +def calculate_clip_score(images, prompts): + images_int = (images * 255).astype("uint8") + clip_score = clip_score_fn(torch.from_numpy(images_int).permute(0, 3, 1, 2), prompts).detach() + return round(float(clip_score), 4) + + +sd_clip_score = calculate_clip_score(images, prompts) +print(f"CLIP score: {sd_clip_score}") +# CLIP score: 35.7038 +``` + +In the above example, we generated one image per prompt. If we generated multiple images per prompt, we could uniformly sample just one from the pool of generated images. + +Now, if we wanted to compare two checkpoints compatible with the `StableDiffusionPipeline` we should pass a generator while calling the pipeline: + +```python +seed = 0 +generator = torch.Generator(device).manual_seed(seed) + +images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images +``` + +```python +model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5" +sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device) + +images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images +``` + +```python +sd_clip_score_1_4 = calculate_clip_score(images, prompts) +print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") + +sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) +print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +``` + +It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. + +## Image-conditioned text-to-image generation + +In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the `[StableDiffusionInstructPix2PixPipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix)`, as an example. It takes an edit instruction as an input prompt and an input image to be edited. 
+ +Here is one example: + +![edit-instruction](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-instruction.png) + +One strategy to evaluate such a model is to measure the consistency of the change between the two images (in [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) space) with the change between the two image captions (as shown in [CLIP-Guided Domain Adaptation of Image Generators](https://arxiv.org/abs/2108.00946)). This is referred to as the "**CLIP directional similarity**". + +- Caption 1 corresponds to the input image (image 1) that is to be edited. +- Caption 2 corresponds to the edited image (image 2). It should reflect the edit instruction. + +Following is a pictorial overview: + +![edit-consistency](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-consistency.png) + +We have prepared a mini dataset to implement this metric. Let's first load the dataset. + +```python +from datasets import load_dataset + +dataset = load_dataset("sayakpaul/instructpix2pix-demo", split="train") +dataset.features +``` + +```bash +{'input': Value(dtype='string', id=None), + 'edit': Value(dtype='string', id=None), + 'output': Value(dtype='string', id=None), + 'image': Image(decode=True, id=None)} +``` + +Here we have: + +- `input` is a caption corresponding to the `image`. +- `edit` denotes the edit instruction. +- `output` denotes the modified caption reflecting the `edit` instruction. + +Let's take a look at a sample. + +```python +idx = 0 +print(f"Original caption: {dataset[idx]['input']}") +print(f"Edit instruction: {dataset[idx]['edit']}") +print(f"Modified caption: {dataset[idx]['output']}") +``` + +```bash +Original caption: 2. FAROE ISLANDS: An archipelago of 18 mountainous isles in the North Atlantic Ocean between Norway and Iceland, the Faroe Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +Edit instruction: make the isles all white marble +Modified caption: 2. WHITE MARBLE ISLANDS: An archipelago of 18 mountainous white marble isles in the North Atlantic Ocean between Norway and Iceland, the White Marble Islands has 'everything you could hope for', according to Big 7 Travel. It boasts 'crystal clear waterfalls, rocky cliffs that seem to jut out of nowhere and velvety green hills' +``` + +And here is the image: + +```python +dataset[idx]["image"] +``` + +![edit-dataset](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/edit-dataset.png) + +We will first edit the images of our dataset with the edit instruction and compute the directional similarity. 
+ +Let's first load the `StableDiffusionInstructPix2PixPipeline`: + +```python +from diffusers import StableDiffusionInstructPix2PixPipeline + +instruct_pix2pix_model_id = "timbrooks/instruct-pix2pix" +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + instruct_pix2pix_model_id, torch_dtype=weight_dtype +).to(device) +``` + +Now, we perform the edits: + +```python +import numpy as np + + +def edit_image(input_image, instruction): + image = instruct_pix2pix_pipeline( + instruction, + image=input_image, + output_type="numpy", + generator=generator, + ).images[0] + return image + + +input_images = [] +original_captions = [] +modified_captions = [] +edited_images = [] + +for idx in range(len(dataset)): + input_image = dataset[idx]["image"] + edit_instruction = dataset[idx]["edit"] + edited_image = edit_image(input_image, edit_instruction) + + input_images.append(np.array(input_image)) + original_captions.append(dataset[idx]["input"]) + modified_captions.append(dataset[idx]["output"]) + edited_images.append(edited_image) +``` + +To measure the directional similarity, we first load CLIP's image and text encoders. + +```python +from transformers import ( + CLIPTokenizer, + CLIPTextModelWithProjection, + CLIPVisionModelWithProjection, + CLIPImageProcessor, +) + +clip_id = "openai/clip-vit-large-patch14" +tokenizer = CLIPTokenizer.from_pretrained(clip_id) +text_encoder = CLIPTextModelWithProjection.from_pretrained(clip_id).to(device) +image_processor = CLIPImageProcessor.from_pretrained(clip_id) +image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device) +``` + +Notice that we are using a particular CLIP checkpoint, i.e., `openai/clip-vit-large-patch14`. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix#diffusers.StableDiffusionInstructPix2PixPipeline.text_encoder). 
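Spelled out, if \\(E_I\\) and \\(E_T\\) denote the CLIP image and text encoders we just loaded, the quantity we implement next is the cosine similarity between the change in image space and the change in caption space:

$$\text{CLIP directional similarity} = \cos\left(E_I(\text{image}_2) - E_I(\text{image}_1),\; E_T(\text{caption}_2) - E_T(\text{caption}_1)\right)$$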
Next, we prepare a PyTorch `nn.Module` to compute directional similarity:

```python
import torch.nn as nn
import torch.nn.functional as F


class DirectionalSimilarity(nn.Module):
    def __init__(self, tokenizer, text_encoder, image_processor, image_encoder):
        super().__init__()
        self.tokenizer = tokenizer
        self.text_encoder = text_encoder
        self.image_processor = image_processor
        self.image_encoder = image_encoder

    def preprocess_image(self, image):
        image = self.image_processor(image, return_tensors="pt")["pixel_values"]
        return {"pixel_values": image.to(device)}

    def tokenize_text(self, text):
        inputs = self.tokenizer(
            text,
            max_length=self.tokenizer.model_max_length,
            padding="max_length",
            truncation=True,
            return_tensors="pt",
        )
        return {"input_ids": inputs.input_ids.to(device)}

    def encode_image(self, image):
        preprocessed_image = self.preprocess_image(image)
        image_features = self.image_encoder(**preprocessed_image).image_embeds
        image_features = image_features / image_features.norm(dim=1, keepdim=True)
        return image_features

    def encode_text(self, text):
        tokenized_text = self.tokenize_text(text)
        text_features = self.text_encoder(**tokenized_text).text_embeds
        text_features = text_features / text_features.norm(dim=1, keepdim=True)
        return text_features

    def compute_directional_similarity(self, img_feat_one, img_feat_two, text_feat_one, text_feat_two):
        sim_direction = F.cosine_similarity(img_feat_two - img_feat_one, text_feat_two - text_feat_one)
        return sim_direction

    def forward(self, image_one, image_two, caption_one, caption_two):
        img_feat_one = self.encode_image(image_one)
        img_feat_two = self.encode_image(image_two)
        text_feat_one = self.encode_text(caption_one)
        text_feat_two = self.encode_text(caption_two)
        directional_similarity = self.compute_directional_similarity(
            img_feat_one, img_feat_two, text_feat_one, text_feat_two
        )
        return directional_similarity
```

Let's put `DirectionalSimilarity` to use now.

```python
dir_similarity = DirectionalSimilarity(tokenizer, text_encoder, image_processor, image_encoder)
scores = []

for i in range(len(input_images)):
    original_image = input_images[i]
    original_caption = original_captions[i]
    edited_image = edited_images[i]
    modified_caption = modified_captions[i]

    similarity_score = dir_similarity(original_image, edited_image, original_caption, modified_caption)
    scores.append(float(similarity_score.detach().cpu()))

print(f"CLIP directional similarity: {np.mean(scores)}")
# CLIP directional similarity: 0.0797976553440094
```

Like the CLIP score, the higher the CLIP directional similarity, the better it is.

It should be noted that the `StableDiffusionInstructPix2PixPipeline` exposes two arguments, namely, `image_guidance_scale` and `guidance_scale`, that let you control the quality of the final edited image. We encourage you to experiment with these two arguments and see their impact on the directional similarity.

We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do `F.cosine_similarity(img_feat_two, img_feat_one)`. For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score.
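A minimal sketch of that extension, reusing the `dir_similarity` module and the `input_images`/`edited_images` lists from above:

```python
# Image-image similarity: how much of the original image's semantics
# survive the edit. Reuses `dir_similarity` from the snippet above.
image_similarity_scores = []

for original_image, edited_image in zip(input_images, edited_images):
    img_feat_one = dir_similarity.encode_image(original_image)
    img_feat_two = dir_similarity.encode_image(edited_image)
    image_similarity = F.cosine_similarity(img_feat_two, img_feat_one)
    image_similarity_scores.append(float(image_similarity.detach().cpu()))

print(f"CLIP image-image similarity: {np.mean(image_similarity_scores)}")
```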
+ +We can use these metrics for similar pipelines such as the`[StableDiffusionPix2PixZeroPipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline)`. + +> Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. + +***Extending metrics like IS, FID, or KID can be difficult*** w***hen the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction.*** + +***Using the above metrics helps evaluate models that are class-conditioned. For example, [DiT](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview). It was pre-trained being conditioned on the ImageNet-1k classes.*** + +## Class-conditioned image generation + +Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the `[DiTPipeline](https://huggingface.co/docs/diffusers/api/pipelines/dit)`, which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. + +FID aims to measure how similar are two datasets of images. As per [this resource](https://mmgeneration.readthedocs.io/en/latest/quick_run.html#fid): + +> Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network. + +These two datasets are essentially the dataset of real images and the dataset of fake images (generated images in our case). FID is usually calculated with two large datasets. However, for this document, we will work with two mini datasets. + +Let's first download a few images from the ImageNet-1k training set: + +```python +from zipfile import ZipFile +import requests + + +def download(url, local_filepath): + r = requests.get(url) + with open(local_filepath, "wb") as f: + f.write(r.content) + return local_filepath + + +dummy_dataset_url = "https://hf.co/datasets/sayakpaul/sample-datasets/resolve/main/sample-imagenet-images.zip" +local_filepath = download(dummy_dataset_url, dummy_dataset_url.split("/")[-1]) + +with ZipFile(local_filepath, "r") as zipper: + zipper.extractall(".") +``` + +```python +from PIL import Image +import os + +dataset_path = "sample-imagenet-images" +image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_path)]) + +real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] +``` + +These images are from the following Imagenet-1k classes: "cassette_player", "chain_saw", "church", "gas_pump", "parachute", and "tench". 
Now that the images are loaded, let's apply some lightweight pre-processing on them to use them for FID calculation.

```python
from torchvision.transforms import functional as F


def preprocess_image(image):
    image = torch.tensor(image).unsqueeze(0)
    image = image.permute(0, 3, 1, 2) / 255.0
    return F.center_crop(image, (256, 256))


real_images = torch.cat([preprocess_image(image) for image in real_images])
print(real_images.shape)
# torch.Size([10, 3, 256, 256])
```

We now load the `DiTPipeline` to generate images conditioned on the above-mentioned classes.

```python
from diffusers import DiTPipeline, DPMSolverMultistepScheduler

dit_pipeline = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
dit_pipeline.scheduler = DPMSolverMultistepScheduler.from_config(dit_pipeline.scheduler.config)
dit_pipeline = dit_pipeline.to("cuda")

words = [
    "cassette player",
    "chainsaw",
    "chainsaw",
    "church",
    "gas pump",
    "gas pump",
    "gas pump",
    "parachute",
    "parachute",
    "tench",
]

class_ids = dit_pipeline.get_label_ids(words)
output = dit_pipeline(class_labels=class_ids, generator=generator, output_type="numpy")

fake_images = output.images
fake_images = torch.tensor(fake_images)
fake_images = fake_images.permute(0, 3, 1, 2)
print(fake_images.shape)
# torch.Size([10, 3, 256, 256])
```

Now, we can compute the FID using `[torchmetrics](https://torchmetrics.readthedocs.io/)`.

The lower the FID, the better it is. Several things can influence FID here:

- Number of images (both real and fake)
- Randomness induced in the diffusion process
- Number of inference steps in the diffusion process
- The scheduler being used in the diffusion process

For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result.

As a final step, let's visually inspect the `fake_images` and `real_images`, respectively.

<div class="flex grid grid-cols-2 gap-4">
    <div>
        <img src="..." alt="fake-images"/>
        <figcaption>Fake images.</figcaption>
    </div>
    <div>
        <img src="..." alt="real-images"/>
        <figcaption>Fake images.</figcaption>
    </div>
</div>
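As noted above, FID is best reported as an average over several runs. The snippet below is only a rough sketch of such a loop: the seeds and inference-step counts are arbitrary illustrative choices, and it assumes `dit_pipeline`, `class_ids`, and `real_images` from the earlier snippets are in scope.

```python
from torchmetrics.image.fid import FrechetInceptionDistance

fid_scores = []
for seed in [0, 1, 2]:  # illustrative seeds
    for num_inference_steps in [25, 50]:  # illustrative step counts
        generator = torch.manual_seed(seed)
        images = dit_pipeline(
            class_labels=class_ids,
            generator=generator,
            num_inference_steps=num_inference_steps,
            output_type="numpy",
        ).images
        fake = torch.from_numpy(images).permute(0, 3, 1, 2)

        # `normalize=True` tells torchmetrics the inputs are floats in [0, 1].
        fid = FrechetInceptionDistance(normalize=True)
        fid.update(real_images, real=True)
        fid.update(fake, real=False)
        fid_scores.append(float(fid.compute()))

print(f"Mean FID across runs: {np.mean(fid_scores)}")
```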
From 9e0f2fbd64d1d1c683100c5a6796d1ee6fb5e5b3 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 14:06:30 +0530 Subject: [PATCH 02/16] fix: checkpoint link. --- docs/source/en/conceptual/evaluation.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 47eb2f1bad73..6412c75c0bc6 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -77,7 +77,7 @@ images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generato ![parti-prompts-14](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-14.png) -We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://www.notion.so/Evaluating-Diffusion-Models-1bda120305cb43ddba2fcebaff5497fc)), yields: +We can also set `num_images_per_prompt` accordingly to compare different images for the same prompt. Running the same pipeline but with a different checkpoint ([v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)), yields: ![parti-prompts-15](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-15.png) From 3ec87584f4bcbb0de2bc80c9051fbfa3fe874da1 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 14:51:24 +0530 Subject: [PATCH 03/16] Apply suggestions from code review Co-authored-by: Patrick von Platen Co-authored-by: Kashif Rasul --- docs/source/en/conceptual/evaluation.mdx | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 6412c75c0bc6..b5b173b85157 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -14,7 +14,7 @@ specific language governing permissions and limitations under the License. Evaluation of generative models like [Stable Diffusion](https://huggingface.co/docs/diffusers/stable_diffusion) is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? -Qualitative evaluation of such models can often be error-prone and might incorrectly influence a decision. However, solely relying on quantitative metrics doesn't give a full picture either. For example, a generative model might provide a lower FID score, but the generated images might still lack quality. So, usually, a combination of both qualitative and quantitative evaluations provides a stronger signal when choosing one model over the other. +Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. However, solely relying on quantitative metrics doesn't give a full picture either. For example, a generative model might provide a lower FID score, but the generated images might still lack quality. So, usually, a combination of both qualitative and quantitative evaluations provides a stronger signal when choosing one model over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside `diffusers`. 
@@ -96,7 +96,7 @@ In this section, we focus on: One commonly used metric here is the [CLIP score](https://arxiv.org/abs/2104.08718). It measures how well-aligned a pair of image and caption is. The higher the CLIP score, the better it is 🔼 -Let's first load a `StableDiffusionPipeline`: +Let's first load a [`StableDiffusionPipeline`]: ```python from diffusers import StableDiffusionPipeline @@ -148,11 +148,11 @@ print(f"CLIP score: {sd_clip_score}") In the above example, we generated one image per prompt. If we generated multiple images per prompt, we could uniformly sample just one from the pool of generated images. -Now, if we wanted to compare two checkpoints compatible with the `StableDiffusionPipeline` we should pass a generator while calling the pipeline: +Now, if we wanted to compare two checkpoints compatible with the [`StableDiffusionPipeline`] we should pass a generator while calling the pipeline: ```python seed = 0 -generator = torch.Generator(device).manual_seed(seed) +generator = torch.manual_seed(seed) images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images ``` @@ -176,7 +176,7 @@ It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) ## Image-conditioned text-to-image generation -In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the `[StableDiffusionInstructPix2PixPipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix)`, as an example. It takes an edit instruction as an input prompt and an input image to be edited. +In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the `[StableDiffusionInstructPix2PixPipeline]`, as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: @@ -238,13 +238,13 @@ dataset[idx]["image"] We will first edit the images of our dataset with the edit instruction and compute the directional similarity. -Let's first load the `StableDiffusionInstructPix2PixPipeline`: +Let's first load the [`StableDiffusionInstructPix2PixPipeline`]: ```python from diffusers import StableDiffusionInstructPix2PixPipeline instruct_pix2pix_model_id = "timbrooks/instruct-pix2pix" -instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16) instruct_pix2pix_model_id, torch_dtype=weight_dtype ).to(device) ``` @@ -391,7 +391,7 @@ We can use these metrics for similar pipelines such as the`[StableDiffusionPix2P ## Class-conditioned image generation -Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the `[DiTPipeline](https://huggingface.co/docs/diffusers/api/pipelines/dit)`, which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. +Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). 
Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the `[DiTPipeline]`, which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood.

FID aims to measure how similar are two datasets of images. As per [this resource](https://mmgeneration.readthedocs.io/en/latest/quick_run.html#fid):

@@ -501,5 +501,5 @@ As a final step, let's visually inspect the `fake_images` and `real_images`,
         <img src="..." alt="real-images"/>
-        <figcaption>Fake images.</figcaption>
+        <figcaption>Real images.</figcaption>
     </div>
 </div>
From f8e1bb97944a400aa64eef5671e0da86300a76cb Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 14:52:35 +0530 Subject: [PATCH 04/16] formatting fixes. --- docs/source/en/conceptual/evaluation.mdx | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index b5b173b85157..f3fde72213e7 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -176,7 +176,7 @@ It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) ## Image-conditioned text-to-image generation -In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the `[StableDiffusionInstructPix2PixPipeline]`, as an example. It takes an edit instruction as an input prompt and an input image to be edited. +In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the [`StableDiffusionInstructPix2PixPipeline`], as an example. It takes an edit instruction as an input prompt and an input image to be edited. Here is one example: @@ -381,7 +381,7 @@ It should be noted that the `StableDiffusionInstructPix2PixPipeline` exposes t We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do `F.cosine_similarity(img_feat_two, img_feat_one)`. For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score. -We can use these metrics for similar pipelines such as the`[StableDiffusionPix2PixZeroPipeline](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline)`. +We can use these metrics for similar pipelines such as the[`StableDiffusionPix2PixZeroPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline)`. > Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. @@ -391,7 +391,7 @@ We can use these metrics for similar pipelines such as the`[StableDiffusionPix2P ## Class-conditioned image generation -Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the `[DiTPipeline]`, which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. +Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the [`DiTPipeline`], which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. FID aims to measure how similar are two datasets of images. 
As per [this resource](https://mmgeneration.readthedocs.io/en/latest/quick_run.html#fid): @@ -481,7 +481,7 @@ print(fake_images.shape) # torch.Size([10, 3, 256, 256]) ``` -Now, we can compute the FID using `[torchmetrics](https://torchmetrics.readthedocs.io/)`. +Now, we can compute the FID using [`torchmetrics`](https://torchmetrics.readthedocs.io/). The lower the FID, the better it is. Several things can influence FID here: From 260e65497fdc547d4d16fa6a3507e2c5e75d1e3a Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 14:57:22 +0530 Subject: [PATCH 05/16] formatting fixes. --- docs/source/en/conceptual/evaluation.mdx | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index f3fde72213e7..2d12ae448ba5 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -243,9 +243,8 @@ Let's first load the [`StableDiffusionInstructPix2PixPipeline`]: ```python from diffusers import StableDiffusionInstructPix2PixPipeline -instruct_pix2pix_model_id = "timbrooks/instruct-pix2pix" -instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained("timbrooks/instruct-pix2pix", torch_dtype=torch.float16) - instruct_pix2pix_model_id, torch_dtype=weight_dtype +instruct_pix2pix_pipeline = StableDiffusionInstructPix2PixPipeline.from_pretrained( + "timbrooks/instruct-pix2pix", torch_dtype=torch.float16 ).to(device) ``` From 98ac72986962d54f3b564131b116570467779a64 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 15:00:27 +0530 Subject: [PATCH 06/16] link to partiprompts dataset on hub. --- docs/source/en/conceptual/evaluation.mdx | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 2d12ae448ba5..e542838bd6ba 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -46,7 +46,7 @@ PartiPrompts has the following columns: These benchmarks allow for side-by-side human evaluation of different image generation models. Let’s see how we can use `diffusers` on a couple of PartiPrompts. -Here are some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols: +Below we show some prompts sampled across different challenges: Basic, Complex, Linguistic Structures, Imagination, and Writing & Symbols. Here we are using PartiPrompts as a [dataset](https://huggingface.co/datasets/nateraw/parti-prompts). ```python from datasets import load_dataset @@ -70,7 +70,7 @@ Now we can use these prompts to generate some images using Stable Diffusion ([v1 import torch seed = 0 -generator = torch.Generator(device).manual_seed(seed) +generator = torch.manual_seed(seed) images = sd_pipeline(sample_prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images ``` From 2d2c7e0a2555804d6f71f2464da4814d87d97c26 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 16:15:47 +0530 Subject: [PATCH 07/16] reflect on Pedro's comments. 
Co-authored-by: Pedro --- docs/source/en/conceptual/evaluation.mdx | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index e542838bd6ba..05cdc620480a 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -174,6 +174,15 @@ print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. + + +By construction, there are some limitations in this score. The captions in the training dataset +were crawled from the web and extracted from `alt` and similar tags associated an image on the internet. +They are not necessarily representative of what a human being would use to describe an image. Hence we +had to "engineer" some prompts here. + + + ## Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the [`StableDiffusionInstructPix2PixPipeline`], as an example. It takes an edit instruction as an input prompt and an input image to be edited. From 0b3d8d6ca37d5bd0265f00a26160a4e02b6a8903 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 16:17:38 +0530 Subject: [PATCH 08/16] Apply suggestions from code review Co-authored-by: Pedro Cuenca --- docs/source/en/conceptual/evaluation.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 05cdc620480a..375bf2910796 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -393,7 +393,7 @@ We can use these metrics for similar pipelines such as the[`StableDiffusionPix2P > Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. -***Extending metrics like IS, FID, or KID can be difficult*** w***hen the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction.*** +***Extending metrics like IS, FID, or KID can be difficult*** when the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. ***Using the above metrics helps evaluate models that are class-conditioned. For example, [DiT](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview). 
It was pre-trained being conditioned on the ImageNet-1k classes.*** From 915d87215cf43f9f901214b61d327f7adab09b72 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 1 Mar 2023 16:36:54 +0530 Subject: [PATCH 09/16] reflect on Pedro's comments. Co-authored-by: Pedro --- docs/source/en/conceptual/evaluation.mdx | 26 +++++++++++++++++++++++- 1 file changed, 25 insertions(+), 1 deletion(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 375bf2910796..0a5ef5ab7c85 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -84,6 +84,14 @@ We can also set `num_images_per_prompt` accordingly to compare different images Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For more details on these benchmarks, refer to their respective papers. + + +It is useful to look at some inference samples while a model is training to measure the +training progress. In our [training scripts](https://github.com/huggingface/diffusers/tree/main/examples/), we support this utility with additional support for +logging to TensorBoard and Weights and Biases. + + + ## Quantitative In this section, we focus on: @@ -393,7 +401,7 @@ We can use these metrics for similar pipelines such as the[`StableDiffusionPix2P > Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. -***Extending metrics like IS, FID, or KID can be difficult*** when the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. +***Extending metrics like IS, FID (discussed later), or KID can be difficult*** when the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. ***Using the above metrics helps evaluate models that are class-conditioned. For example, [DiT](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview). It was pre-trained being conditioned on the ImageNet-1k classes.*** @@ -500,6 +508,22 @@ The lower the FID, the better it is. Several things can influence FID here: For the last two points, it is, therefore, a good practice to run the evaluation across different seeds and inference steps, and then report an average result. + + +FID results tend to be fragile as they depend on a lot of factors: + +* The specific Inception model used during computation. +* The implementation accuracy of the computation. +* The image format (not the same if we start from PNGs vs JPGs). 
Keeping that in mind, FID is often most useful when comparing similar runs, but it is
hard to reproduce paper results unless the authors carefully disclose the FID
measurement code.

These points apply to other related metrics too, such as KID and IS.

As a final step, let's visually inspect the `fake_images` and `real_images`, respectively.

<div class="flex grid grid-cols-2 gap-4">
From d5d9b1c84afe14dd3a23054aa7b156d419338039 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Mon, 13 Mar 2023 14:03:39 +0530 Subject: [PATCH 10/16] update mention of FID. --- docs/source/en/conceptual/evaluation.mdx | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 0a5ef5ab7c85..f68b4e52baee 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -14,7 +14,10 @@ specific language governing permissions and limitations under the License. Evaluation of generative models like [Stable Diffusion](https://huggingface.co/docs/diffusers/stable_diffusion) is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other? -Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. However, solely relying on quantitative metrics doesn't give a full picture either. For example, a generative model might provide a lower FID score, but the generated images might still lack quality. So, usually, a combination of both qualitative and quantitative evaluations provides a stronger signal when choosing one model over the other. +Qualitative evaluation of such models can be error-prone and might incorrectly influence a decision. +However, quantitative metrics don't necessarily correspond to image quality. So, usually, a combination +of both qualitative and quantitative evaluations provides a stronger signal when choosing one model +over the other. In this document, we provide a non-exhaustive overview of qualitative and quantitative methods to evaluate Diffusion models. For quantitative methods, we specifically focus on how to implement them alongside `diffusers`. From dd9df47203a711450c09a32aacdb0617c685581b Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Mon, 13 Mar 2023 14:05:53 +0530 Subject: [PATCH 11/16] Apply suggestions from code review Co-authored-by: Will Berman Co-authored-by: YiYi Xu --- docs/source/en/conceptual/evaluation.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index f68b4e52baee..2396f79e8dd7 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -33,7 +33,7 @@ We cover Diffusion models with the following pipelines: ## Qualitative -Qualitative evaluation typically involves humans that assess the quality of the generated images across various aspects such as compositionality, image-text alignment, spatial relations, etc. To have a uniform ground to assess different models on such aspects, we need to start with some common prompts that allow for varied coverage. Two notable benchmarks in this area are DrawBench and PartiPrompts introduced by [Imagen](https://imagen.research.google/) and [Parti](https://parti.research.google/) respectively. +Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. 
DrawBench and PartiPrompts were introduced by [Imagen](https://imagen.research.google/) and [Parti](https://parti.research.google/) respectively. From the [official Parti website](https://parti.research.google/): @@ -97,15 +97,15 @@ logging to TensorBoard and Weights and Biases. ## Quantitative -In this section, we focus on: +In this section, we will walk you through how to evaluate three different diffusion pipelines using - CLIP score -- Clip directional similarity +- CLIP directional similarity - FID -## Text-guided image generation +### Text-guided image generation -One commonly used metric here is the [CLIP score](https://arxiv.org/abs/2104.08718). It measures how well-aligned a pair of image and caption is. The higher the CLIP score, the better it is 🔼 +[CLIP score](https://arxiv.org/abs/2104.08718) measures the compatibility of image-caption pairs. Higher CLIP scores imply higher compatibility 🔼. The CLIP score is a quantitative measurement of the qualitative concept "compatibility". Image-caption pair compatibility can also be thought of as the semantic similarity between the image and the caption. CLIP score was found to have high correlation with human judgement. Let's first load a [`StableDiffusionPipeline`]: @@ -194,7 +194,7 @@ had to "engineer" some prompts here. -## Image-conditioned text-to-image generation +### Image-conditioned text-to-image generation In this case, we condition the generation pipeline with an input image as well as a text prompt. Let's take the [`StableDiffusionInstructPix2PixPipeline`], as an example. It takes an edit instruction as an input prompt and an input image to be edited. @@ -408,7 +408,7 @@ We can use these metrics for similar pipelines such as the[`StableDiffusionPix2P ***Using the above metrics helps evaluate models that are class-conditioned. For example, [DiT](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/overview). It was pre-trained being conditioned on the ImageNet-1k classes.*** -## Class-conditioned image generation +### Class-conditioned image generation Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the [`DiTPipeline`], which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. From c3ab01e93849a7ce1c12f15d4b14732b80263582 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Mon, 13 Mar 2023 14:07:04 +0530 Subject: [PATCH 12/16] minor nit. --- docs/source/en/conceptual/evaluation.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 2396f79e8dd7..8da147cf61a2 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -97,7 +97,7 @@ logging to TensorBoard and Weights and Biases. 
## Quantitative

-In this section, we will walk you through how to evaluate three different diffusion pipelines using
+In this section, we will walk you through how to evaluate three different diffusion pipelines using:

- CLIP score
- CLIP directional similarity
- FID

From 2a6aecbccaef2d37d96e90cd32ff172aa77cd93a Mon Sep 17 00:00:00 2001
From: Sayak Paul
Date: Mon, 13 Mar 2023 15:09:04 +0530
Subject: [PATCH 13/16] finish edges and add colab notebook.

---
 docs/source/en/conceptual/evaluation.mdx | 28 ++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx
index 8da147cf61a2..d7daf441f8a6 100644
--- a/docs/source/en/conceptual/evaluation.mdx
+++ b/docs/source/en/conceptual/evaluation.mdx
@@ -12,6 +12,10 @@ specific language governing permissions and limitations under the License.

# Evaluating Diffusion Models

+<a href="...">
+    <img src="..." alt="Open In Colab"/>
+</a>
+
Evaluation of generative models like [Stable Diffusion](https://huggingface.co/docs/diffusers/stable_diffusion) is subjective in nature. But as practitioners and researchers, we often have to make careful choices amongst many different possibilities. So, when working with different generative models (like GANs, Diffusion, etc.), how do we choose one over the other?

@@ -56,9 +56,9 @@ from datasets import load_dataset

prompts = load_dataset("nateraw/parti-prompts", split="train")
prompts = prompts.shuffle()
-samples_prompts = [prompts[i]["Prompt"] for i in range(5)]
+sample_prompts = [prompts[i]["Prompt"] for i in range(5)]

-samples_prompts = [
+sample_prompts = [
    "a corgi",
    "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky",
    "a car with no windows",
@@ -159,7 +163,8 @@ print(f"CLIP score: {sd_clip_score}")

In the above example, we generated one image per prompt. If we generated multiple images per prompt, we could uniformly sample just one from the pool of generated images.

-Now, if we wanted to compare two checkpoints compatible with the [`StableDiffusionPipeline`] we should pass a generator while calling the pipeline:
+Now, if we wanted to compare two checkpoints compatible with the [`StableDiffusionPipeline`] we should pass a generator while calling the pipeline. First, we generate images with a
+fixed seed with the [v1-4 Stable Diffusion checkpoint](https://huggingface.co/CompVis/stable-diffusion-v1-4):

```python
seed = 0
@@ -168,6 +173,8 @@ generator = torch.manual_seed(seed)

images = sd_pipeline(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images
```

+Then we load the [v1-5 checkpoint](https://huggingface.co/runwayml/stable-diffusion-v1-5) to generate images:
+
```python
model_ckpt_1_5 = "runwayml/stable-diffusion-v1-5"
sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_dtype=weight_dtype).to(device)
@@ -175,6 +182,8 @@ sd_pipeline_1_5 = StableDiffusionPipeline.from_pretrained(model_ckpt_1_5, torch_

images_1_5 = sd_pipeline_1_5(prompts, num_images_per_prompt=1, generator=generator, output_type="numpy").images
```

+And finally, we compare their CLIP scores:
+
```python
sd_clip_score_1_4 = calculate_clip_score(images, prompts)
print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}")
@@ -468,7 +477,7 @@ print(real_images.shape)
# torch.Size([10, 3, 256, 256])
```

-We now load the `DiTPipeline` to generate images conditioned on the above-mentioned classes.
+We now load the [`DiTPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/dit) to generate images conditioned on the above-mentioned classes. ```python from diffusers import DiTPipeline, DPMSolverMultistepScheduler @@ -502,6 +511,17 @@ print(fake_images.shape) Now, we can compute the FID using [`torchmetrics`](https://torchmetrics.readthedocs.io/). +```python +from torchmetrics.image.fid import FrechetInceptionDistance + +fid = FrechetInceptionDistance(normalize=True) +fid.update(real_images, real=True) +fid.update(fake_images, real=False) + +print(f"FID: {float(fid.compute())}") +# FID: 177.7147216796875 +``` + The lower the FID, the better it is. Several things can influence FID here: - Number of images (both real and fake) From 5dc86bf6fa4595ebd0ca77da508e237690557abf Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 15 Mar 2023 15:47:59 +0530 Subject: [PATCH 14/16] Apply suggestions from code review Co-authored-by: Pedro Cuenca --- docs/source/en/conceptual/evaluation.mdx | 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index d7daf441f8a6..a56c8519d5b5 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -35,7 +35,7 @@ We cover Diffusion models with the following pipelines: - Text-guided image generation, additionally conditioned on an input image (such as the [`StableDiffusionImg2ImgPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/img2img), and [`StableDiffusionInstructPix2PixPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix)). - Class-conditioned image generation models (such as the [`DiTPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/dit)). -## Qualitative +## Qualitative Evaluation Qualitative evaluation typically involves human assessment of generated images. Quality is measured across aspects such as compositionality, image-text alignment, and spatial relations. Common prompts provide a degree of uniformity for subjective metrics. DrawBench and PartiPrompts are prompt datasets used for qualitative benchmarking. DrawBench and PartiPrompts were introduced by [Imagen](https://imagen.research.google/) and [Parti](https://parti.research.google/) respectively. @@ -89,17 +89,17 @@ We can also set `num_images_per_prompt` accordingly to compare different images ![parti-prompts-15](https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/parti-prompts-15.png) Once several images are generated from all the prompts using multiple models (under evaluation), these results are presented to human evaluators for scoring. For -more details on these benchmarks, refer to their respective papers. +more details on the DrawBench and PartiPrompts benchmarks, refer to their respective papers. It is useful to look at some inference samples while a model is training to measure the training progress. In our [training scripts](https://github.com/huggingface/diffusers/tree/main/examples/), we support this utility with additional support for -logging to TensorBoard and Weights and Biases. +logging to TensorBoard and Weights & Biases. 
-## Quantitative +## Quantitative Evaluation In this section, we will walk you through how to evaluate three different diffusion pipelines using: @@ -118,9 +118,7 @@ from diffusers import StableDiffusionPipeline import torch model_ckpt = "CompVis/stable-diffusion-v1-4" -device = "cuda" -weight_dtype = torch.float16 -sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=weight_dtype).to(device) +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype= torch.float16).to("cuda") ``` Generate some images with multiple prompts: @@ -458,7 +456,7 @@ image_paths = sorted([os.path.join(dataset_path, x) for x in os.listdir(dataset_ real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths] ``` -These images are from the following Imagenet-1k classes: "cassette_player", "chain_saw", "church", "gas_pump", "parachute", and "tench". +These are 10 images from the following Imagenet-1k classes: "cassette_player", "chain_saw" (x2), "church", "gas_pump" (x3), "parachute" (x2), and "tench". Now that the images are loaded, let's apply some lightweight pre-processing on them to use them for FID calculation. From 17f08266be084830b40bbd8055d136fd85d7bdc7 Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 15 Mar 2023 16:02:24 +0530 Subject: [PATCH 15/16] run formatting. --- docs/source/en/conceptual/evaluation.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index a56c8519d5b5..89e749fbd23e 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -118,7 +118,7 @@ from diffusers import StableDiffusionPipeline import torch model_ckpt = "CompVis/stable-diffusion-v1-4" -sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype= torch.float16).to("cuda") +sd_pipeline = StableDiffusionPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda") ``` Generate some images with multiple prompts: From a3f9c5145892292f9f812e82f8004449ac87965a Mon Sep 17 00:00:00 2001 From: Sayak Paul Date: Wed, 15 Mar 2023 16:49:02 +0530 Subject: [PATCH 16/16] additional feedback. --- docs/source/en/conceptual/evaluation.mdx | 31 +++++++++++++++--------- 1 file changed, 19 insertions(+), 12 deletions(-) diff --git a/docs/source/en/conceptual/evaluation.mdx b/docs/source/en/conceptual/evaluation.mdx index 89e749fbd23e..98821010e203 100644 --- a/docs/source/en/conceptual/evaluation.mdx +++ b/docs/source/en/conceptual/evaluation.mdx @@ -58,10 +58,11 @@ Below we show some prompts sampled across different challenges: Basic, Complex, ```python from datasets import load_dataset -prompts = load_dataset("nateraw/parti-prompts", split="train") -prompts = prompts.shuffle() -sample_prompts = [prompts[i]["Prompt"] for i in range(5)] +# prompts = load_dataset("nateraw/parti-prompts", split="train") +# prompts = prompts.shuffle() +# sample_prompts = [prompts[i]["Prompt"] for i in range(5)] +# Fixing these sample prompts in the interest of reproducibility. sample_prompts = [ "a corgi", "a hot air balloon with a yin-yang symbol, with the moon visible in the daytime sky", @@ -159,7 +160,7 @@ print(f"CLIP score: {sd_clip_score}") # CLIP score: 35.7038 ``` -In the above example, we generated one image per prompt. If we generated multiple images per prompt, we could uniformly sample just one from the pool of generated images. +In the above example, we generated one image per prompt. 
If we generated multiple images per prompt, we would have to take the average score from the generated images per prompt. Now, if we wanted to compare two checkpoints compatible with the [`StableDiffusionPipeline`] we should pass a generator while calling the pipeline. First, we generate images with a fixed seed with the [v1-4 Stable Diffusion checkpoint](https://huggingface.co/CompVis/stable-diffusion-v1-4): @@ -185,9 +186,11 @@ And finally, we compare their CLIP scores: ```python sd_clip_score_1_4 = calculate_clip_score(images, prompts) print(f"CLIP Score with v-1-4: {sd_clip_score_1_4}") +# CLIP Score with v-1-4: 34.9102 sd_clip_score_1_5 = calculate_clip_score(images_1_5, prompts) print(f"CLIP Score with v-1-5: {sd_clip_score_1_5}") +# CLIP Score with v-1-5: 36.2137 ``` It seems like the [v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint performs better than its predecessor. Note, however, that the number of prompts we used to compute the CLIP scores is quite low. For a more practical evaluation, this number should be way higher, and the prompts should be diverse. @@ -409,7 +412,11 @@ We can extend the idea of this metric to measure how similar the original image We can use these metrics for similar pipelines such as the[`StableDiffusionPix2PixZeroPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline)`. -> Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. + + +Both CLIP score and CLIP direction similarity rely on the CLIP model, which can make the evaluations biased. + + ***Extending metrics like IS, FID (discussed later), or KID can be difficult*** when the model under evaluation was pre-trained on a large image-captioning dataset (such as the [LAION-5B dataset](https://laion.ai/blog/laion-5b/)). This is because underlying these metrics is an InceptionNet (pre-trained on the ImageNet-1k dataset) used for extracting intermediate image features. The pre-training dataset of Stable Diffusion may have limited overlap with the pre-training dataset of InceptionNet, so it is not a good candidate here for feature extraction. @@ -417,7 +424,7 @@ We can use these metrics for similar pipelines such as the[`StableDiffusionPix2P ### Class-conditioned image generation -Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the [`DiTPipeline`], which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. +Class-conditioned generative models are usually pre-trained on a class-labeled dataset such as [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k). Popular metrics for evaluating these models include Fréchet Inception Distance (FID), Kernel Inception Distance (KID), and Inception Score (IS). In this document, we focus on FID ([Heusel et al.](https://arxiv.org/abs/1706.08500)). We show how to compute it with the [`DiTPipeline`](https://huggingface.co/docs/diffusers/api/pipelines/dit), which uses the [DiT model](https://arxiv.org/abs/2212.09748) under the hood. FID aims to measure how similar are two datasets of images. 
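+
+In symbols, FID is the Fréchet distance between two Gaussians fitted to InceptionNet features of the two image sets. Using our own notation for reference, with the mean and covariance of the real features written as mu_r and Sigma_r, and those of the generated features as mu_g and Sigma_g, the standard formulation is:
+
+$$\text{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \mathrm{Tr}\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{1/2}\right)$$
+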
As per [this resource](https://mmgeneration.readthedocs.io/en/latest/quick_run.html#fid): @@ -458,6 +465,11 @@ real_images = [np.array(Image.open(path).convert("RGB")) for path in image_paths These are 10 images from the following Imagenet-1k classes: "cassette_player", "chain_saw" (x2), "church", "gas_pump" (x3), "parachute" (x2), and "tench". +

+<p align="center">
+    <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/real-images.png" alt="real-images"><br>
+    <em>Real images.</em>
+</p>
+ Now that the images are loaded, let's apply some lightweight pre-processing on them to use them for FID calculation. ```python @@ -545,14 +557,9 @@ These points apply to other related metrics too, such as KID and IS. -As a final step, let's visually inspect the `fake_images` and `real_images`, respectively. +As a final step, let's visually inspect the `fake_images`.

 <p align="center">
     <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/fake-images.png" alt="fake-images"><br>
     <em>Fake images.</em>
 </p>
-
-<p align="center">
-    <img src="https://huggingface.co/datasets/diffusers/docs-images/resolve/main/evaluation_diffusion_models/real-images.png" alt="real-images"><br>
-    <em>Real images.</em>
-</p>
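+
+If you want to save such a grid of the generated samples locally, one possible sketch (the grid helper and the file name are our own choices, not something the document prescribes) is:
+
+```python
+from torchvision.utils import make_grid
+from torchvision.transforms.functional import to_pil_image
+
+# `fake_images` is the (10, 3, 256, 256) float tensor in [0, 1] computed above.
+grid = make_grid(fake_images, nrow=5)
+to_pil_image(grid).save("fake-images-grid.png")
+```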
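+
+The same `real_images` and `fake_images` tensors can also be fed to the related `torchmetrics` metrics mentioned above. Below is a sketch for KID and IS; the parameter values (such as `subset_size` and `splits`) are our own choices that only make sense for this tiny 10-image example:
+
+```python
+from torchmetrics.image.inception import InceptionScore
+from torchmetrics.image.kid import KernelInceptionDistance
+
+# KID compares feature statistics over random subsets, so `subset_size`
+# must not exceed the number of available images (10 here).
+kid = KernelInceptionDistance(subset_size=10, normalize=True)
+kid.update(real_images, real=True)
+kid.update(fake_images, real=False)
+kid_mean, kid_std = kid.compute()
+print(f"KID: {float(kid_mean)} ± {float(kid_std)}")
+
+# IS only looks at the generated images.
+inception_score = InceptionScore(splits=2, normalize=True)
+inception_score.update(fake_images)
+is_mean, is_std = inception_score.compute()
+print(f"IS: {float(is_mean)} ± {float(is_std)}")
+```
+
+As with FID, a lower KID indicates closer distributions, while a higher IS is generally considered better. The caveats listed above apply to these metrics as well.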