
Evolutionary-Diffusion

Combining Evolutionary Computing with Diffusion Models

Images

Audio

Try it out in Google Colab

The following notebooks can be opened directly in Colab:

  • Genetic Algorithm
  • Island Genetic Algorithm
  • NSGA Genetic Algorithm

Image results will be saved to your Google Drive in the folder evolutionary. Each generation creates a new folder in which its images are saved. You can change the folders in the notebook.

Sometimes Google Colab causes dependency problems that break the notebook. If you have any issues executing this in a Colab environment, please do not hesitate to create a new issue.

Running locally

Using a venv is optional but recommended. Clone the repo or download the .zip, then install the dependencies via:

pip install -e ".[all]"

Now you are ready to go with the notebooks or custom code. CUDA and MPS are supported.
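If you want to verify which accelerator PyTorch will pick up before running the notebooks, a quick check (a minimal sketch using only standard PyTorch calls) looks like this:

import torch

# Pick the best available device: CUDA GPU, Apple MPS, or CPU fallback.
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"
print(f"Using device: {device}")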

Example - Creating the most Aesthetic Image

Optimizing for Aesthetics using the Aesthetics Predictor V2 from LAION with a GA and SDXL-Turbo

Treating the aesthetics predictor as a maximization problem, the algorithm reached a maximum aesthetics score of 8.67. This is higher than the scores of the examples from the real LAION English Subset dataset; the red line in the chart marks that limit. A wide variety of prompts (inspired by the parti prompts) was used for the initial population.

(Video: ga_200gen_100pop_aesthetic.mp4)

(Figure: fitness chart for the GA run with 200 generations and a population of 100, aesthetics score)

Parameters:

population_size = 100
num_generations = 200
batch_size = 1
elitism = 1

creator = SDXLPromptEmbeddingImageCreator(pipeline_factory=setup_pipeline, batch_size=batch_size, inference_steps=3)
evaluator = AestheticsImageEvaluator()  
crossover = PooledArithmeticCrossover(0.5, 0.5)
mutation_arguments = UniformGaussianMutatorArguments(mutation_rate=0.1, mutation_strength=2, clamp_range=(-900, 900)) 
mutation_arguments_pooled = UniformGaussianMutatorArguments(mutation_rate=0.1, mutation_strength=0.3, clamp_range=(-8, 8))
mutator = PooledUniformGaussianMutator(mutation_arguments, mutation_arguments_pooled)
selector = TournamentSelector(tournament_size=3)
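For intuition, the sketch below shows what the configured operators do conceptually on a flat embedding vector: tournament selection picks the fittest of a random subset, arithmetic crossover blends two parents, and uniform Gaussian mutation perturbs a random fraction of components and clamps the result. This is an illustrative NumPy sketch, not the package's own implementation; names and shapes are simplified assumptions.

import numpy as np

rng = np.random.default_rng(0)

def tournament_select(population, fitnesses, tournament_size=3):
    # Return the fittest candidate of a randomly drawn subset (maximization).
    idx = rng.choice(len(population), size=tournament_size, replace=False)
    best = max(idx, key=lambda i: fitnesses[i])
    return population[best]

def arithmetic_crossover(parent_a, parent_b, weight=0.5):
    # Blend two parent embeddings component-wise.
    return weight * parent_a + (1.0 - weight) * parent_b

def uniform_gaussian_mutate(embedding, rate=0.1, strength=2.0, clamp=(-900, 900)):
    # Perturb a random subset of components with Gaussian noise, then clamp.
    mask = rng.random(embedding.shape) < rate
    noise = rng.normal(0.0, strength, size=embedding.shape)
    return np.clip(embedding + mask * noise, *clamp)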

Example - Island GA with Artists on each Island

Performing an Island GA by creating random embeddings and mixing them with artist embeddings to get mixtures of styles and new ideas.

(Example images: Mark Rothko chairs, Sketching Person, Picasso Dali Angles, Crazy Landscape Van Gogh, Character Walls, Unique Pattern Colorful Woman, Butterfly Landscape, Green Car City)
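The island model itself is easy to picture: several subpopulations evolve independently and periodically exchange their best candidates. The sketch below is an illustration only, not the package's API; the ring topology and replacement rule are assumptions. Each island is a list of (fitness, genome) pairs with higher fitness being better.

def migrate(islands):
    # Copy each island's best candidate over the next island's worst (ring topology).
    bests = [max(island, key=lambda c: c[0]) for island in islands]
    for i, best in enumerate(bests):
        nxt = islands[(i + 1) % len(islands)]
        worst = min(range(len(nxt)), key=lambda j: nxt[j][0])
        nxt[worst] = best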

More images

Example - Improving Audiobox Aesthetics Score

Starting from noisy random samples, the algorithm evolves toward better sounds, using the sum of all fitness criteria that Audiobox Aesthetics offers.

(Audio examples: example_fitness_14.mp4, example_fitness_31.mp4)
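As a rough illustration of that objective (the structure below is an assumption for illustration, not the package's actual keys or API), the combined fitness is simply the sum of the individual criterion scores:

def audiobox_fitness(scores):
    # Sum all aesthetics criteria into a single maximization objective,
    # e.g. content enjoyment, content usefulness, production complexity, production quality.
    return sum(scores.values())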

Detailed Results and Notebooks

More detailed results can be found in a separate repository dedicated to the results of the experiments: https://github.com/malthee/evolutionary-diffusion-results

Evaluators

Image Creators

Currently supported creators, all working in the prompt embedding space (a short sketch of this workflow follows the list):

  • SDXLPromptEmbeddingImageCreator: Supports the SDXL pipeline and creates both prompt embeddings and pooled prompt embeddings.
  • SDPromptEmbeddingImageCreator: Only has prompt embeddings; it is faster but produces lower-quality results than SDXL.
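As a rough sketch of that workflow (not the creator classes' actual code; the model ID, device, step count, and guidance settings are assumptions matching SDXL-Turbo usage), prompt embeddings can be obtained from the SDXL pipeline with diffusers, modified by the evolutionary operators, and fed back in:

import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")

# SDXL yields token-level prompt embeddings plus a pooled embedding.
prompt_embeds, _, pooled_prompt_embeds, _ = pipe.encode_prompt(
    prompt="a colorful surrealist landscape", do_classifier_free_guidance=False
)

# ...crossover/mutation would modify these embedding tensors here...

image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    num_inference_steps=3,
    guidance_scale=0.0,
).images[0]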

Audio Creators

Only AudioLDM is supported because it works directly on the CLAP embedding space, which is suitable for this kind of operation. Other embeddings (the T5 encoder, for example) have been shown not to work well with evolutionary operations. A short usage sketch follows the list below.

  • AudioLDMSoundCreator: Works with any AudioLDMPipeline; the default is audioldm-l-full.
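A minimal generation sketch with diffusers (assuming the cvssp/audioldm-l-full checkpoint and a CUDA device; the prompt and parameters are illustrative, not the creator's defaults):

import torch
from diffusers import AudioLDMPipeline

pipe = AudioLDMPipeline.from_pretrained("cvssp/audioldm-l-full", torch_dtype=torch.float16).to("cuda")

# Generate a short clip from a text prompt; the creator instead operates on CLAP prompt embeddings.
audio = pipe("gentle rain on a tin roof", num_inference_steps=10, audio_length_in_s=5.0).audios[0]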

Package Structure and Base Classes

(Figure: package diagram)

(Figure: solution candidate class diagram)

(Pre-Testing) Evaluating Models for Evolutionary Use

There are multiple notebooks exploring the speed and quality of models for generation and fitness-evaluation. These notebooks also allow for simple inference so that any model can be tried out easily.

  • diffusion_model_comparison: Tries out different diffusion models with varying arguments (inference steps, batch size) to find the optimal model for image generation in an evolutionary context (generation speed & quality).
  • clip_evaluators: Uses torchmetrics with CLIPScore and CLIP IQA (see the sketch after this list). CLIPScore could define the fitness for "prompt fulfillment" or "image alignment", while CLIP IQA offers many possible metrics such as "quality", "brightness", or "happiness".
  • ai_detection_evaluator: Uses a pre-trained model for AI image detection. This could be a fitness criterion to minimize "AI-likeness" in images.
  • aesthetics_evaluator: Uses a pre-trained model from the maintainers of the LAION image dataset, which scores an image from 0 to 10 depending on how "aesthetic" it is. Could be used as a maximization criterion for the fitness of images.
  • clamp_range: Tests the usual prompt-embedding minimum and maximum values for different models, so that a clamp range can be set in the mutator, for example. Uses the parti prompts.
  • crossover_mutation_experiments: Tests different crossover and mutation strategies to see how they work in the prompt embedding space.
  • embedding_relations: Experiments with TensorBoard and integrates it with our embedding model.
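A minimal sketch of the two torchmetrics evaluators mentioned above (the CLIP checkpoint, prompts, and dummy images are assumptions for illustration, not the notebooks' exact settings):

import torch
from torchmetrics.multimodal.clip_score import CLIPScore
from torchmetrics.multimodal import CLIPImageQualityAssessment

# Dummy image batch standing in for generated images (values in 0-255).
images = torch.randint(255, (2, 3, 224, 224))

# CLIPScore: how well an image matches its prompt ("prompt fulfillment").
clip_score = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
print(clip_score(images, ["a red car", "a blue bird"]))

# CLIP IQA: scores image properties such as quality or brightness.
clip_iqa = CLIPImageQualityAssessment(
    model_name_or_path="openai/clip-vit-base-patch16",
    prompts=("quality", "brightness"),
    data_range=255,
)
print(clip_iqa(images.float()))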