
Conversation

@patrickvonplaten (Contributor) commented Oct 17, 2022

Let's advocate as strongly as possible for loading schedulers directly from their saved configs instead of defining the parameters manually.

I.e. we don't want to see any:

scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000)

but instead:

scheduler = LMSDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")

This PR enables this functionality across "compatible" schedulers by essentially doing the following:

  • It allows configuration files to have parameters that are not expected by the "original" Scheduler class but that are expected by one or more of the compatible scheduler classes
  • It allows configuration files to be loaded into compatible scheduler classes

"Allows" here means that no warning is thrown.
The necessary logic for this is implemented in configuration_utils.py. Please have a look at the tests/test_config.py file to understand what the new logic can do.
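The rule described in the bullets above can be sketched roughly as follows. This is a minimal illustration, not diffusers' actual implementation; the class and key names below are assumptions for the sake of the example. The idea: a config key that the target scheduler class does not accept only counts as "unexpected" (and worth warning about) if none of the compatible scheduler classes expects it either.

```python
import inspect

def split_config(cls, config, compatible_classes=()):
    """Split a loaded config into init kwargs for `cls`, extras silently
    kept because a compatible class expects them, and keys to warn about."""
    expected = set(inspect.signature(cls.__init__).parameters) - {"self"}
    init_kwargs = {k: v for k, v in config.items() if k in expected}
    leftover = {k: v for k, v in config.items() if k not in expected}

    # Collect every parameter name accepted by any compatible class.
    compatible_keys = set()
    for compat in compatible_classes:
        compatible_keys |= set(inspect.signature(compat.__init__).parameters)

    kept = {k: v for k, v in leftover.items() if k in compatible_keys}
    warn = {k: v for k, v in leftover.items() if k not in compatible_keys}
    return init_kwargs, kept, warn

class PNDM:  # stands in for PNDMScheduler
    def __init__(self, beta_start=0.0001, beta_end=0.02, skip_prk_steps=False):
        pass

class DDIM:  # stands in for DDIMScheduler
    def __init__(self, beta_start=0.0001, beta_end=0.02, clip_sample=True):
        pass

config = {"beta_start": 0.00085, "beta_end": 0.012,
          "clip_sample": False, "totally_unknown": 1}
init_kwargs, kept, warn = split_config(PNDM, config, compatible_classes=[DDIM])
# clip_sample is kept without warning because DDIM expects it;
# only totally_unknown would trigger a warning.
```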

@patrickvonplaten patrickvonplaten requested review from anton-l, patil-suraj and pcuenca and removed request for anton-l and pcuenca October 17, 2022 21:54
@HuggingFaceDocBuilderDev commented Oct 17, 2022

The documentation is not available anymore as the PR was closed or merged.

from diffusers import StableDiffusionPipeline, DDIMScheduler

scheduler = DDIMScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", clip_sample=False, set_alpha_to_one=False)
scheduler = DDIMScheduler.from_config("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
Contributor

scheduler_config does not specify values for clip_sample or set_alpha_to_one

Contributor Author

Good point! Wonder if we could somehow do this automatically - maybe just add those values there?

Contributor Author

Actually opening a bigger issue for this

Contributor Author

Great catch! Yeah, I noticed it's actually not that easy - we'll need some more code changes here

Contributor Author

This can be corrected now by simply setting clip_sample in the original configuration file, even if it's not expected by the scheduler itself. The original PNDMScheduler config should therefore just have clip_sample set to False for DDIM. This logic is now implemented via the class attribute _compatible_classes.
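A hedged illustration of that idea (the real classes live in diffusers; the signatures below are simplified stand-ins): the shared scheduler config carries clip_sample even though the PNDM scheduler never uses it, so a DDIM scheduler built from the same config picks up the intended value instead of its own default.

```python
# Simplified stand-ins for the diffusers classes; names and signatures
# are illustrative assumptions, not the library's actual code.

shared_config = {
    "beta_start": 0.00085,
    "beta_end": 0.012,
    "beta_schedule": "scaled_linear",
    "num_train_timesteps": 1000,
    "clip_sample": False,  # unused by PNDM, but required by DDIM
}

class PNDMScheduler:
    _compatible_classes = ["DDIMScheduler"]  # the attribute this PR introduces
    def __init__(self, beta_start, beta_end, beta_schedule,
                 num_train_timesteps, **kwargs):
        # Extra keys (e.g. clip_sample) are accepted without warning
        # because a compatible class expects them.
        self.unused_kwargs = kwargs

class DDIMScheduler:
    def __init__(self, beta_start, beta_end, beta_schedule,
                 num_train_timesteps, clip_sample=True):
        self.clip_sample = clip_sample

pndm = PNDMScheduler(**shared_config)
ddim = DDIMScheduler(**shared_config)
# ddim.clip_sample comes from the shared config (False) rather than
# falling back to the DDIM default of True.
```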

new_config["steps_offset"] = 1
scheduler._internal_dict = FrozenDict(new_config)

if hasattr(scheduler.config, "clip_sample") and scheduler.config.clip_sample is True:
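The snippet above follows diffusers' pattern of rebuilding the frozen config rather than mutating it in place. A minimal sketch of that pattern, using a toy stand-in for FrozenDict (the real class lives in diffusers.configuration_utils):

```python
class FrozenDict(dict):
    """Toy stand-in: a dict that disallows item assignment after construction."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._frozen = True

    def __setitem__(self, key, value):
        if getattr(self, "_frozen", False):
            raise RuntimeError("FrozenDict is immutable; build a new one instead.")
        super().__setitem__(key, value)

config = FrozenDict({"steps_offset": 0, "clip_sample": True})

# Mutating in place fails...
try:
    config["steps_offset"] = 1
except RuntimeError:
    pass

# ...so the pipeline copies, edits, and re-freezes, as in the diff above.
new_config = dict(config)
new_config["steps_offset"] = 1
config = FrozenDict(new_config)
```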
Contributor Author

As soon as 0.7.0 is out, we will add clip_sample to https://huggingface.co/runwayml/stable-diffusion-v1-5 so that the default scheduler config is 1-to-1 compatible with all other schedulers

Contributor

@patil-suraj patil-suraj left a comment

Very clean and cool, thanks a lot for working on this!

Member

@pcuenca pcuenca left a comment

Amazing, and extremely useful! I just left some nits.

@patrickvonplaten patrickvonplaten merged commit c18941b into main Oct 31, 2022
@patrickvonplaten patrickvonplaten deleted the better_docs_schedulers branch October 31, 2022 16:26
Member

@pcuenca pcuenca left a comment

Looks great!

noise_scheduler = DDPMScheduler(
beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", num_train_timesteps=1000
)
noise_scheduler = DDPMScheduler.from_config("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
@kjerk commented

This hardcodes a config value and conflicts with --pretrained_model_name_or_path; these scripts were previously compatible with local and remote SD 1.4 and SD 1.5 in dreambooth, text_to_image, and textual inversion.

Contributor

Good catch @kjerk, would you like to open a PR to fix this?

yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
…face#890)

* [Better scheduler docs] Improve usage examples of schedulers

* finish

* fix warnings and add test

* finish

* more replacements

* adapt fast tests hf token

* correct more

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <[email protected]>

* Integrate compatibility with euler

Co-authored-by: Pedro Cuenca <[email protected]>


7 participants