
Conversation

@timh (Contributor) commented on Dec 8, 2022

Same code changes as PR #1567, but with a proper branch name now, so the merge commit is nicer :)

@HuggingFaceDocBuilderDev commented on Dec 8, 2022

The documentation is no longer available, as the PR was closed or merged.

@pcuenca (Member) left a comment


This was already reviewed by @patil-suraj in #1567, and it looks good to me! I just suggested a minor rewording of the comment.

@patrickvonplaten (Contributor) commented

Thanks @timh!

@patil-suraj @pcuenca @williamberman let's not forget to remove this when accelerate is forced to be a newer version

@patrickvonplaten merged commit 2868d99 into huggingface:main on Dec 10, 2022
@timh deleted the fix-1566-dreambooth branch on December 11, 2022 at 00:19
tcapelle pushed a commit to tcapelle/diffusers that referenced this pull request on Dec 12, 2022:
dreambooth: fix huggingface#1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16 (huggingface#1618)

* dreambooth: fix huggingface#1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16

* dreambooth: guard against passing keep_fp32_wrapper arg to older versions of accelerate. part of fix for huggingface#1566

* Apply suggestions from code review

Co-authored-by: Pedro Cuenca <[email protected]>

* Update examples/dreambooth/train_dreambooth.py

Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Pedro Cuenca <[email protected]>
sliard pushed a commit to sliard/diffusers that referenced this pull request on Dec 21, 2022:
dreambooth: fix huggingface#1566: maintain fp32 wrapper when saving a checkpoint to avoid crash when running fp16 (huggingface#1618)
(same commit message as above)
rafaelgm referenced this pull request in ShivamShrirao/diffusers on Jan 6, 2023:
When using mixed precision and trying to save weights every N steps, I was getting this error after the first save step:

RuntimeError: Input type (struct c10::Half) and bias type (float) should be the same

Adding keep_fp32_wrapper=True to the two unwrap_model calls in save_weights seems to fix the issue.