Conversation

@pcuenca (Member) commented Oct 7, 2022

Fixes #760.

repeat_interleave was slowing things down, but not causing wrong outputs. Something between v0.3.0 and v0.4.0 (possibly some of the optimization changes) broke generation in mps. Fortunately, the latest mps version of PyTorch nightly (released today) works fine 😅, so I'm updating the documentation to recommend it.
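
For context, the workaround the commits below call an "alt. implementation for repeat_interleave" can be sketched roughly like this: build the same interleaved expansion out of Tensor.repeat plus a reshape, which did not hit the mps slowdown. The helper name, shapes, and scalar-`repeats` restriction are illustrative assumptions, not the exact code that was merged:

```python
import torch

def repeat_interleave_workaround(x: torch.Tensor, repeats: int, dim: int = 0) -> torch.Tensor:
    """Hypothetical stand-in for torch.repeat_interleave(x, repeats, dim=dim)
    with a scalar `repeats`, built from Tensor.repeat + reshape."""
    if dim < 0:
        dim += x.dim()
    x = x.unsqueeze(dim + 1)            # add a new axis right after `dim`
    repeat_shape = [1] * x.dim()
    repeat_shape[dim + 1] = repeats
    x = x.repeat(*repeat_shape)         # tile along the new axis
    new_shape = list(x.shape)
    new_shape[dim] *= repeats
    del new_shape[dim + 1]              # fold the tiled axis back into `dim`
    return x.reshape(new_shape)

# Same result as repeat_interleave, e.g. when expanding text embeddings
# to several images per prompt:
emb = torch.randn(2, 77, 768)
out = repeat_interleave_workaround(emb, repeats=4, dim=0)
assert torch.equal(out, emb.repeat_interleave(4, dim=0))
```

Tensor.repeat duplicates whole blocks, so inserting the temporary axis is what makes the result element-interleaved rather than block-repeated.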

@HuggingFaceDocBuilderDev commented Oct 7, 2022

The documentation is not available anymore as the PR was closed or merged.

@patil-suraj (Contributor) left a comment

Looks good, just left a couple of comments.

@pcuenca requested a review from patil-suraj, October 7, 2022 19:12
@pcuenca (Member, Author) commented Oct 7, 2022

Changes applied, I think this is ready to go @patil-suraj @patrickvonplaten

@pcuenca (Member, Author) commented Oct 8, 2022

Hmm, the test is failing, I'll verify it locally.

@patil-suraj (Contributor) left a comment

Good to merge for me once the tests pass.

Fix repeat dimensions:
- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.
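
For readers reconstructing the fix: the two bullets above are about how the single unconditional embedding gets tiled out for classifier-free guidance. The snippet below is a rough illustration of the intended shapes; the variable names and sizes are assumptions for illustration, not the pipeline's actual code:

```python
import torch

batch_size = 2                 # number of prompts
num_images_per_prompt = 4

# Conditional text embeddings, already expanded to one row per generated image.
text_embeddings = torch.randn(batch_size * num_images_per_prompt, 77, 768)

# The unconditional embeddings come from a single (empty) prompt...
uncond_embeddings = torch.randn(1, 77, 768)

# ...so they must be repeated to the full expanded batch. Reusing the name
# `batch_size` for an intermediate value (the shadowing mentioned above)
# would make this repeat count wrong.
uncond_embeddings = uncond_embeddings.repeat(batch_size * num_images_per_prompt, 1, 1)
assert uncond_embeddings.shape == text_embeddings.shape

# Classifier-free guidance concatenates the two halves into one forward pass.
embeddings = torch.cat([uncond_embeddings, text_embeddings])
```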
@pcuenca merged commit 24b8b5c into main on Oct 11, 2022
@pcuenca deleted the mps-repeat-interleave branch, October 11, 2022 18:30
patrickvonplaten added a commit that referenced this pull request Oct 11, 2022
* mps: alt. implementation for repeat_interleave

* style

* Bump mps version of PyTorch in the documentation.

* Apply suggestions from code review

Co-authored-by: Suraj Patil <[email protected]>

* Simplify: do not check for device.

* style

* Fix repeat dimensions:

- The unconditional embeddings are always created from a single prompt.
- I was shadowing the batch_size var.

* Split long lines as suggested by Suraj.

Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Suraj Patil <[email protected]>
prathikr pushed a commit to prathikr/diffusers that referenced this pull request Oct 26, 2022
…e#766)

yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
…e#766)

Linked issue: StableDiffusionPipeline producing unexpected output with MPS device using diffusers==0.4.0
