Conversation

@pcuenca (Member) commented Sep 9, 2022

This affects `add_noise`, and therefore the image-to-image tasks.
See #358.
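
For context, a minimal sketch of the failure mode: indexing a tensor with an index tensor that lives on a different device raises an error, which is what happened inside the schedulers' `add_noise` on mps. The beta schedule and values here are illustrative stand-ins, not the scheduler's actual code; only the final `.to()` line reflects the fix in this PR.

```python
import torch

# Stand-in for the scheduler's cached alphas_cumprod (lives on CPU here).
betas = torch.linspace(0.0001, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

# On Apple Silicon the pipeline's tensors end up on "mps".
device = "mps" if torch.backends.mps.is_available() else "cpu"
timesteps = torch.tensor([10, 500], device=device)

# Before this PR, indexing the CPU tensor with mps-resident timesteps
# failed with a device mismatch:
# sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5

# The fix: move the indices to the indexed tensor's device first.
timesteps = timesteps.to(alphas_cumprod.device)
sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
```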

@HuggingFaceDocBuilderDev commented Sep 9, 2022

The documentation is not available anymore as the PR was closed or merged.

@patil-suraj (Contributor) left a comment

This looks good to me!
But looking at the recent comments in the linked issue, it seems the problem has been resolved on the PyTorch side. Do we still need this fix?

@pcuenca (Member, Author) commented Sep 13, 2022

Yes, we still need it for #239. Sorry for the confusion: there were multiple issues open for the image-to-image task, and I should have referenced #239 instead of the other one.

@pcuenca pcuenca requested a review from patil-suraj September 13, 2022 09:24
```python
    noise: Union[torch.FloatTensor, np.ndarray],
    timesteps: Union[torch.IntTensor, np.ndarray],
) -> Union[torch.FloatTensor, np.ndarray]:
    # Line under review: move timesteps to the device of alphas_cumprod
    # so the tensor indexing in add_noise works on mps.
    timesteps = timesteps.to(self.alphas_cumprod.device)
```
Member commented on the diff:

This breaks the assumption that we can work with numpy arrays as well :(
Maybe we could add a `device` argument to `scheduler.set_format()` and call it from `pipeline.to()`?
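
A hypothetical sketch of that alternative; the `device` parameter and the body are assumptions for illustration, not the actual diffusers API:

```python
# Hypothetical: set_format() grows a `device` argument that pipeline.to()
# would forward, moving cached tensors once at setup time so add_noise
# never has to reconcile devices per call. Illustrative only.
def set_format(self, tensor_format: str = "pt", device=None):
    self.tensor_format = tensor_format
    if tensor_format == "pt" and device is not None:
        self.alphas_cumprod = self.alphas_cumprod.to(device)
    return self
```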

pcuenca (Member, Author) replied:

I'd rather minimize the changes across files for this particular case, which only affects mps. How about some defensive coding instead, as sketched below:

  • Check that the format is pt
  • and that alphas_cumprod is actually a tensor

and only then move timesteps with `to()`?
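
A minimal sketch of that defensive check, assuming the scheduler exposes a `tensor_format` attribute (the merged commit listed below describes verifying the format is PyTorch before using `.to()`):

```python
# Defensive variant: only move timesteps when the scheduler is in
# PyTorch mode and alphas_cumprod is actually a tensor, so the numpy
# code path is left untouched.
if self.tensor_format == "pt" and torch.is_tensor(self.alphas_cumprod):
    timesteps = timesteps.to(self.alphas_cumprod.device)
```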

Contributor replied:

I think we'll move away from `set_format()` since we'll soon have framework-dependent schedulers, so this is OK here.

@patrickvonplaten (Contributor) left a comment

Given that we'll soon have framework-specific schedulers, I think we can go full PyTorch on the existing ones.

@pcuenca (Member, Author) commented Sep 14, 2022

> Given that we'll soon have framework-specific schedulers, I think we can go full PyTorch on the existing ones.

Makes total sense to me. I'll verify that the inputs are actually torch tensors for now, and we can maybe remove the np type hints and support in another PR?
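
If numpy support is dropped later, a hypothetical pure-PyTorch signature (an assumption about a possible future PR, not code merged here) might look like:

```python
# Hypothetical future signature once numpy support is removed; not the
# code merged in this PR.
def add_noise(
    self,
    original_samples: torch.FloatTensor,
    noise: torch.FloatTensor,
    timesteps: torch.IntTensor,
) -> torch.FloatTensor:
    # All inputs are torch tensors, so only a device move is needed.
    timesteps = timesteps.to(self.alphas_cumprod.device)
    ...
```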

@pcuenca pcuenca requested a review from anton-l September 14, 2022 09:12
@anton-l (Member) left a comment

Ah, this makes sense if we'll have framework-specific schedulers, all good then!

@pcuenca pcuenca merged commit 1a69c6f into main Sep 14, 2022
@pcuenca pcuenca deleted the mps-scheduler-indexing branch September 14, 2022 12:33
PhaneeshB pushed a commit to nod-ai/diffusers that referenced this pull request Mar 1, 2023
* added ordered benchmarks to dispatch benchmarking tool

* saved changes

* updated readme

Co-authored-by: Elias Joseph <[email protected]>
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
* Fix LMS scheduler indexing in `add_noise` huggingface#358.

* Fix DDIM and DDPM indexing with mps device.

* Verify format is PyTorch before using `.to()`
