Is your feature request related to a problem? Please describe.
Say you want to use CustomDiffusion for some attention layers and LoRA for some others, and you want to pass {'scale': 0.5} to the LoRA layers via cross_attention_kwargs.
The call then fails, because CustomDiffusion has no idea what to do with this parameter.
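For concreteness, here is a minimal sketch of the setup that triggers the problem. It assumes a diffusers version that still ships LoRAAttnProcessor; the checkpoint name, the self-/cross-attention split between the two processor types, and the constructor arguments are only illustrative:

```python
import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import (
    CustomDiffusionAttnProcessor,
    LoRAAttnProcessor,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
unet = pipe.unet

procs = {}
for name in unet.attn_processors.keys():
    # Derive per-layer dims the same way the training scripts do.
    cross_attention_dim = (
        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    )
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]

    if cross_attention_dim is None:
        # Self-attention layers get LoRA (arbitrary split, just for illustration).
        procs[name] = LoRAAttnProcessor(hidden_size=hidden_size)
    else:
        # Cross-attention layers get Custom Diffusion.
        procs[name] = CustomDiffusionAttnProcessor(
            train_kv=True,
            train_q_out=False,
            hidden_size=hidden_size,
            cross_attention_dim=cross_attention_dim,
        )

unet.set_attn_processor(procs)

# `scale` is only understood by the LoRA processors; the Custom Diffusion
# processors receive it too and choke on the unexpected keyword argument.
image = pipe("a photo of a cat", cross_attention_kwargs={"scale": 0.5}).images[0]
```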
Describe the solution you'd like
1. Drop excess kwargs for the affected attention processors. This is the easiest fix, but silent bugs may come up (see the sketch after this list).
2. Add a flag indicating whether excess kwargs are expected or not. The downside is that this looks a bit too ad hoc.
3. Add support for attention kwargs that also specify the layers or attention-processor types they apply to. The downside is a more complicated design and a lot of work, since every pipeline may need to be modified.
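A rough sketch of option 1: filter cross_attention_kwargs against each processor's __call__ signature before dispatching. The helper name is hypothetical, not an existing diffusers API; warning on dropped kwargs is one way to soften the silent-bug downside.

```python
import inspect
import warnings


def call_with_supported_kwargs(processor, attn, hidden_states, **cross_attention_kwargs):
    """Hypothetical helper: forward only the kwargs this processor's __call__ accepts."""
    params = inspect.signature(processor.__call__).parameters
    accepted = {k: v for k, v in cross_attention_kwargs.items() if k in params}
    dropped = set(cross_attention_kwargs) - set(accepted)
    if dropped:
        # Warn instead of failing silently, to address the "silent bugs" concern.
        warnings.warn(
            f"{processor.__class__.__name__} ignored cross_attention_kwargs: {sorted(dropped)}"
        )
    return processor(attn, hidden_states, **accepted)
```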
Additional context
A similar issue was raised in this comment: #1639 (comment), but it did not get much attention.