
dreambooth lora: text encoder monkey-patching not working #3445

@rvorias

Describe the bug

In the DreamBooth LoRA script, the LoRA layers are monkey-patched into the text encoder, since the text encoder comes from transformers. However, this has some undesirable results:

[Image: grid of generated samples, with and without the text-encoder monkey patch, showing visible distortions in the patched samples.]

This is before any weights have been updated. Vertically, the images share the exact same seed.

I suspect these lines of code:

```python
def new_forward(x):
    return old_forward(x) + lora_layer(x)
```

Commenting out the LoRA term (leaving just `return old_forward(x)`) produces the same distortions. This means that merely wrapping the original forward reference and injecting the wrapper via monkey-patching already introduces these artifacts.

My current guess is that this is some pass-by-reference issue, e.g. a late-binding closure problem where every patched module ends up calling the `old_forward` captured on the last loop iteration.
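For reference, below is a minimal, self-contained sketch of the kind of late-binding pitfall I mean. It is not the actual diffusers code, just an illustration with plain `nn.Linear` layers: when several modules are patched in one loop, the free variable `old_forward` is resolved when the wrapper is called, so every wrapper can end up routing through the forward captured on the last iteration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 4)

def make_layers():
    # Reseed so both constructions below get identical weights.
    torch.manual_seed(1)
    return nn.ModuleList([nn.Linear(4, 4) for _ in range(3)])

layers = make_layers()
ref = [layer(x) for layer in layers]  # outputs before patching

# Buggy patch (LoRA term dropped, mirroring the experiment above):
for layer in layers:
    old_forward = layer.forward
    def new_forward(h):
        # `old_forward` is looked up when the wrapper runs, so it points
        # at whatever the loop variable held on the final iteration.
        return old_forward(h)
    layer.forward = new_forward

print([torch.allclose(layer(x), r) for layer, r in zip(layers, ref)])
# -> [False, False, True]: layers 0 and 1 silently route through layer 2.

# A common fix: bind per-layer state eagerly via a default argument (a
# small factory function works too), so each wrapper keeps its own ref.
layers = make_layers()
for layer in layers:
    def new_forward(h, old_forward=layer.forward):
        return old_forward(h)
    layer.forward = new_forward

print([torch.allclose(layer(x), r) for layer, r in zip(layers, ref)])
# -> [True, True, True]: every patched forward matches its original.
```

If the script's patch loop has this shape, the same default-argument binding (or a factory that closes over one module at a time) would pin each wrapper to the correct original forward.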

Reproduction

Set `train_text_encoder: True` and check the generated images at epoch 0, before any training step has run.
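A cheaper check than sampling images is to compare text-encoder outputs directly. This is a hypothetical sketch: `tokenizer` and `text_encoder` are assumed to be the CLIP tokenizer/encoder loaded by the training script, and `apply_lora_patch` is a stand-in name for the monkey-patching routine under test. Since LoRA's up-projection is conventionally zero-initialized, the patch should be an exact no-op before training.

```python
import copy

import torch

tokens = tokenizer(
    ["a photo of sks dog"],
    padding="max_length",
    max_length=tokenizer.model_max_length,
    return_tensors="pt",
).input_ids

# Keep an unpatched copy as the reference.
reference = copy.deepcopy(text_encoder)
apply_lora_patch(text_encoder)  # hypothetical helper name

with torch.no_grad():
    ref_out = reference(tokens)[0]      # last hidden states, unpatched
    patched_out = text_encoder(tokens)[0]

# Should print True for an untrained, correctly applied patch;
# False indicates the wrapping itself changes the outputs.
print(torch.allclose(ref_out, patched_out))
```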

Logs

No response

System Info

Python 3.8.10

accelerate==0.19.0
diffusers @ git+https://github.com/huggingface/diffusers.git@7a32b6beeb0cfdefed645253dce23d9b0a78597f
transformers==4.29.1

Labels

bug (Something isn't working), stale (Issues that haven't received updates)
