Hi, thanks for the great work.
I want to know if there is any way to load a new concept obtained by fine-tuning (whether via textual inversion, DreamBooth, or another method) into the inpainting model. For example, as shown in the textual inversion paper: replace the iPhone in Steve Jobs's hands with the learned placeholder concept.
I only saw examples of the fine-tuning methods (TI, DreamBooth) and the inference example for inpainting in this repo, but I don't know how the inpainting pipeline loads the embedding obtained from textual inversion fine-tuning. Please help me.
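For reference, this is roughly what I imagine should work, adapting the textual inversion inference example to the inpainting pipeline. It's only a sketch of what I'm hoping to do; the file name `learned_embeds.bin` and the token `<my-concept>` are just placeholders standing in for the output of my own fine-tune.

```python
import torch
from diffusers import StableDiffusionInpaintPipeline

# Load the inpainting pipeline as usual.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# Load the embedding saved by the textual inversion fine-tune.
# The file is a dict mapping the placeholder token to its learned embedding
# ("learned_embeds.bin" and "<my-concept>" are placeholders for my own run).
learned_embeds = torch.load("learned_embeds.bin", map_location="cpu")
placeholder_token, embeds = next(iter(learned_embeds.items()))

# Register the placeholder token with the tokenizer and copy the learned
# embedding into the text encoder's embedding matrix.
pipe.tokenizer.add_tokens(placeholder_token)
pipe.text_encoder.resize_token_embeddings(len(pipe.tokenizer))
token_id = pipe.tokenizer.convert_tokens_to_ids(placeholder_token)
pipe.text_encoder.get_input_embeddings().weight.data[token_id] = embeds.to(
    pipe.text_encoder.dtype
)

# Then use the token in the inpainting prompt, e.g.:
# image = pipe(
#     prompt="a photo of <my-concept> in his hands",
#     image=init_image,
#     mask_image=mask_image,
# ).images[0]
```

Is this the intended way, or is there an officially supported path for loading these embeddings into the inpainting pipeline?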