Description
Is your feature request related to a problem? Please describe.
There are examples of long prompts on the web, but many people are unaware that long strings are truncated.
However, `StableDiffusionPipelineOutput` gives no way to know whether the prompt was truncated or not.
Long prompts are truncated implicitly here:
diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
Lines 175 to 181 in f6fb328
```python
text_input = self.tokenizer(
    prompt,
    padding="max_length",
    max_length=self.tokenizer.model_max_length,
    truncation=True,
    return_tensors="pt",
)
```
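A minimal sketch of how truncation could be detected at encode time. The `SimpleTokenizer` stub below is hypothetical (whitespace splitting standing in for the pipeline's CLIP tokenizer); only the `model_max_length` attribute and the length comparison mirror the real code:

```python
from typing import List, Tuple


class SimpleTokenizer:
    """Hypothetical stand-in for the pipeline's CLIP tokenizer:
    whitespace tokenization with a fixed model_max_length."""

    def __init__(self, model_max_length: int = 77):
        self.model_max_length = model_max_length

    def tokenize(self, prompt: str) -> List[str]:
        return prompt.split()


def encode_with_truncation_flag(tokenizer, prompt: str) -> Tuple[List[str], bool]:
    """Tokenize like the pipeline does, but also report whether
    the prompt exceeded model_max_length and was cut off."""
    tokens = tokenizer.tokenize(prompt)
    truncated = len(tokens) > tokenizer.model_max_length
    return tokens[: tokenizer.model_max_length], truncated


tokenizer = SimpleTokenizer(model_max_length=5)
_, short_flag = encode_with_truncation_flag(tokenizer, "a cat")
_, long_flag = encode_with_truncation_flag(
    tokenizer, "a very long prompt with many extra words"
)
```

With the actual Hugging Face tokenizer, passing `return_overflowing_tokens=True` alongside `truncation=True` may be enough, since the encoding then reports the overflowing portion, so the pipeline would not need a second tokenization pass.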
Describe the solution you'd like
Return a bool value indicating whether the prompt has been truncated or not.
It would also be convenient to return the tokenized prompt.
diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion.py
Lines 276 to 279 in f6fb328
```python
if not return_dict:
    return (image, has_nsfw_concept)

return StableDiffusionPipelineOutput(images=image, nsfw_content_detected=has_nsfw_concept)
```
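One possible shape for the extended output, as a sketch only: the `prompt_truncated` and `tokenized_prompt` fields are the proposed additions and are not part of the existing `StableDiffusionPipelineOutput` API:

```python
from dataclasses import dataclass
from typing import Any, List, Optional


@dataclass
class StableDiffusionPipelineOutputSketch:
    """Hypothetical output class carrying the proposed extra fields."""

    images: List[Any]
    nsfw_content_detected: Optional[List[bool]] = None
    # Proposed: True if the prompt exceeded the tokenizer's max length.
    prompt_truncated: bool = False
    # Proposed: the token ids the pipeline actually used.
    tokenized_prompt: Optional[List[int]] = None


out = StableDiffusionPipelineOutputSketch(
    images=["<PIL.Image>"],
    nsfw_content_detected=[False],
    prompt_truncated=True,
    tokenized_prompt=[49406, 320, 2368, 49407],
)
```

Callers could then check `out.prompt_truncated` after generation instead of re-tokenizing, and the tuple branch (`return_dict=False`) could append the flag as an extra element.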
Describe alternatives you've considered
Additional context
In my app, after generating images, I tokenize the prompts a second time to check whether they have been truncated.
However, tokenizing them again is wasted work.