This repository was archived by the owner on Feb 7, 2025. It is now read-only.

Should we add xformers efficient memory attention mechanisms? #135

@Warvito

Description


The recent Stable Diffusion 2.0 (https://github.com/Stability-AI/stablediffusion) and the Hugging Face diffusers library (https://github.com/huggingface/diffusers) both use the memory-efficient attention from xformers (https://github.com/facebookresearch/xformers). Should we adopt the same in our diffusion models (or even in our transformers)?
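For illustration, here is a minimal sketch of how this could be wired in (the module name, layout, and `use_xformers` flag are hypothetical, not code from this repo): guard the xformers import and fall back to plain scaled dot-product attention when the library is unavailable, which is roughly the pattern diffusers follows.

```python
import math

import torch
from torch import nn

try:
    import xformers.ops

    HAS_XFORMERS = True
except ImportError:
    HAS_XFORMERS = False


class SelfAttention(nn.Module):
    """Hypothetical attention block with an optional xformers backend."""

    def __init__(self, dim: int, num_heads: int = 8, use_xformers: bool = True) -> None:
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.to_out = nn.Linear(dim, dim)
        # Only enable the xformers path if the library is actually installed.
        self.use_xformers = use_xformers and HAS_XFORMERS

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _ = x.shape
        # Project to q, k, v with shape (batch, seq_len, heads, head_dim).
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, self.num_heads, self.head_dim) for t in (q, k, v))

        if self.use_xformers:
            # Memory-efficient attention: avoids materialising the full
            # (seq_len x seq_len) attention matrix. In practice this runs
            # on CUDA tensors.
            out = xformers.ops.memory_efficient_attention(q, k, v)
        else:
            # Standard scaled dot-product attention as a fallback.
            q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # (b, heads, n, head_dim)
            attn = (q @ k.transpose(-2, -1)) / math.sqrt(self.head_dim)
            out = (attn.softmax(dim=-1) @ v).transpose(1, 2)

        return self.to_out(out.reshape(b, n, -1))
```

Making it an opt-in flag would keep xformers as a soft dependency, so users without it installed (or on unsupported hardware) would still get the standard attention path.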

Labels: question