In the recent Stable Diffusion 2.0 (https://github.com/Stability-AI/stablediffusion) and in the Hugging Face diffusers library (https://github.com/huggingface/diffusers), they use the memory-efficient attention from xformers (https://github.com/facebookresearch/xformers). Should we try to adopt the same in our diffusion models (or even transformers)?
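For reference, xformers exposes this as `xformers.ops.memory_efficient_attention`, which avoids materializing the full attention matrix. A minimal sketch of what a drop-in use would look like (shapes, dtypes, and the module being replaced are illustrative assumptions, not taken from this repo):

```python
# Sketch: calling xformers' memory-efficient attention in place of a
# standard scaled-dot-product attention. Assumes xformers is installed
# and a CUDA GPU is available; B/L/H/D below are made-up example sizes.
import torch
import xformers.ops as xops

B, L, H, D = 2, 1024, 8, 64  # batch, sequence length, heads, head dim
q = torch.randn(B, L, H, D, device="cuda", dtype=torch.float16)
k = torch.randn(B, L, H, D, device="cuda", dtype=torch.float16)
v = torch.randn(B, L, H, D, device="cuda", dtype=torch.float16)

# Computes softmax(q @ k^T / sqrt(D)) @ v without building the full
# L x L attention matrix, so peak memory grows roughly linearly in L.
out = xops.memory_efficient_attention(q, k, v)  # (B, L, H, D)
```

In diffusers this is already wrapped behind a one-line switch on the pipeline, `pipe.enable_xformers_memory_efficient_attention()`, which swaps the attention processors for xformers-backed ones.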