Reusing Computation in Text-to-Image Diffusion for Efficient Image Generation
- #computational efficiency
- #text-to-image
- #diffusion models
- Text-to-image diffusion models are computationally expensive.
- Proposes a method to reduce redundancy across correlated prompts by clustering semantically similar prompts and sharing computation in the early diffusion steps (a structural sketch follows the list below).
- Leverages the coarse-to-fine nature of diffusion models where early steps capture shared structures among similar prompts.
- Training-free approach that works with models conditioned on image embeddings.
- Significantly reduces compute cost while improving image quality.
- Integrates seamlessly with existing pipelines and scales with the size of the prompt set.
- Reduces environmental and financial burden of large-scale text-to-image generation.
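A minimal sketch of the idea described above, assuming a clustering-then-branching structure: prompts are grouped by embedding similarity, the early (coarse) denoising steps are run once per cluster on a shared condition, and the later (fine) steps branch out per prompt. The helpers `embed_prompt` and `denoise_step`, the cluster count, and the 20/50 step split are placeholders for illustration, not the paper's actual components or settings.

```python
# Hypothetical sketch: cluster prompts, share early denoising, branch late steps.
import numpy as np
from sklearn.cluster import KMeans


def embed_prompt(prompt: str) -> np.ndarray:
    """Placeholder text embedding (stand-in for the model's real text encoder)."""
    rng = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng.standard_normal(64)


def denoise_step(latent: np.ndarray, cond: np.ndarray, t: int) -> np.ndarray:
    """Placeholder for one UNet/scheduler denoising step."""
    return latent - 0.01 * (latent - cond)


def generate_batch(prompts, total_steps=50, shared_steps=20, n_clusters=4):
    # Cluster prompts by semantic similarity of their embeddings.
    embeddings = np.stack([embed_prompt(p) for p in prompts])
    k = min(n_clusters, len(prompts))
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)

    images = [None] * len(prompts)
    for c in range(k):
        members = [i for i, lab in enumerate(labels) if lab == c]
        if not members:
            continue
        # Early (coarse) steps: computed ONCE per cluster on a shared condition.
        centroid = embeddings[members].mean(axis=0)
        latent = np.random.standard_normal(64)
        for t in range(shared_steps):
            latent = denoise_step(latent, centroid, t)
        # Late (fine) steps: branch from the shared latent, one pass per prompt.
        for i in members:
            x = latent.copy()
            for t in range(shared_steps, total_steps):
                x = denoise_step(x, embeddings[i], t)
            images[i] = x
    return images
```

Under this structure, the early steps cost one pass per cluster rather than one per prompt, which is where the savings come from; only the later, prompt-specific steps scale with the number of prompts.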