Q-Diffusion: Quantizing Diffusion Models

Xiuyu Li 1
Yijiang Liu 2
Long Lian 1
Huanrui Yang 1
Zhen Dong 1
Daniel Kang 1 3
Shanghang Zhang 4
Kurt Keutzer 1
1 UC Berkeley
2 Nanjing University
3 University of Illinois Urbana-Champaign
4 Peking University

ICCV 2023

[Paper]
[GitHub]

Abstract

Diffusion models have achieved great success in image synthesis through iterative noise estimation using deep neural networks. However, the slow inference, high memory consumption, and computational intensity of the noise estimation model hinder the efficient adoption of diffusion models. Although post-training quantization (PTQ) is considered a go-to compression method for other tasks, it does not work out-of-the-box on diffusion models. We propose a novel PTQ method specifically tailored to the unique multi-timestep pipeline and model architecture of diffusion models, which compresses the noise estimation network to accelerate the generation process. We identify the key difficulties of diffusion model quantization as the changing output distributions of the noise estimation network over multiple time steps and the bimodal activation distribution of the shortcut layers within the noise estimation network. We tackle these challenges with time step-aware calibration and shortcut-splitting quantization. Experimental results show that our proposed method is able to quantize full-precision unconditional diffusion models into 4-bit while maintaining comparable performance (small FID change of at most 2.34, versus >100 for traditional PTQ) in a training-free manner. Our approach can also be applied to text-guided image generation, where we run Stable Diffusion with 4-bit weights at high generation quality for the first time.


Time Step-aware Calibration Data Sampling

Traditional PTQ calibrates with a single pass through the quantized model, whereas quantizing diffusion models must account for quantization errors that accumulate over the multi-time-step inference and for activation distributions that vary across time steps. Additionally, conventional PTQ approaches typically either calibrate on synthetic data that is inconsistent with the real inputs seen at inference time, or are not data-free. Q-Diffusion instead builds the calibration set by sampling intermediate inputs uniformly across time steps of the full-precision sampling trajectory, which accurately reflects the data encountered at inference while remaining data-free (see the sketch below the figure).
Figure: activation ranges across all time steps of the FP32 DDIM model on CIFAR-10. The activation distributions change gradually, with neighboring time steps being similar and distant ones being distinct.
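Below is a minimal sketch of this sampling procedure (illustrative, not the released implementation): the full-precision noise estimation network is run along its own denoising trajectory, and (x_t, t) pairs are stored at time steps spaced uniformly over the schedule. The names `sample_calibration_set` and `ddim_update`, the assumed `model(x, t)` signature, and all hyper-parameters are assumptions for illustration.

```python
import torch

@torch.no_grad()
def sample_calibration_set(model, timesteps, num_intervals=25,
                           batch_size=8, image_shape=(3, 32, 32),
                           device="cuda"):
    """Collect (x_t, t) calibration pairs uniformly across time steps.

    The full-precision model is run along its own sampling trajectory so
    that the stored intermediate inputs match the activation statistics
    the quantized model will see at inference time.
    """
    T = len(timesteps)
    # time step indices kept for calibration, spaced uniformly over the schedule
    kept = set(range(0, T, max(T // num_intervals, 1)))

    calib_x, calib_t = [], []
    x = torch.randn(batch_size, *image_shape, device=device)
    for i, t in enumerate(timesteps):
        t_batch = torch.full((x.shape[0],), t, device=device, dtype=torch.long)
        if i in kept:
            calib_x.append(x.detach().cpu().clone())
            calib_t.append(t_batch.detach().cpu().clone())
        eps = model(x, t_batch)       # assumed noise-prediction signature
        x = ddim_update(x, eps, t)    # hypothetical one-step DDIM denoising update
    return torch.cat(calib_x), torch.cat(calib_t)
```

Because the samples come from the model's own trajectory rather than a held-out dataset, the procedure stays data-free while still covering the range of distributions shown in the figure above.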

Shortcut-splitting Quantization

The activation ranges of the concatenated deep feature channels (X1) and shallow feature channels (X2) in the UNet shortcut connections differ significantly, resulting in a bimodal weight distribution that is challenging to quantize with a single set of quantization parameters. We propose a "split" quantization scheme that performs quantization prior to concatenation, requiring negligible additional memory or compute; a minimal sketch follows.
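The sketch below illustrates the idea under simplifying assumptions: per-tensor min-max fake quantization, and splitting both the two input groups and the matching input-channel slices of the shortcut convolution's weight. `SplitShortcutConv` and `quantize_uniform` are illustrative names, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def quantize_uniform(x, num_bits=8):
    """Per-tensor uniform (min-max) fake quantization."""
    qmin, qmax = 0, 2 ** num_bits - 1
    scale = (x.max() - x.min()).clamp(min=1e-8) / (qmax - qmin)
    zero_point = qmin - torch.round(x.min() / scale)
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

class SplitShortcutConv(nn.Module):
    """Shortcut convolution with 'split' quantization: the deep features x1
    and the shallow features x2 are quantized separately, each with its own
    range, before concatenation; the weight is split along input channels
    accordingly, so each slice also gets its own quantizer."""
    def __init__(self, conv: nn.Conv2d, split: int, num_bits: int = 8):
        super().__init__()
        self.conv = conv          # the original full-precision shortcut conv
        self.split = split        # number of input channels belonging to x1
        self.num_bits = num_bits

    def forward(self, x1, x2):
        # quantize each input group with its own range, then concatenate
        x = torch.cat([quantize_uniform(x1, self.num_bits),
                       quantize_uniform(x2, self.num_bits)], dim=1)
        # split the weight along input channels and quantize each slice separately
        w1, w2 = self.conv.weight.split(
            [self.split, self.conv.weight.shape[1] - self.split], dim=1)
        w = torch.cat([quantize_uniform(w1, self.num_bits),
                       quantize_uniform(w2, self.num_bits)], dim=1)
        return F.conv2d(x, w, self.conv.bias,
                        self.conv.stride, self.conv.padding)
```

The split only changes how quantization ranges are assigned; the concatenation and convolution themselves are untouched, so the overhead is essentially one extra set of scale/zero-point parameters per shortcut layer.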

Results: Unconditional Generation


Figure: LSUN results using Q-Diffusion and Linear Quantization (round-to-nearest) on Latent Diffusion.


Results: Text-guided Image Generation


Figure: text-to-image (txt2img) results using Q-Diffusion and Linear Quantization (round-to-nearest) on Stable Diffusion v1.4.


Paper

X. Li, Y. Liu, L. Lian, H. Yang, Z. Dong, D. Kang, S. Zhang, and K. Keutzer
Q-Diffusion: Quantizing Diffusion Models
arXiv:2302.04304, 2023.



Citation

@InProceedings{li2023qdiffusion,
  author={Li, Xiuyu and Liu, Yijiang and Lian, Long and Yang, Huanrui and Dong, Zhen and Kang, Daniel and Zhang, Shanghang and Keutzer, Kurt},
  title={Q-Diffusion: Quantizing Diffusion Models},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month={October},
  year={2023},
  pages={17535-17545}
}

Acknowledgements

We thank Berkeley Deep Drive, Intel Corporation, Panasonic, and NVIDIA for supporting this research. We would also like to thank Sehoon Kim, Muyang Li, and Minkai Xu for their valuable feedback. This project page template is adapted from the project pages of Denoised MDPs and a colorful ECCV project.