PhD Seminar • Machine Learning • Stochastic Forward–Backward Deconvolution: Training Diffusion Models with Finite Noisy Datasets

Monday, April 14, 2025 3:00 pm - 4:00 pm EDT (GMT -04:00)

Please note: This PhD seminar will take place in DC 2568.

Haoye Lu, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Yaoliang Yu

Recent diffusion-based generative models achieve remarkable results by training on massive datasets, yet this practice raises concerns about memorization and copyright infringement. A proposed remedy is to train exclusively on noise-corrupted copies of the data whose copyright is in question, ensuring the model never observes the original content. However, through the lens of deconvolution theory, we show that although it is theoretically feasible to learn the data distribution from noisy samples alone, the number of samples required in practice makes successful learning nearly unattainable.
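
For context, the deconvolution framing can be sketched roughly as follows; the notation below (p_data, sigma, p_sigma) is illustrative and introduced here for exposition, not taken from the talk abstract.

```latex
% Minimal illustrative sketch of the deconvolution setup.
% The notation (p_data, sigma, p_sigma) is assumed here for exposition only.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Each noisy training sample is a clean sample corrupted by Gaussian noise,
\[
  y = x + \varepsilon, \qquad x \sim p_{\mathrm{data}}, \quad
  \varepsilon \sim \mathcal{N}(0, \sigma^{2} I),
\]
so the observed samples follow the convolution
\[
  p_{\sigma}(y) = \int p_{\mathrm{data}}(x)\,
  \mathcal{N}\!\bigl(y;\, x, \sigma^{2} I\bigr)\, \mathrm{d}x .
\]
Deconvolution asks for $p_{\mathrm{data}}$ given only samples from
$p_{\sigma}$: identifiable in principle, but known to require very large
sample sizes when the noise is Gaussian, which is the practical obstacle
the talk addresses.
\end{document}
```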

To overcome this limitation, we propose to pretrain the model on a small fraction of clean data to guide the deconvolution process. Combined with our Stochastic Forward–Backward Deconvolution (SFBD) method, we attain an FID of 6.31 on CIFAR-10 with just 4% clean images (and 3.58 with 10%). Theoretically, we prove that SFBD guides the model to learn the true data distribution. The result also highlights the importance of pretraining on a limited amount of clean data or, alternatively, on similar datasets. Empirical studies further support these findings and offer additional insights.