Please note: This PhD defence will take place in DC 2314 and online.
Weijie Zhou, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Toshiya Hachisuka
In realistic image synthesis, Monte Carlo integration is the foundation of most rendering algorithms, but it inevitably introduces noise. To reduce such noise, advanced sampling strategies, such as Markov chain Monte Carlo (MCMC) and resampled importance sampling, as well as modern denoising techniques, have been proposed. However, these methods often introduce correlations between samples or pixels that can manifest as new artifacts. This thesis investigates three research directions, spanning from mitigating correlation to actively exploiting it.
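For readers outside rendering, the source of the noise can be stated in one line. A pixel value is an integral I = ∫ f(x) dx over light paths, and the standard Monte Carlo estimator (shown here in textbook notation, not notation taken from the thesis) averages N samples drawn from a density p:

\[
\langle I \rangle_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)},
\qquad
\operatorname{Var}\big[\langle I \rangle_N\big] = \frac{\sigma^2}{N}.
\]

The estimator is unbiased, but the residual variance appears as per-pixel noise; the sampling strategies above reduce \(\sigma^2\) by concentrating samples where \(f\) is large.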
The first direction tackles correlation in MCMC methods. Traditional MCMC often suffers from low acceptance rates, producing visually “spiky” noise. We propose combining MCMC with path guiding techniques to improve acceptance probabilities, thereby reducing error and improving image quality.
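To make the bottleneck concrete: a Metropolis-Hastings sampler with target \(f\) and proposal distribution \(T\) (the standard formulation, not the thesis's specific variant) accepts a move from state \(x\) to state \(y\) with probability

\[
a(x \to y) = \min\!\left(1,\; \frac{f(y)\, T(y \to x)}{f(x)\, T(x \to y)}\right).
\]

When \(a\) is small, the chain rejects repeatedly and deposits the same path contribution many times, which is what shows up as spiky noise; a guided proposal \(T\) that better matches \(f\) pushes \(a\) toward one.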
The second direction addresses correlation artifacts in the widely used ReSTIR (Reservoir-based Spatiotemporal Importance Resampling) algorithm. While ReSTIR achieves efficient sampling by reusing samples across pixels and frames, this reuse can lead to blotchy artifacts, as many pixels may end up sharing only a few important samples. Observing parallels between ReSTIR and MCMC, we introduce a new spatiotemporal MCMC framework that replaces reservoir resampling. Applied to both direct illumination and path tracing, our approach significantly reduces correlation artifacts while retaining efficiency.
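For intuition, the reservoir update at the heart of ReSTIR-style resampling can be sketched in a few lines of Python (a minimal illustration following the published formulation; the class and method names here are ours, not the thesis code):

```python
import random

class Reservoir:
    """Single-sample weighted reservoir, as used in resampled importance sampling."""

    def __init__(self):
        self.y = None      # currently selected sample
        self.w_sum = 0.0   # running sum of resampling weights
        self.M = 0         # number of candidates seen so far

    def update(self, x, w):
        """Stream in candidate x with resampling weight w (e.g. p_hat(x) / p(x))."""
        self.w_sum += w
        self.M += 1
        # Keep x with probability proportional to its weight among all candidates so far.
        if self.w_sum > 0.0 and random.random() * self.w_sum < w:
            self.y = x

    def contribution_weight(self, p_hat_y):
        """Unbiased weight W for the surviving sample: f(y) * W estimates the integral."""
        if self.y is None or p_hat_y == 0.0:
            return 0.0
        return self.w_sum / (self.M * p_hat_y)
```

Merging the reservoirs of neighbouring pixels, or of the previous frame, amounts to another update call, which is what makes the reuse so cheap, and also why one high-weight sample can win in many reservoirs at once and produce the blotchy correlation artifacts described above.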
The final direction shifts from reducing correlation to exploiting it. We present a generalized combination framework that leverages spatial, temporal, and multiscale correlations to reduce error. This method enables robust cross-domain fusion, effectively suppressing systematic artifacts and improving temporal coherence, which is particularly crucial in animation. Through extensive experiments, we demonstrate that our framework improves temporal stability and visual fidelity and reduces residual error across diverse rendering scenarios.
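One standard ingredient behind such combinations (an illustrative identity, not the thesis's full framework): when two unbiased estimates \(A\) and \(B\) of the same pixel are blended as \(C = \alpha A + (1-\alpha) B\), the variance-minimizing weight depends explicitly on their correlation,

\[
\alpha^{\ast} = \frac{\operatorname{Var}[B] - \operatorname{Cov}[A,B]}{\operatorname{Var}[A] + \operatorname{Var}[B] - 2\operatorname{Cov}[A,B]},
\]

so knowing how estimates across space, time, and scales co-vary lets a combiner weight them to drive the residual error below that of either input.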
To attend this PhD defence in person, please go to DC 2314. You can also attend virtually on Zoom.