Please note: This PhD seminar will take place in DC 2314 and online.
Ege Ciklabakkal, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Toshiya Hachisuka
Random sampling is central to Monte Carlo (MC) methods. Beyond sample size, the accuracy of the MC estimator depends on how uniformly the samples cover the integration domain. Discrepancy measures this uniformity as the deviation between a point set’s empirical distribution and the Lebesgue measure on the unit hypercube. Low-discrepancy sets cover the domain more evenly, enabling Quasi-Monte Carlo estimators with smaller error bounds and faster convergence.
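As a reference point (these are the standard textbook definitions, not details from the abstract), the star discrepancy of a point set P = {x_1, ..., x_N} in [0,1]^d is

    D_N^*(P) = \sup_{y \in [0,1]^d} \left| \frac{\#\{\, i : x_i \in [0, y) \,\}}{N} - \prod_{k=1}^{d} y_k \right|,

and the Koksma-Hlawka inequality bounds the Quasi-Monte Carlo error by

    \left| \frac{1}{N} \sum_{i=1}^{N} f(x_i) - \int_{[0,1]^d} f(y)\, dy \right| \le V_{HK}(f)\, D_N^*(P),

where V_{HK}(f) is the variation of f in the sense of Hardy and Krause. The L2 discrepancy discussed below replaces the supremum over anchor boxes [0, y) with an L2 norm.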
Traditionally, low-discrepancy points are obtained from number-theoretic constructions. More recently, Message-Passing Monte Carlo (MPMC) uses a graph-based machine-learning model to directly minimize the L2 discrepancy, producing point sets with smaller discrepancy than classical constructions. MPMC yields optimal or near-optimal discrepancy in low dimensions for small sample sizes, and it extends straightforwardly to higher dimensions, where individual dimensions can be weighted to emphasize uniformity in the dimensions most relevant to the estimator. Follow-up studies generalize MPMC to point-sequence generation and address pathologies of the L2 discrepancy objective. This talk surveys these developments and reports experimental findings on directly optimizing the L2 discrepancy using MC.
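To make the quantity in that last point concrete, the sketch below (Python with NumPy; the function names, anchor-sample count, and toy point set are placeholders chosen for this illustration, not details from the talk) shows how the squared L2 star discrepancy can be estimated by Monte Carlo, by sampling anchor boxes [0, y) uniformly, and compares it against the standard Warnock closed form. It only illustrates the definition being optimized, not the speaker's experiments.

```python
import numpy as np

def l2_star_discrepancy_sq_mc(points, n_anchor=10000, rng=None):
    """Monte Carlo estimate of the squared L2 star discrepancy.

    The squared L2 star discrepancy is the integral over anchor boxes [0, y)
    of the squared gap between the fraction of points inside the box and the
    box volume; here that integral is estimated by sampling y uniformly.
    """
    rng = np.random.default_rng(rng)
    points = np.asarray(points)              # shape (N, d), points in [0, 1]^d
    y = rng.random((n_anchor, points.shape[1]))   # random anchor corners y
    inside = (points[None, :, :] < y[:, None, :]).all(axis=2)
    empirical = inside.mean(axis=1)          # fraction of points in each box [0, y)
    volume = y.prod(axis=1)                  # Lebesgue measure of [0, y)
    return np.mean((empirical - volume) ** 2)

def l2_star_discrepancy_sq_exact(points):
    """Warnock's closed-form expression for the same quantity, for comparison."""
    points = np.asarray(points)
    d = points.shape[1]
    term1 = 3.0 ** (-d)
    term2 = np.prod((1.0 - points ** 2) / 2.0, axis=1).mean()
    pairwise = np.prod(1.0 - np.maximum(points[:, None, :], points[None, :, :]), axis=2)
    return term1 - 2.0 * term2 + pairwise.mean()

if __name__ == "__main__":
    pts = np.random.default_rng(0).random((64, 2))   # toy random point set
    print(l2_star_discrepancy_sq_mc(pts), l2_star_discrepancy_sq_exact(pts))
```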
To attend this PhD seminar in person, please go to DC 2314. You can also attend virtually on Zoom.