Please note: This seminar will take place in DC 1304 and online.
Alexander Terenin, Postdoctoral Research Associate
Sidney Sussex College, University of Cambridge
In Gaussian processes, conditioning and computation of posterior distributions are usually done in a distributional fashion by working with finite-dimensional marginals. However, there is another way to think about conditioning: using actual random functions rather than their probability distributions. This perspective is particularly helpful in decision-theoretic settings such as Bayesian optimization, where it enables efficient computation of a wider class of acquisition functions than otherwise possible. In this talk, we describe these recent advances and discuss their broader implications for Bayesian modeling.
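The "conditioning on random functions" idea in the abstract can be illustrated with Matheron's rule: instead of computing the posterior distribution, one draws a sample from the prior and updates that sampled function so it agrees with the observed data. The sketch below, assuming a squared-exponential kernel and noiseless observations at hypothetical training points, shows a minimal NumPy version of this pathwise update (all function and variable names here are illustrative, not from the talk):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

rng = np.random.default_rng(0)

# Hypothetical observed data: noiseless evaluations at three points.
X = np.array([-2.0, 0.0, 1.5])
y = np.sin(X)

# Test locations where we want a posterior function sample.
Xs = np.linspace(-3.0, 3.0, 50)

# Draw one joint prior sample over training and test locations.
Xall = np.concatenate([X, Xs])
Kall = rbf(Xall, Xall) + 1e-9 * np.eye(len(Xall))  # jitter for stability
f_prior = np.linalg.cholesky(Kall) @ rng.standard_normal(len(Xall))
f_X, f_Xs = f_prior[:len(X)], f_prior[len(X):]

# Matheron's rule: update the sampled function, not the distribution.
#   f_post(x) = f_prior(x) + k(x, X) K(X, X)^{-1} (y - f_prior(X))
K = rbf(X, X) + 1e-9 * np.eye(len(X))
f_post = f_Xs + rbf(Xs, X) @ np.linalg.solve(K, y - f_X)

# At the training points the same update reproduces the data exactly,
# since the correction cancels the prior sample there.
f_post_at_X = f_X + rbf(X, X) @ np.linalg.solve(K, y - f_X)
```

Because the posterior is represented as an actual function sample rather than a mean-and-covariance pair, quantities like the minimizer of the sample, which underlie acquisition functions such as Thompson sampling, can be computed directly from `f_post`.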
Bio: Alexander Terenin is a Postdoctoral Research Associate at the University of Cambridge. He is interested in statistical machine learning, particularly in settings where the data is not fixed, but is gathered interactively by the learning machine. This leads naturally to Gaussian processes and data-efficient interactive decision-making systems such as Bayesian optimization, to areas such as multi-armed bandits and reinforcement learning, and to techniques for incorporating inductive biases and prior information such as symmetries into machine learning models.
To join this seminar on Zoom, please go to https://uwaterloo.zoom.us/j/91756150722?pwd=NzZxb0NTNWpLSjB6OEJjdVBWUjc5dz09.
200 University Avenue West
Waterloo, ON N2L 3G1