Please note: This seminar can be attended either in person or virtually.
Ayush Sekhari, PhD candidate
Computer Science Department, Cornell University
Multi-epoch, small-batch Stochastic Gradient Descent (SGD) has been the method of choice for learning with large, over-parameterized models. A popular theory for why SGD works well in practice is that the algorithm performs an implicit regularization that biases its output toward a good solution. Perhaps the most theoretically well-understood learning setting for SGD is that of Stochastic Convex Optimization (SCO), where it is well known that SGD learns at the minimax optimal rate.
In this talk, we will consider the problem of SCO and discuss several surprising results on the roles of implicit regularization, batch size, and multiple epochs for SGD. We will also discuss extensions of these results to the general learning setting and to deep learning.
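For readers unfamiliar with the setup in the abstract, the following is a minimal, illustrative sketch (not taken from the talk) of multi-epoch, small-batch SGD on a simple stochastic convex problem, least-squares regression. All data, hyperparameters, and names here are assumptions made for the example.

```python
# Illustrative sketch of multi-epoch, small-batch SGD on a convex objective
# (least-squares regression). Hyperparameters are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: n samples from a noisy linear model.
n, d = 1000, 20
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

def sgd(X, y, epochs=5, batch_size=8, lr=0.01):
    """Multi-epoch, small-batch SGD for the convex objective
    F(w) = (1/n) * sum_i (x_i @ w - y_i)^2 / 2."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)          # reshuffle the data each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            residual = X[idx] @ w - y[idx]  # batch predictions minus targets
            grad = X[idx].T @ residual / len(idx)
            w -= lr * grad                  # stochastic gradient step
    return w

w_hat = sgd(X, y)
print("excess squared error:", np.mean((X @ w_hat - X @ w_true) ** 2))
```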
Bio: Ayush Sekhari is a PhD student in the Computer Science department at Cornell University, advised by Professor Karthik Sridharan and Professor Robert D. Kleinberg. His research interests span optimization, online learning, reinforcement learning and control, and the interplay between them. Before coming to Cornell, he spent a year at Google as part of the Brain residency program. Before Google, he completed his undergraduate studies in computer science at IIT Kanpur in India, where he was awarded the President’s Gold Medal.
To attend this seminar virtually on Zoom, please go to https://uwaterloo.zoom.us/j/94367782579?pwd=K0tHazNqUTljQkFtVENaTFRDcWNaQT09.
To attend this seminar in person in DC 1304, proof of identification plus proof of vaccination or proof of entitlement to a medical exemption (you can use the provincial QR code or a copy of a COVID-19 vaccination receipt) is required.
Vaccination receipts may be downloaded or printed through the provincial COVID-19 vaccination portal. More information about the campus vaccination requirement is available online.
200 University Avenue West
Waterloo, ON N2L 3G1