Optimization for Data Science
Instructor: Yao-Liang Yu
Email: yaoliang.yu@uwaterloo.ca
Lectures (Tentative)
| # | Topic | Slides | Notes |
| --- | --- | --- | --- |
| 00 | Introduction | | |
| 01 | Gradient Descent | | |
| 02 | Proximal Gradient | proj, prox | proj, prox |
| 03 | Conditional Gradient | | |
| 04 | Subgradient | | |
| 05 | Mirror Descent | | |
| 06 | Metric Gradient | | |
| 07 | Accelerated Gradient | | |
| 08 | Minimax | | |
| 09 | Alternating | | |
| 10 | POCS | | |
| 11 | Splitting | | |
| 12 | Stochastic Gradient | | |
| 13 | Variance Reduction | | |
| 14 | Randomized Smoothing | | |
| 15 | | | |
| 16 | Newton | | |
| 17 | Riemannian Gradient | | |
| 18 | Adaptation | | |
| 19 | Performance Estimation | | |
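For readers new to the area, here is a minimal, illustrative sketch (not course material) contrasting the plain gradient step from Lecture 01 with the proximal gradient step from Lecture 02 on a toy lasso problem; the problem data and names (A, b, lam, step) are assumptions for the example only.

```python
import numpy as np

# Toy problem: min_x 0.5*||Ax - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, with L the Lipschitz constant of the gradient

def grad(x):
    # gradient of the smooth part 0.5*||Ax - b||^2
    return A.T @ (A @ x - b)

def soft_threshold(x, t):
    # proximal operator of t*||.||_1 (the "prox" from Lecture 02)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x_gd = np.zeros(20)  # gradient descent iterate (ignores the l1 term)
x_pg = np.zeros(20)  # proximal gradient iterate
for _ in range(500):
    x_gd = x_gd - step * grad(x_gd)                               # plain gradient step
    x_pg = soft_threshold(x_pg - step * grad(x_pg), step * lam)   # gradient step + prox
```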
Project
Project template: download the zip; the main file is project.tex, and references go in readings.bib.
Textbook
There is no required textbook, but the following fine texts are recommended.
- Stephen J. Wright and Benjamin Recht. Optimization for Data Analysis. Cambridge University Press, 2022.
- Ernest K. Ryu and Wotao Yin. Large-Scale Convex Optimization. Cambridge University Press, 2023.
- Yurii E. Nesterov. Lectures on Convex Optimization. Springer, 2018.
- Boris T. Polyak. Introduction to Optimization. Optimization Software, 1987.