Newton-type Methods for Minimax Optimization

Jul 24, 2021 · G. Zhang, K. Wu, P. Poupart, Y. Yu
Abstract
Minimax optimization has long been an important modeling tool in applied science and has received renewed interest in machine learning due to many recent applications, such as adversarial training and generative models. However, with few exceptions, existing theory focuses on convex-concave functions. In this work, we propose two Newton-type algorithms for nonconvex-nonconcave minimax optimization. We prove their local convergence at strict local minimax points, which are surrogates of global solutions. Our Newton-type algorithms nicely complement existing ones in that (a) they converge faster to strict local minimax points; (b) they are much more effective when the problem is ill-conditioned; (c) their computational cost remains comparable. We verify the effectiveness of our Newton-type algorithms through experiments on training GANs, which are intrinsically nonconvex and ill-conditioned.
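The two algorithms themselves are detailed in the paper; as background, here is a minimal NumPy sketch (an illustration, not the authors' methods) of a plain Newton step applied to the gradient field v(x, y) = (∇ₓf, −∇ᵧf) of a toy quadratic game, whose root is the strict local minimax point. The coefficients a, b, c below are hypothetical; the small c mimics the ill-conditioning mentioned in the abstract.

```python
import numpy as np

# Toy quadratic game: f(x, y) = 0.5*a*x**2 + b*x*y - 0.5*c*y**2,
# with a, c > 0, so (0, 0) is the strict local (and global) minimax point.
a, b, c = 1.0, 2.0, 1e-2  # illustrative values; small c makes the game ill-conditioned

def v(z):
    # Gradient field (df/dx, -df/dy); its root is the minimax point.
    x, y = z
    return np.array([a * x + b * y, -b * x + c * y])

def jac(z):
    # Jacobian of v (constant here because f is quadratic).
    return np.array([[a, b], [-b, c]])

z = np.array([1.0, 1.0])
for k in range(5):
    z = z - np.linalg.solve(jac(z), v(z))  # Newton step
    print(k, np.linalg.norm(v(z)))         # residual drops to ~0 after one step
```

On this quadratic the Newton step solves the game exactly in one iteration, whereas simultaneous gradient descent-ascent would need a step size and iteration count that degrade with the conditioning; closing that gap in the nonconvex-nonconcave setting is the motivation for the paper's Newton-type methods.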
Publication
ICML Workshop on Beyond First-Order Methods in ML Systems