Abstract
The representer theorem ensures that kernel methods retain optimality under penalized empirical risk minimization. While a sufficient condition on the form of the regularizer that guarantees the representer theorem has been known since the initial development of kernel methods, necessary conditions have only been investigated recently. In this paper we completely characterize the necessary and sufficient conditions on the regularizer that ensure the representer theorem holds. The results are surprisingly simple yet broaden the conditions under which the representer theorem is known to hold. An extension to the matrix domain is also addressed.
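As background for the abstract above, the classical representer theorem states that the minimizer of a penalized empirical risk over a reproducing kernel Hilbert space takes the finite form f(x) = Σᵢ αᵢ k(xᵢ, x), a combination of kernel sections at the training points. The sketch below (illustrative only, not the paper's method; all names and parameters are hypothetical) shows the standard instance, kernel ridge regression, where the representer coefficients have a closed form.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian RBF kernel matrix: k(x, z) = exp(-gamma * ||x - z||^2)."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_krr(X, y, lam=0.01, gamma=1.0):
    """Kernel ridge regression: solve (K + n*lam*I) alpha = y.

    By the representer theorem, the RKHS minimizer of
    (1/n) sum_i (f(x_i) - y_i)^2 + lam * ||f||^2 is determined by
    these finitely many coefficients alpha.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    """Evaluate f(x) = sum_i alpha_i k(x_i, x): the representer form."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha

# Toy usage: fit a noisy sine curve with 40 samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
alpha = fit_krr(X, y)
preds = predict(X, alpha, X)
```

The point of the example is that the learned function is fully described by the 40 coefficients `alpha`, regardless of the (infinite) dimension of the RKHS; the paper characterizes exactly which regularizers admit this reduction.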
Type
Publication
International Conference on Machine Learning (ICML)