Convex Two-Layer Modeling with Latent Structure

Dec 1, 2016 · V. Ganapathiraman, X. Zhang, Y. Yu, J. Wen
Abstract
Unsupervised learning of structured predictors has been a long-standing pursuit in machine learning. Recently, a conditional random field auto-encoder was proposed in a two-layer setting, allowing latent structured representations to be inferred automatically. Besides being nonconvex, it also requires demanding inference of the normalization (partition function). In this paper, we develop a convex relaxation of a two-layer conditional model that captures latent structure and estimates model parameters jointly and optimally. We further broaden its applicability by resorting to a weaker form of inference, maximum a posteriori (MAP). The flexibility of the model is demonstrated on two structures based on total unimodularity: graph matching and linear chains. Experimental results confirm the promise of the method.
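The role of total unimodularity in enabling MAP inference can be illustrated with a small, self-contained sketch (not code from the paper): MAP labeling of a linear chain written as a linear program over the local marginal polytope. The chain size, random scores, and the use of scipy.optimize.linprog are illustrative assumptions; because the chain's constraint matrix is totally unimodular, the LP optimum is integral and coincides with the exact MAP labeling.

```python
# Minimal sketch: linear-chain MAP as an LP over the local marginal polytope.
# Total unimodularity of the chain constraints makes the LP solution integral.
import numpy as np
from scipy.optimize import linprog

T, K = 4, 3                              # chain length, number of labels (toy sizes)
rng = np.random.default_rng(0)
node = rng.normal(size=(T, K))           # unary scores theta_i(y), assumed random
edge = rng.normal(size=(T - 1, K, K))    # pairwise scores theta_{i,i+1}(y, y')

n_node, n_edge = T * K, (T - 1) * K * K
nv = lambda i, y: i * K + y                          # index of mu_i(y)
ev = lambda i, y, yp: n_node + (i * K + y) * K + yp  # index of mu_{i,i+1}(y, y')

A_eq, b_eq = [], []
for i in range(T):                       # normalization: sum_y mu_i(y) = 1
    row = np.zeros(n_node + n_edge)
    row[[nv(i, y) for y in range(K)]] = 1
    A_eq.append(row); b_eq.append(1.0)
for i in range(T - 1):                   # marginalization consistency with edges
    for y in range(K):                   # sum_{y'} mu_{i,i+1}(y, y') = mu_i(y)
        row = np.zeros(n_node + n_edge)
        row[[ev(i, y, yp) for yp in range(K)]] = 1
        row[nv(i, y)] = -1
        A_eq.append(row); b_eq.append(0.0)
    for yp in range(K):                  # sum_y mu_{i,i+1}(y, y') = mu_{i+1}(y')
        row = np.zeros(n_node + n_edge)
        row[[ev(i, y, yp) for y in range(K)]] = 1
        row[nv(i + 1, yp)] = -1
        A_eq.append(row); b_eq.append(0.0)

# Maximize total score  <=>  minimize its negation.
c = -np.concatenate([node.ravel(), edge.ravel()])
res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
mu_node = res.x[:n_node].reshape(T, K)
print("LP solution is integral:", np.allclose(mu_node, mu_node.round()))
print("MAP labeling:", mu_node.argmax(axis=1))
```

The same tightness is what a MAP-based training step can exploit: for totally unimodular structures such as linear chains or matchings, this LP (rather than partition-function computation) suffices for exact inference.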
Type
Publication
Advances in Neural Information Processing Systems (NeurIPS)