Carl Olsson
Mathematical Imaging Group, Centre for Mathematical Sciences
Lund Institute of Technology, Lund University
Rank priors are frequently employed for regularizing ill-posed linear inverse problems. Since they are both discontinuous and non-convex, they are often replaced with the nuclear norm. While the resulting formulation is easy to optimize, it is also known to suffer from a shrinking bias that can severely degrade the solution in the presence of noise.
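The shrinking bias is easy to see in the proximal operator of the nuclear norm, which soft-thresholds the singular values: every singular value that survives is also reduced by the threshold, so even the "signal" part of the spectrum is shrunk. A minimal sketch (the matrix sizes, threshold, and noise level are illustrative assumptions, not taken from the talk):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * nuclear norm.
    Each singular value is replaced by max(s - tau, 0)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
# ground truth: a rank-2 matrix, observed with a little noise
B = rng.standard_normal((20, 2))
C = rng.standard_normal((2, 20))
X = B @ C + 0.01 * rng.standard_normal((20, 20))

Y = svt(X, tau=1.0)
s_orig = np.linalg.svd(X, compute_uv=False)
s_svt = np.linalg.svd(Y, compute_uv=False)
# The two dominant (signal) singular values are each reduced by tau:
# this systematic reduction is the shrinking bias of the nuclear norm.
print(s_orig[:2])
print(s_svt[:2])
```

A hard rank penalty would keep the leading singular values untouched, which is exactly the behavior the non-convex regularizers discussed in the talk aim to recover.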
In this talk, we present a class of alternative non-convex regularization terms that do not suffer from the same bias. We show that if a restricted isometry property holds, then there is typically only one low-rank stationary point. To derive an efficient inference algorithm, we show that under a bilinear parameterization our regularization term can be well approximated by a quadratic function, which opens up the possibility of using second-order methods such as Levenberg-Marquardt or Variable Projection. We show on several real datasets that our approach outperforms current methods both in terms of relaxation quality and convergence speed.
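The bilinear parameterization replaces the matrix variable X with a product of two thin factors, X = B C^T, which turns the problem into a smooth nonlinear least-squares problem amenable to Levenberg-Marquardt. A minimal sketch of this idea, using SciPy's generic LM solver on an unregularized low-rank fit (the sizes, data, and the use of `scipy.optimize.least_squares` are illustrative assumptions; the talk's method additionally handles the quadratic approximation of the regularizer):

```python
import numpy as np
from scipy.optimize import least_squares

m, n, r = 10, 8, 2
rng = np.random.default_rng(1)
# exactly rank-r target matrix
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

def residuals(z):
    # unpack the bilinear factors B (m x r) and C (n x r); X = B @ C.T
    B = z[:m * r].reshape(m, r)
    C = z[m * r:].reshape(n, r)
    return (B @ C.T - M).ravel()

z0 = rng.standard_normal(m * r + n * r)
sol = least_squares(residuals, z0, method='lm')  # Levenberg-Marquardt
print(sol.cost)  # near zero: the rank-r factorization fits M exactly
```

The rank constraint is enforced implicitly by the factor widths, so the solver only sees a smooth objective; this is what makes second-order methods applicable.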