We present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that the algorithm's performance is more robust …
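For concreteness, here is a minimal sketch of the kind of update such a method performs, assuming the composite objective 0.5*||Ax - b||^2 + lam*||x||_1 and using the diagonal of the block Hessian as the approximate partial curvature information. All names (rbcd_l1, block_size, n_iters) are illustrative, not from the paper:

```python
# Sketch: randomized block coordinate descent with approximate partial
# curvature, for F(x) = 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def rbcd_l1(A, b, lam, block_size=10, n_iters=1000, seed=0):
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b                                         # maintained residual A x - b
    for _ in range(n_iters):
        S = rng.choice(n, size=block_size, replace=False)  # random block of coordinates
        g = A[:, S].T @ r                                  # block gradient
        # Approximate partial second-order info: diagonal of the block Hessian A_S^T A_S.
        d = np.einsum('ij,ij->j', A[:, S], A[:, S]) + 1e-12
        x_new = soft_threshold(x[S] - g / d, lam / d)      # scaled prox step on the block
        r += A[:, S] @ (x_new - x[S])                      # keep the residual in sync
        x[S] = x_new
    return x
```

Using curvature (here only its diagonal) to scale each block step is what makes such updates less sensitive to ill conditioning than a fixed global step size.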
Parallel computing has played an important role in speeding up convex optimization methods for big data analytics and large-scale machine learning (ML). However, the scalability of these optimization methods is inhibited by the cost of communicating …
Primal and dual block coordinate descent methods are iterative algorithms for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine …
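As a hedged, serial illustration of the dual side of this family, the sketch below applies coordinate ascent (blocks of size one) to the dual of ridge-regularized least squares; the paper's setting is distributed-memory and more general, and the names (dual_coordinate_ascent, n_epochs) are hypothetical:

```python
# Sketch: dual coordinate ascent for
# min_w (1/n) * sum_i 0.5*(a_i^T w - b_i)^2 + (lam/2)*||w||^2,
# maintaining the primal-dual mapping w = A^T alpha / (lam*n).
import numpy as np

def dual_coordinate_ascent(A, b, lam, n_epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    alpha = np.zeros(n)                      # one dual variable per sample
    w = np.zeros(d)                          # primal iterate induced by alpha
    sq_norms = np.einsum('ij,ij->i', A, A)   # ||a_i||^2, precomputed
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            # Closed-form maximizer of the 1-D dual subproblem in coordinate i.
            delta = (b[i] - A[i] @ w - alpha[i]) / (1.0 + sq_norms[i] / (lam * n))
            alpha[i] += delta
            w += delta * A[i] / (lam * n)    # keep w = A^T alpha / (lam*n) in sync
    return w, alpha
```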
We study the performance of first- and second-order optimization methods for l1-regularized sparse least-squares problems as the problem's conditioning changes and its dimensions increase up to one trillion. A rigorously defined …
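As a representative first-order method for this problem class, a minimal FISTA sketch is shown below; its step size 1/L depends on L = lambda_max(A^T A), which is one concrete way conditioning drives first-order performance. The code is illustrative only, not the paper's benchmark implementation:

```python
# Sketch: FISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1.
import numpy as np

def fista(A, b, lam, n_iters=500):
    n = A.shape[1]
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(n)
    y = x.copy()
    t = 1.0
    for _ in range(n_iters):
        g = A.T @ (A @ y - b)                # gradient of the smooth part at y
        z = y - g / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # prox of lam*||.||_1
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)               # momentum extrapolation
        x, t = x_new, t_new
    return x
```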
In this paper a robust second-order method is developed for solving strongly convex l1-regularized problems. The main aim is to make the proposed method as inexpensive as possible while retaining the ability to solve even difficult problems efficiently. The …
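One common device that makes Newton-type steps available for such nondifferentiable problems is to smooth the l1 term with a pseudo-Huber approximation; the paper's exact construction may differ, but the generic form, with mu > 0 controlling approximation quality and psi_mu approaching ||x||_1 as mu tends to 0, is:

```latex
\psi_\mu(x) = \sum_{i=1}^{n} \left( \sqrt{\mu^2 + x_i^2} - \mu \right),
\qquad
\nabla^2 \psi_\mu(x) = \mathrm{diag}\!\left( \frac{\mu^2}{(\mu^2 + x_i^2)^{3/2}} \right)_{i=1}^{n}.
```

The explicit gradient and Hessian of the smoothed term are what allow inexpensive (e.g., conjugate-gradient-based) Newton steps.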
In this paper we are concerned with the solution of compressed sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend the primal-dual Newton Conjugate Gradient method (pdNCG) in [T. F. …
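One standard way to pose CS recovery when signals are sparse in a coherent, redundant dictionary W is the l1-analysis formulation below, where W^* x are the analysis coefficients and c > 0 weights the regularizer; the paper's exact formulation and notation may differ:

```latex
\min_{x} \; c\,\|W^{*} x\|_1 \;+\; \tfrac{1}{2}\,\|A x - b\|_2^2
```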