Numerical Optimization

A Flexible Coordinate Descent Method

We present a novel randomized block coordinate descent method for the minimization of a convex composite objective function. The method uses (approximate) partial second-order (curvature) information, so that the algorithm's performance is more robust …
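
As a rough illustration of the general idea (this is not the paper's algorithm), the sketch below runs randomized proximal coordinate descent on a lasso-type composite objective, using a per-coordinate curvature estimate (here the exact diagonal of A^T A) to set the step length. The lasso instance and all function and variable names are illustrative assumptions.

```python
import numpy as np

def rand_prox_cd(A, b, lam, iters=5000, seed=0):
    """Illustrative sketch only (not the method from the paper above).
    Randomized proximal coordinate descent for
    min_x 0.5*||A x - b||^2 + lam*||x||_1  (a simple composite objective).
    The coordinate-wise curvature L_i = ||A[:, i]||^2 plays the role of
    (approximate) partial second-order information."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    r = -b                          # residual r = A x - b
    L = np.sum(A * A, axis=0)       # diagonal of A^T A
    for _ in range(iters):
        i = rng.integers(n)
        g = A[:, i] @ r                                      # partial gradient of the smooth part
        z = x[i] - g / L[i]                                  # curvature-scaled coordinate step
        x_new = np.sign(z) * max(abs(z) - lam / L[i], 0.0)   # soft-threshold (prox of the l1 term)
        r += A[:, i] * (x_new - x[i])                        # keep the residual up to date
        x[i] = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 400))
    x_true = np.zeros(400); x_true[:10] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    print("nonzeros recovered:", np.sum(np.abs(rand_prox_cd(A, b, lam=0.1)) > 1e-3))
```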

Avoiding Synchronization in First-Order Methods for Sparse Convex Optimization

Parallel computing has played an important role in speeding up convex optimization methods for big data analytics and large-scale machine learning (ML). However, the scalability of these optimization methods is inhibited by the cost of communicating …

Variational Perspective on Local Graph Clustering

Modern graph clustering applications require the analysis of large graphs, which can be computationally expensive. In this regard, local spectral graph clustering methods aim to identify well-connected clusters around a given 'seed set' of …
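
For context (and not as the paper's variational formulation), the sketch below shows the classic 'push' routine for approximate personalized PageRank, a standard local spectral clustering primitive that explores the graph only around the seed set; all names and parameters are illustrative assumptions.

```python
from collections import deque

def approx_ppr_push(adj, seeds, alpha=0.15, eps=1e-4):
    """Illustrative sketch only. Approximate personalized PageRank via the
    classic 'push' procedure; only nodes near the seed set are ever touched,
    which is what makes the method local.
    adj: dict mapping node -> set of neighbours (undirected, unweighted)."""
    deg = {u: len(adj[u]) for u in adj}
    p = {}                                       # approximate PageRank values
    r = {s: 1.0 / len(seeds) for s in seeds}     # residual mass, starts on the seeds
    queue = deque(s for s in seeds if r[s] >= eps * deg[s])
    while queue:
        u = queue.popleft()
        ru = r.get(u, 0.0)
        if ru < eps * deg[u]:
            continue
        p[u] = p.get(u, 0.0) + alpha * ru        # move mass into the estimate
        r[u] = (1.0 - alpha) * ru / 2.0          # lazy walk: keep half of the rest
        share = (1.0 - alpha) * ru / (2.0 * deg[u])
        for v in adj[u]:
            r[v] = r.get(v, 0.0) + share         # spread the other half to neighbours
            if r[v] >= eps * deg[v]:
                queue.append(v)
        if r[u] >= eps * deg[u]:
            queue.append(u)
    return p

if __name__ == "__main__":
    # two triangles joined by a single edge; seed inside the first triangle
    edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
    adj = {u: set() for u in range(6)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    p = approx_ppr_push(adj, seeds=[0])
    # a sweep over nodes ranked by p[u]/deg[u] would recover the seed's triangle
    print(sorted(p, key=lambda u: -p[u] / len(adj[u])))
```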

Avoiding Communication in Primal and Dual Block Coordinate Descent Methods

Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine …
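
As a single-machine illustration of a dual coordinate method (the communication-avoiding reorganization studied in the paper is not shown), the sketch below applies randomized dual coordinate ascent to ridge regression, keeping one dual variable per sample and maintaining the primal iterate alongside; the ridge instance and all names are assumptions.

```python
import numpy as np

def dual_cd_ridge(X, y, lam, epochs=50, seed=0):
    """Illustrative sketch only. Randomized dual coordinate ascent for ridge
    regression  min_w 0.5*||X w - y||^2 + 0.5*lam*||w||^2.
    One dual variable per sample; the primal iterate w = X^T beta / lam is
    kept in sync, so each update touches a single row of X."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    beta = np.zeros(m)                   # dual variables (one per data point)
    w = np.zeros(n)                      # primal iterate, w = X^T beta / lam
    row_sq = np.sum(X * X, axis=1)       # squared row norms
    for _ in range(epochs):
        for i in rng.permutation(m):
            grad_i = y[i] - beta[i] - X[i] @ w        # dual partial derivative
            delta = grad_i / (1.0 + row_sq[i] / lam)  # exact coordinate maximization
            beta[i] += delta
            w += (delta / lam) * X[i]                 # keep the primal in sync
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    X = rng.standard_normal((300, 50))
    y = X @ rng.standard_normal(50) + 0.1 * rng.standard_normal(300)
    w = dual_cd_ridge(X, y, lam=1.0)
    w_exact = np.linalg.solve(X.T @ X + np.eye(50), X.T @ y)
    print("distance to closed form:", np.linalg.norm(w - w_exact))
```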

Performance of First- and Second-Order Methods for L1-Regularized Least Squares Problems

We study the performance of first- and second-order optimization methods for l1-regularized sparse least-squares problems as the conditioning of the problem changes and the problem dimension increases up to one trillion. A rigorously defined …
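
For reference, a minimal first-order baseline for this problem class is proximal gradient (ISTA); the sketch below is illustrative only and is not one of the solvers benchmarked in the paper. Its iteration count is what degrades as the conditioning of A^T A worsens, which is the regime where second-order methods become attractive.

```python
import numpy as np

def ista(A, b, lam, iters=1000):
    """Illustrative sketch only. Proximal gradient (ISTA) for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2                               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - b) / L                           # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # prox of lam*||.||_1
    return x
```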

A Second-Order Method for Strongly-Convex L1-Regularization Problems

In this paper a robust second-order method is developed for the solution of strongly convex l1-regularized problems. The main aim is to keep the proposed method as inexpensive as possible, while still solving even difficult problems efficiently. The …
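
A common way to build such a second-order method is to smooth the l1 term and apply an inexact Newton scheme; the sketch below (pseudo-Huber smoothing plus a CG-based Newton direction and backtracking line search, on a least-squares instance for concreteness) is a hedged illustration of that pattern, not the paper's exact algorithm, and all parameter choices are assumptions. Note that CG only needs Hessian-vector products, so the Hessian is never formed.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def newton_cg_l1(A, b, tau, mu=1e-3, newton_iters=30):
    """Illustrative sketch only. Inexact Newton-CG for
    min_x 0.5*||A x - b||^2 + tau*phi_mu(x), where
    phi_mu(x) = sum_i (sqrt(mu^2 + x_i^2) - mu) is a pseudo-Huber
    smoothing of ||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(newton_iters):
        r = A @ x - b
        s = np.sqrt(mu**2 + x**2)
        grad = A.T @ r + tau * x / s
        if np.linalg.norm(grad) < 1e-8:
            break
        d2 = tau * mu**2 / s**3                       # diagonal curvature of the smoothed l1 term
        H = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + d2 * v)
        p, _ = cg(H, -grad, maxiter=50)               # inexact Newton direction
        # backtracking (Armijo) line search on the smoothed objective
        f = 0.5 * r @ r + tau * np.sum(s - mu)
        t = 1.0
        while True:
            x_new = x + t * p
            r_new = A @ x_new - b
            f_new = 0.5 * r_new @ r_new + tau * np.sum(np.sqrt(mu**2 + x_new**2) - mu)
            if f_new <= f + 1e-4 * t * (grad @ p) or t < 1e-8:
                break
            t *= 0.5
        x = x_new
    return x
```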

A Preconditioner for a Primal-dual Newton Conjugate Gradients Method for Compressed Sensing Problems

In this paper we are concerned with the solution of compressed sensing (CS) problems where the signals to be recovered are sparse in coherent and redundant dictionaries. We extend the primal-dual Newton Conjugate Gradient method (pdNCG) in [T. F. …
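
The snippet below only illustrates where a preconditioner enters a Newton-CG inner solve; it is not the preconditioner proposed in the paper. It assumes a Hessian of the form A^T A + diag(d2) and uses P = rho*I + diag(d2), which is cheap to apply and captures most of the curvature when A^T A is close to a multiple of the identity, as for many compressed sensing operators; all names are assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def preconditioned_newton_direction(A, d2, grad, rho=1.0, maxiter=50):
    """Illustrative sketch only. Inner CG solve of H p = -grad with
    H = A^T A + diag(d2), preconditioned by P = rho*I + diag(d2);
    applying P^{-1} costs O(n)."""
    n = A.shape[1]
    H = LinearOperator((n, n), matvec=lambda v: A.T @ (A @ v) + d2 * v)
    M = LinearOperator((n, n), matvec=lambda v: v / (rho + d2))   # action of P^{-1}
    p, info = cg(H, -grad, M=M, maxiter=maxiter)
    return p, info
```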

Matrix-Free Interior Point Method for Compressed Sensing Problems

We consider a class of optimization problems for sparse signal reconstruction which arise in the field of compressed sensing (CS). A plethora of approaches and solvers exist for such problems, for example GPSR, FPC AS, SPGL1, NestA, l1-ls, PDCO, to …
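
To make "matrix-free" concrete (this is a generic sketch, not the operators or solver from the paper): the measurement matrix is represented only through its forward and adjoint actions, here a subsampled orthonormal DCT, so an iterative solver never needs the explicit matrix. The operator construction and all names below are assumptions.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.sparse.linalg import LinearOperator

def partial_dct_operator(n, idx):
    """Illustrative sketch only. Matrix-free measurement operator A = S*Phi,
    where Phi is the orthonormal DCT and S keeps the rows listed in idx.
    Only forward and adjoint actions are implemented; the matrix itself is
    never formed."""
    def matvec(x):
        return dct(x, norm='ortho')[idx]             # A x
    def rmatvec(y):
        z = np.zeros(n)
        z[idx] = y
        return idct(z, norm='ortho')                 # A^T y
    return LinearOperator((len(idx), n), matvec=matvec, rmatvec=rmatvec)

if __name__ == "__main__":
    n, m = 4096, 512
    rng = np.random.default_rng(3)
    idx = rng.choice(n, m, replace=False)
    A = partial_dct_operator(n, idx)
    x = np.zeros(n); x[rng.choice(n, 20, replace=False)] = 1.0
    b = A.matvec(x)                                  # measurements via the forward action only
    print(b.shape, np.linalg.norm(A.rmatvec(b)))
```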