PhD Defence • Algorithms and Complexity • Geodesic Convex Analysis of Group Scaling for the Paulsen Problem and Tensor Normal Model

Tuesday, October 12, 2021, 1:30 pm EDT (GMT -04:00)

Please note: This PhD defence will be given online.

Akshay Ramachandran, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Lap Chi Lau

The framework of scaling problems has recently attracted significant interest in the theoretical computer science community due to its variety of applications, from algebraic complexity to machine learning. In this thesis, our main motivation comes from two further applications: the Paulsen problem from frame theory, and the tensor normal model in statistical estimation. To give new results for these problems, we provide novel convergence analyses for matrix scaling and tensor scaling. Specifically, we use the framework of geodesic convex optimization presented in Bürgisser et al. and analyze two sufficient conditions (called strong convexity and pseudorandomness) for fast convergence of the natural gradient flow algorithm in this setting.
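For readers unfamiliar with scaling problems, the simplest instance is classical matrix scaling, solvable by Sinkhorn's alternating iteration; the geodesic convex analyses in the thesis concern far-reaching generalizations of this idea. A minimal illustrative sketch (the function name and iteration count here are my own, not from the thesis):

```python
import numpy as np

def sinkhorn(A, iters=500):
    """Classical Sinkhorn iteration for matrix scaling: alternately
    normalize the rows and columns of a positive matrix until it is
    (approximately) doubly stochastic."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        A /= A.sum(axis=1, keepdims=True)  # make each row sum to 1
        A /= A.sum(axis=0, keepdims=True)  # make each column sum to 1
    return A

B = sinkhorn([[2.0, 1.0], [1.0, 3.0]])
```

For a strictly positive input this iteration converges to a doubly stochastic matrix of the form D1·A·D2 with D1, D2 diagonal; the natural gradient flow studied in the thesis can be viewed as a continuous-time analogue of such alternating updates in a much more general setting.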

In the first half of the thesis, we focus on the Paulsen problem, where we are given a set of $n$ vectors in $d$ dimensions that $\varepsilon$-approximately satisfy two balance conditions, and are asked whether there is a nearby set of vectors that exactly satisfies those balance conditions. We give optimal distance bounds for the Paulsen problem in both the worst case and the average case by improving the smoothed analysis approach of Kwok et al. Specifically, we analyze certain strong convergence conditions for frame scaling, and then show that a random perturbation of the input frame satisfies these conditions and can be scaled to a nearby solution.
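Concretely, the two balance conditions ask that the vectors form a Parseval frame ($VV^T = I_d$ when the vectors are the columns of $V$) and have equal squared norms $d/n$. A small sketch measuring how far a given frame is from satisfying both conditions (function and variable names are illustrative, not from the thesis):

```python
import numpy as np

def balance_errors(V):
    """For V of shape (d, n) whose columns are the frame vectors,
    return (spectral deviation of V V^T from the identity,
            worst deviation of a squared column norm from d/n)."""
    d, n = V.shape
    parseval_err = np.linalg.norm(V @ V.T - np.eye(d), ord=2)
    sq_norms = (V ** 2).sum(axis=0)
    equal_norm_err = np.abs(sq_norms - d / n).max()
    return parseval_err, equal_norm_err

# The "Mercedes-Benz" frame: three equiangular vectors in the plane,
# scaled so both balance conditions hold exactly.
angles = 2 * np.pi * np.arange(3) / 3
V = np.sqrt(2 / 3) * np.vstack([np.cos(angles), np.sin(angles)])
pe, ee = balance_errors(V)
```

A frame with both errors at most $\varepsilon$ is an $\varepsilon$-approximate input to the Paulsen problem; the thesis bounds the distance to an exactly balanced frame.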

In the second half of the thesis, we study the matrix and tensor normal models, a family of Gaussian distributions on tensor data whose covariance matrix respects the tensor product structure. We generalize our scaling results to higher-order tensors and give strong error bounds for the maximum likelihood estimator (MLE) of the tensor normal model when the number of samples is slightly above the existence threshold. This result relies on spectral properties of random Gaussian tensors shown by Pisier. We also give a rigorous analysis of the Flip-Flop algorithm, showing that it converges linearly to the MLE with high probability.
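For the two-factor (matrix normal) case, the Flip-Flop algorithm alternately re-estimates each covariance factor by maximum likelihood while holding the other fixed. A hedged sketch of these standard alternating updates (the identity initialization and fixed iteration count are assumptions of this sketch; note the factors are only identifiable up to a reciprocal scaling, while their Kronecker product is identifiable):

```python
import numpy as np

def flip_flop(X, iters=50):
    """Flip-Flop updates for the matrix normal model.
    X has shape (m, d1, d2): m samples with separable covariance
    Sigma1 (x) Sigma2.  Each step is the exact MLE for one factor
    given the current estimate of the other."""
    m, d1, d2 = X.shape
    S1, S2 = np.eye(d1), np.eye(d2)
    for _ in range(iters):
        S2inv = np.linalg.inv(S2)
        S1 = sum(Xi @ S2inv @ Xi.T for Xi in X) / (m * d2)
        S1inv = np.linalg.inv(S1)
        S2 = sum(Xi.T @ S1inv @ Xi for Xi in X) / (m * d1)
    return S1, S2

# Sanity check on white-noise data: the recovered Kronecker product
# should be close to the identity covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3, 2))
S1, S2 = flip_flop(X)
K = np.kron(S1, S2)
```

The thesis's contribution is a rigorous convergence analysis of this kind of alternating scheme, showing linear convergence to the MLE with high probability near the existence threshold.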


To join this PhD defence on Zoom, please go to https://uwaterloo.zoom.us/j/92646842576?pwd=MDRMQWRRalF0Q0xESkozMGlWR1YzQT09.