PhD Defence • Algorithms and Complexity • Geodesic Convex Analysis of Group Scaling for the Paulsen Problem and Tensor Normal Model

Tuesday, October 12, 2021 1:30 PM EDT

Please note: This PhD defence will be given online.

Akshay Ramachandran, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Lap Chi Lau

The framework of scaling problems has recently attracted much interest in the theoretical computer science community due to its wide variety of applications, from algebraic complexity to machine learning. In this thesis, our main motivation is two further applications: the Paulsen problem from frame theory, and the tensor normal model in statistical estimation. In order to give new results for these problems, we provide novel convergence analyses for matrix scaling and tensor scaling. Specifically, we use the framework of geodesic convex optimization presented in Bürgisser et al. and analyze two sufficient conditions (called strong convexity and pseudorandomness) for fast convergence of the natural gradient flow algorithm in this setting.
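
As a concrete entry point to this family of problems, the sketch below shows classical matrix scaling via the standard Sinkhorn iteration: given a positive matrix, alternately normalize its rows and columns until it is (approximately) doubly stochastic. This is only an illustration of the simplest scaling problem, not the geodesic gradient flow analyzed in the thesis.

```python
import numpy as np

def sinkhorn(A, iters=500, tol=1e-12):
    """Scale a positive matrix A toward doubly stochastic form by
    alternately normalizing its rows and columns (Sinkhorn iteration)."""
    B = A.astype(float).copy()
    for _ in range(iters):
        B /= B.sum(axis=1, keepdims=True)   # make every row sum to 1
        B /= B.sum(axis=0, keepdims=True)   # make every column sum to 1
        # stop once the row sums are also (numerically) uniform
        if np.abs(B.sum(axis=1) - 1.0).max() < tol:
            break
    return B

B = sinkhorn(np.random.rand(4, 4))
print(B.sum(axis=0), B.sum(axis=1))  # both close to all-ones vectors
```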

In the first half of the thesis, we focus on the Paulsen problem: given a set of $n$ vectors in $d$ dimensions that $\varepsilon$-approximately satisfy two balance conditions, is there a nearby set of vectors that exactly satisfies those conditions? We give optimal distance bounds for the Paulsen problem in both the worst case and the average case by improving the smoothed analysis approach of Kwok et al. Specifically, we analyze certain strong convergence conditions for frame scaling, and then show that a random perturbation of the input frame satisfies these conditions and can be scaled to a nearby solution.
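
For concreteness, the two balance conditions say that the vectors $u_1, \dots, u_n \in \mathbb{R}^d$ form an equal-norm Parseval frame: $\sum_i u_i u_i^T = I_d$ and $\|u_i\|^2 = d/n$ for all $i$. The sketch below measures the error in these conditions and runs a naive alternating rebalancing; this is a heuristic stand-in to illustrate the setup, not the frame scaling algorithm analyzed in the thesis.

```python
import numpy as np

def balance_error(U):
    """Deviation of the columns u_1..u_n of the d x n matrix U from an
    equal-norm Parseval frame: sum_i u_i u_i^T = I_d and ||u_i||^2 = d/n."""
    d, n = U.shape
    frame_op = U @ U.T                                  # sum of outer products
    parseval_err = np.linalg.norm(frame_op - np.eye(d), 2)
    norm_err = np.abs((U**2).sum(axis=0) - d / n).max()
    return max(parseval_err, norm_err)

def naive_rebalance(U, iters=100):
    """Alternately enforce each condition: rescale columns to equal norm,
    then apply the inverse square root of the frame operator.
    A hypothetical illustration; assumes the frame operator stays invertible."""
    d, n = U.shape
    V = U.copy()
    for _ in range(iters):
        V *= np.sqrt(d / n) / np.linalg.norm(V, axis=0)  # equal norms
        w, Q = np.linalg.eigh(V @ V.T)                   # frame operator S
        V = Q @ np.diag(w**-0.5) @ Q.T @ V               # S^{-1/2} V: Parseval
    return V

U = np.random.randn(4, 12)
print(balance_error(U), balance_error(naive_rebalance(U)))
```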

In the second half of the thesis, we study the matrix and tensor normal models, a family of Gaussian distributions on tensor data whose covariance matrix respects the tensor product structure. We generalize our scaling results to higher-order tensors and give strong error bounds for the maximum likelihood estimator (MLE) of the tensor normal model when the number of samples is slightly above the existence threshold. This result relies on spectral properties of random Gaussian tensors shown by Pisier. We also give a rigorous analysis of the Flip-Flop algorithm, showing that it converges linearly to the MLE with high probability.
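
As an illustration of the two-factor (matrix normal) case, below is a minimal sketch of the classical Flip-Flop iteration, assuming samples $X_1, \dots, X_m \in \mathbb{R}^{d_1 \times d_2}$ with covariance $\Sigma_1 \otimes \Sigma_2$: each step fixes one factor and sets the other to the resulting conditional MLE. The tensor version analyzed in the thesis cycles through all modes in the same way.

```python
import numpy as np

def flip_flop(X, iters=50):
    """Flip-Flop iteration for the matrix normal model. X has shape
    (m, d1, d2); each update is the exact MLE of one Kronecker factor
    holding the other fixed. Note the factors are identifiable only up
    to a scalar (c * Sigma1, Sigma2 / c gives the same model)."""
    m, d1, d2 = X.shape
    S1, S2 = np.eye(d1), np.eye(d2)
    for _ in range(iters):
        S2_inv = np.linalg.inv(S2)
        S1 = sum(Xi @ S2_inv @ Xi.T for Xi in X) / (m * d2)
        S1_inv = np.linalg.inv(S1)
        S2 = sum(Xi.T @ S1_inv @ Xi for Xi in X) / (m * d1)
    return S1, S2

# With i.i.d. standard Gaussian samples, both factors should be
# close to (a scalar multiple of) the identity.
S1, S2 = flip_flop(np.random.randn(50, 5, 6))
```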


To join this PhD defence on Zoom, please go to https://uwaterloo.zoom.us/j/92646842576?pwd=MDRMQWRRalF0Q0xESkozMGlWR1YzQT09.

Location
Online PhD defence
200 University Avenue West
Waterloo, ON N2L 3G1
Canada
