Recent PhD graduate Akshay Ramachandran awarded 2022 Cheriton Distinguished Dissertation Award and 2022 Mathematics Doctoral Prize

Tuesday, April 26, 2022

Akshay Ramachandran has received two awards for his doctoral research — the 2022 Cheriton Distinguished Dissertation Award as well as second place in the 2022 Mathematics Doctoral Prize for his thesis titled “Geodesic convex analysis of group scaling for the Paulsen problem and tensor normal model.”

The Cheriton Distinguished Dissertation Award was established in 2019 to recognize excellence in computer science doctoral research at the School of Computer Science. In addition to the recognition, recipients of the award receive a cash prize of $1,000. The Mathematics Doctoral Prize, also conferred annually, recognizes the achievements of the top three graduated doctoral students in the Faculty of Mathematics. As a second-place winner of this award, Akshay will receive a $1,000 prize.

About Akshay Ramachandran and his research

Akshay was a PhD student in the Cheriton School of Computer Science’s Algorithms and Complexity group from August 2016 to October 2021, advised by Professor Lap Chi Lau.

“Akshay’s thesis is a very significant piece of work,” said Professor Lau. “The problem is important, the result is optimal, the ideas are creative, the techniques are broad and deep, the final result is entirely on his own, and his dedication is phenomenal.”

L to R: Akshay Ramachandran and his PhD advisor, Cheriton School of Computer Science Professor Lap Chi Lau.

Akshay’s thesis presented two applications of matrix and tensor scaling methods. The framework of scaling problems has stimulated much interest in the theoretical computer science community because it has a variety of applications ranging from algebraic complexity to machine learning. Akshay first considered a basic open problem in frame theory known as the Paulsen problem, named after Pure Mathematics Professor Vern Paulsen, who coincidentally joined Waterloo’s Faculty of Mathematics around the time Akshay had begun his PhD studies.

The Paulsen problem, which had stymied mathematicians for two decades, asks whether every set of n vectors in d dimensions that forms an epsilon-nearly equal norm Parseval frame is close to an exact equal norm Parseval frame. The conjecture is that the distance between an epsilon-nearly equal norm Parseval frame and an exact equal norm Parseval frame is independent of the number of vectors n. This conjecture underpins a numerical approach to generating equal norm Parseval frames, important objects in frame theory with various applications.
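To make the two conditions concrete, here is a minimal numpy sketch (not code from the thesis; the function name and the example frame are illustrative). A set of n vectors in d dimensions is a Parseval frame when the sum of their outer products equals the d-by-d identity, and it is equal norm when every vector has squared norm d/n; the sketch measures how far a given set is from each condition.

```python
import numpy as np

def frame_errors(V):
    """Given an n x d array whose rows are the frame vectors, return
    (parseval_err, norm_err): the operator-norm distance of the frame
    operator sum_i v_i v_i^T from the identity, and the largest deviation
    of a squared vector norm from the common value d/n."""
    n, d = V.shape
    S = V.T @ V                                  # frame operator as a d x d matrix
    parseval_err = np.linalg.norm(S - np.eye(d), ord=2)
    norms = np.sum(V**2, axis=1)
    norm_err = np.max(np.abs(norms - d / n))
    return parseval_err, norm_err

# An exact equal norm Parseval frame: the rows of any n x d matrix with
# orthonormal columns form a Parseval frame, and this particular choice
# (n = 4 vectors in d = 2 dimensions) also has equal norms d/n = 1/2.
V = np.array([[1, 1], [1, -1], [1, 1], [1, -1]], dtype=float) / 2.0
pe, ne = frame_errors(V)   # both errors are (numerically) zero here
```

A set with both errors at most epsilon is an epsilon-nearly equal norm Parseval frame; the Paulsen problem asks how far such a set can be from one where both errors are exactly zero.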

Akshay began working on this problem when he started his PhD studies. After more than a year of work, he, Professor Lau and their colleagues, Tsz Chiu Kwok at Waterloo and Yin Tat Lee at the University of Washington, proved that the distance is independent of the number of vectors n. The paper was accepted for presentation at STOC 2018, one of two top conferences in theoretical computer science. Further work on this problem resulted in an additional paper titled Spectral analysis of matrix scaling and operator scaling, which was accepted at FOCS 2019, the other top conference in theoretical computer science.

In the first part of his thesis, Akshay presented a principled framework, based on geodesic convexity, to improve and simplify these two results and obtain the optimal bound for the Paulsen problem. This deep result drew on a broad range of techniques and years of sustained effort.

In the second part of his thesis, Akshay considered the tensor normal model, research conducted jointly with his collaborators Professors Cole Franks at MIT, Rafael Oliveira at the University of Waterloo, and Michael Walter at Centrum Wiskunde & Informatica.

The problem they tackled is the following: given samples from a D-dimensional Gaussian whose covariance has a tensor structure, can the model be estimated more efficiently by exploiting that structure? Akshay and his colleagues generalized their improved scaling results to higher-order tensors and gave error bounds for the maximum likelihood estimator of the tensor normal model. Their sample complexity bounds are only a single dimension factor greater than necessary for the maximum likelihood estimator to exist at all. They also explained why the well-studied flip-flop algorithm performs well in practice by giving the first rigorous analysis of this algorithm, showing that it converges exponentially fast to the maximum likelihood estimator with high probability. This work resulted in a paper titled Near optimal sample complexity for matrix and tensor normal models via geodesic convexity.
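To illustrate the flip-flop idea in its simplest (matrix, i.e., order-2) case, here is a numpy sketch, not code from the paper: the function name, iteration count, and simulated data are all illustrative. In the matrix normal model each p-by-q sample has covariance A ⊗ B, and the flip-flop algorithm alternates the two maximum-likelihood updates, estimating A with B held fixed and vice versa.

```python
import numpy as np

def flip_flop(X, iters=50):
    """Alternating MLE updates for the matrix normal model (illustrative
    sketch). X has shape (n, p, q), with each p x q sample drawn from a
    zero-mean Gaussian whose covariance is the Kronecker product A (x) B."""
    n, p, q = X.shape
    A, B = np.eye(p), np.eye(q)
    for _ in range(iters):
        Binv = np.linalg.inv(B)
        A = sum(x @ Binv @ x.T for x in X) / (n * q)   # update A given B
        Ainv = np.linalg.inv(A)
        B = sum(x.T @ Ainv @ x for x in X) / (n * p)   # update B given A
    return A, B

# Usage: simulate n = 500 samples with A = diag(1, 4) and B = I_3,
# then run flip-flop. Note A (x) B only determines A and B up to a
# scalar (A -> cA, B -> B/c), so estimates are compared up to scale.
rng = np.random.default_rng(0)
A_true = np.diag([1.0, 4.0])
n, p, q = 500, 2, 3
X = np.einsum('ij,njq->niq', np.sqrt(A_true), rng.standard_normal((n, p, q)))
A_hat, B_hat = flip_flop(X)   # A_hat's diagonal ratio should be near 4
```

The algorithm is simple to state, but until this work there was no rigorous explanation of why it converges so reliably; the paper's analysis via geodesic convexity supplies that explanation for the general tensor case.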

Akshay is currently a postdoctoral researcher at the University of Amsterdam and Centrum Wiskunde & Informatica, the national research institute for mathematics and computer science in the Netherlands, where he collaborates with Professors Daniel Dadush and Michael Walter.
