Cheriton School of Computer Science students receive 2022 CRA Outstanding Undergraduate Researcher Awards

Monday, January 10, 2022

Four students at the Cheriton School of Computer Science are recipients of the Computing Research Association’s 2022 Outstanding Undergraduate Researcher Awards. The annual CRA awards program recognizes undergraduate students from universities across North America who have distinguished themselves by conducting exceptional research in an area of computer science.

Nicholas Vadivelu was one of four undergraduates in the runner-up category, Kelvin Jiang was among the finalists, and Sourav Biswas and Vikram Subramanian each received an honourable mention for their research.

“Congratulations to Nicholas, Kelvin, Sourav and Vikram on receiving these recognitions from the Computing Research Association,” said Raouf Boutaba, Professor and Director of the Cheriton School of Computer Science. “It’s wonderful that four of our undergraduates have been recognized among the top computer science research-oriented students attending universities across Canada and the United States.”

The Computing Research Association’s Outstanding Undergraduate Researcher Award program is sponsored in alternate years by Microsoft Research and Mitsubishi Electric Research Labs. This year, Mitsubishi Electric Research Labs was the sponsor.

About Nicholas Vadivelu’s research

Nicholas Vadivelu is a computer science student who conducts research with Professor Gautam Kamath on differential privacy, a rigorous notion of data privacy that limits the effects of adding or deleting a user’s data from a dataset. Such privacy is critically important when processing sensitive user data, an individual’s personal and confidential digital information that can range from employment history to financial transactions to medical records.
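Formally, a randomized algorithm M is (ε, δ)-differentially private if, for any two datasets D and D′ that differ in a single user’s record and any set S of possible outputs:

```latex
\Pr[M(D) \in S] \;\le\; e^{\varepsilon} \, \Pr[M(D') \in S] + \delta
```

A small ε guarantees that an observer of the output can barely tell whether any one individual’s data was included.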

Nicholas contributed significantly to differentially private machine learning by employing frameworks that dramatically increase the speed at which Differentially Private Stochastic Gradient Descent — or DPSGD — is executed. DPSGD is the main algorithm for training differentially private models, ones that operate on a dataset but do not allow any individual data point within them to be identified. DPSGD, however, is slow and memory intensive, which has impeded scale-up and limited its use in practice. 

Working with master’s student Pranav Subramani and Professor Kamath, Nicholas co-led a project to accelerate DPSGD. His key contribution was explicit vectorization and just-in-time compilation to create a state-of-the-art implementation that was up to 50 times faster than the best alternatives. Nicholas then integrated this implementation into two open-source libraries to make his contributions broadly available.
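To see what is being accelerated, here is a rough NumPy sketch of a single DP-SGD step for linear regression: compute every per-example gradient in one vectorized pass, clip each to an L2 norm bound, average, and add Gaussian noise. The model, names, and parameter values are illustrative only, not the paper’s code, which relies on vectorizing and just-in-time-compiling this kind of step.

```python
import numpy as np

def dp_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_mult=1.1, rng=None):
    """One illustrative DP-SGD step for least-squares linear regression.

    Per-example gradients are computed in a single vectorized pass,
    each clipped to L2 norm `clip`, summed, and perturbed with Gaussian
    noise of scale `noise_mult * clip` before the averaged update.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = X.shape[0]
    residuals = X @ w - y                          # shape (n,)
    grads = residuals[:, None] * X                 # per-example gradients, (n, d)
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1.0, norms / clip)  # clip each grad to norm <= clip
    noisy = grads.sum(axis=0) + rng.normal(0.0, noise_mult * clip, size=w.shape)
    return w - lr * noisy / n

# Toy usage: recover the weights of a noiseless linear model.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = np.zeros(3)
for _ in range(200):
    w = dp_sgd_step(w, X, y, rng=rng)
```

A naive implementation loops over examples to clip each gradient separately, which is why unvectorized DP-SGD is so much slower than ordinary SGD.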

Nicholas is a co-first author of a paper titled Enabling Fast Differentially Private SGD via Just-in-Time Compilation and Vectorization, which was accepted to NeurIPS 2021, the premier conference on machine learning. Since the paper was published, differential privacy researchers have commented on the usefulness and importance of the result. His contribution to differential privacy has already influenced the practitioner community and has the potential to shape the practice of private machine learning in the years to come.

During an earlier internship at Uber ATG, Nicholas was an integral member of a research team that developed an end-to-end learned neural reasoning framework for self-driving vehicles. Autonomous vehicles learn from shared sensory information, which makes them safer to operate, but sharing information across nearby self-driving vehicles can also make them susceptible to erroneous messages.

During his internship at Uber ATG on a project conducted with University of Toronto doctoral candidate Mengye Ren and Professor Raquel Urtasun, Nicholas developed a novel end-to-end learned neural reasoning framework that learns to estimate localization errors, communicate those errors to other vehicles, and reach a consensus about the errors. This framework has allowed a state-of-the-art collaborative self-driving neural network — called V2VNet — to perform detection and forecasting with almost no decline in performance in the presence of noise levels much greater than those typically encountered in real self-driving vehicle systems. Nicholas’s research was published in a paper titled Learning to Communicate and Correct Pose Errors, and presented at CoRL 2020, the 4th Annual Conference on Robot Learning.

In addition to his research activities, Nicholas leads Waterloo’s Data Science Club and mentors junior students through the Tech+ Mentorship organization.

About Kelvin Jiang’s research

Kelvin Jiang is a computer science student who worked as an undergraduate research assistant in Professor Jimmy Lin’s research group.

Kelvin’s research investigated techniques for fact verification. With the large amount of information being generated daily on the Internet — particularly on social media — it has become all but impossible to factually verify all content manually, which has motivated research into automated fact verification. Kelvin’s work focused on a benchmark called the Fact Extraction and VERification — or FEVER — dataset that is used widely by the natural language processing research community.

Kelvin’s approach leveraged pretrained sequence-to-sequence transformer models for sentence selection and label prediction, two key sub-tasks in fact verification. The work took advantage of the transformer-based model T5, using a listwise approach coupled with data augmentation. With this enhancement, the label prediction stage is more robust to noise and capable of verifying complex claims by jointly reasoning over multiple pieces of evidence. At the time, this approach took the top spot on the competitive FEVER leaderboard, representing the best-known system for this task. Kelvin was first author of a research paper on this work, titled Exploring Listwise Evidence Reasoning with T5 for Fact Verification, which was accepted for publication at ACL 2021, a top-tier natural language processing conference.

About Sourav Biswas’s research

Sourav Biswas is an undergraduate computer science student who works with Professor Gautam Kamath, whose primary research interests are in differentially private machine learning, a theoretical framework to limit what can be deduced about data given a model trained on that data.

A recent focus of Professor Kamath’s research is strengthening the theory of differentially private mean estimation and creating practically useful tools for private estimation. The research to which Sourav contributed extensively is based on a new method of iteratively shrinking confidence sets, which scales easily to multiple dimensions. The approach is practical and works in more complicated settings, including covariance estimation and principal component analysis. This research resulted in a paper titled CoinPress: Practical Private Mean and Covariance Estimation, which was accepted to NeurIPS 2020, the premier conference in machine learning.
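The shrinking-confidence-sets idea can be caricatured in one dimension: maintain an interval believed to contain the mean, clip the data to it, release a noisy mean, and recentre a smaller interval on that estimate for the next round. Clipping to a tighter interval lowers the sensitivity of the mean, so later rounds need less noise. The sketch below is a deliberately simplified illustration with arbitrary parameter choices, not the paper’s calibrated algorithm.

```python
import numpy as np

def private_mean_1d(x, low, high, iters=3, eps=1.0, rng=None):
    """Iteratively refine a private estimate of the mean of `x`.

    Starting from a coarse a-priori interval [low, high], each round
    clips the data to the current interval, releases a Laplace-noised
    mean (the privacy budget eps is split evenly across rounds), and
    shrinks the interval around the estimate. Illustrative only; the
    real method calibrates the shrinkage from concentration bounds.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(x)
    eps_round = eps / iters
    for _ in range(iters):
        clipped = np.clip(x, low, high)
        sensitivity = (high - low) / n          # L1 sensitivity of the clipped mean
        est = clipped.mean() + rng.laplace(0.0, sensitivity / eps_round)
        width = (high - low) / 4                # shrink the interval each round
        low, high = est - width, est + width
    return est

# Toy usage: a very coarse prior interval still yields an accurate estimate.
rng = np.random.default_rng(1)
data = rng.normal(5.0, 1.0, size=10_000)
est = private_mean_1d(data, low=-100.0, high=100.0, rng=rng)
```

The payoff is that the noise in the final round depends on the narrow final interval, not the coarse prior one, which is what makes the approach practical.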

Sourav has also published a paper titled MuSCLE: Multi Sweep Compression of LiDAR using Deep Entropy Models based on research he conducted during a four-month internship at Uber ATG, under the guidance of Professor Raquel Urtasun. He was first author on the paper that focuses on compression for LiDAR sensor data streams, an important component in autonomous vehicle systems. 

About Vikram Subramanian’s research

Vikram Subramanian is an undergraduate software engineering student who works with Professor Mei Nagappan on three research projects — the SE Garage, first commits on GitHub, and Apply+.

Vikram began research with Professor Nagappan two years ago on the SE Garage, a curated archive of software engineering tools that makes finding and downloading tools from past research easier. He started with a collection of tools from hundreds of papers, painstaking work because of the manual effort involved in cleaning and preparing the data. Vikram has so far uploaded tools from more than 50 papers to create a prototype, and he is now working with Professor Nagappan’s collaborators in Europe to automate the process and scale up the collection. Ultimately, the SE Garage will act as an app store for research tools, helping developers build better software.

The popularity of open-source software has never been greater, but for it to remain so, new developers must join the open-source software community and contribute to its projects. Vikram has conducted research that identifies the first commits developers new to open-source projects make, to better understand what kinds of tasks newcomers can take on. Importantly, the research also helps moderators of open-source software projects better understand which tasks are most appealing or useful for first-time contributors to attempt. Vikram was awarded first place in the ACM Student Research Competition at ICSE 2020 for this research, and the full version of his paper, titled Analyzing First Contributions on GitHub: What Do Newcomers Do?, has been published in IEEE Software.

Most recently, Vikram has worked with Professor Nagappan, three other undergraduate students, and industrial collaborators at BlackBerry to build a tool that helps manage software patches. When companies fork and adapt open-source projects in their products, they need to apply security patches from the original project to the fork. Vikram and his undergraduate colleagues built an extended version of the ‘git apply’ tool to automatically apply patches that can be applied without changes, and then help developers identify where and what changes need to be made for other patches. This work resulted in a research paper titled Apply+: A Tool to Intelligently Apply Security Patches, on which Vikram is the lead author, as well as an open-source tool that BlackBerry is evaluating.
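The core difficulty such a tool faces is that a patch’s context lines may no longer match the forked file exactly, so the right location for a hunk must be found approximately. As a toy illustration of that idea (this is not Apply+’s actual algorithm), Python’s standard-library difflib can score every candidate position by similarity:

```python
import difflib

def best_hunk_position(file_lines, context_lines):
    """Return (index, score) where `context_lines` best matches `file_lines`.

    Slides a window the size of the hunk's context over the file and
    scores each window with difflib's similarity ratio; a low best score
    signals that a developer must adapt the patch by hand. Toy sketch
    only, not the Apply+ algorithm.
    """
    window = len(context_lines)
    best_pos, best_score = -1, 0.0
    for i in range(len(file_lines) - window + 1):
        score = difflib.SequenceMatcher(
            None, file_lines[i:i + window], context_lines).ratio()
        if score > best_score:
            best_pos, best_score = i, score
    return best_pos, best_score

# Toy usage: the hunk's context matches the fork exactly at line index 1.
fork = ["def load():", "    path = cfg.path", "    return open(path)", ""]
hunk_ctx = ["    path = cfg.path", "    return open(path)"]
pos, score = best_hunk_position(fork, hunk_ctx)
```

A plain `git apply` rejects a hunk outright when its context has drifted; scoring candidate positions like this is one way to tell a cleanly applicable patch from one that needs developer attention.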
