PhD Seminar • Data Systems — Kamino: Constraint-Aware Differentially Private Data Synthesis
Please note: This PhD seminar will be given online.
Chang Ge, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Ihab Ilyas
Jay Henderson, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Edward Lank
Aishwarya Ganesan, Postdoctoral Researcher
VMware Research
The tradeoff between performance and correctness is pervasive across computer systems such as shared-memory multiprocessors, databases, and local file systems. The same tradeoff exists in distributed storage systems as well; designers must often choose consistency or performance but not both. In this talk, I will show how we can build distributed storage systems that provide strong guarantees yet also perform well.
Jumyung “JC” Chang, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Christopher Batty
Nik Unger, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Ian Goldberg
Jiayi Chen, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Urs Hengartner
Kazem Cheshmi, Department of Computer Science
University of Toronto
Sparse matrix computations are an important class of algorithms frequently used in scientific simulations such as computer graphics and weather modeling, as well as in data analytics and machine learning workloads. The performance of these applications relies heavily on highly efficient implementations of sparse computations.
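To make the abstract's notion of an efficient sparse kernel concrete, here is a minimal sketch of a sparse matrix-vector product in CSR (compressed sparse row) format, one of the core kernels such simulations depend on. The function name and the small matrix are illustrative examples, not taken from the talk.

```python
def csr_spmv(data, indices, indptr, x):
    """Multiply a CSR-format sparse matrix by a dense vector x."""
    y = [0.0] * (len(indptr) - 1)
    for row in range(len(indptr) - 1):
        # indptr[row]..indptr[row+1] bounds the nonzeros of this row
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# 3x3 matrix [[1,0,2],[0,3,0],[4,0,5]] stored in CSR form:
data    = [1.0, 2.0, 3.0, 4.0, 5.0]   # nonzero values, row by row
indices = [0, 2, 1, 0, 2]             # column index of each nonzero
indptr  = [0, 2, 3, 5]                # row start offsets into data
print(csr_spmv(data, indices, indptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

By storing only the nonzeros, CSR skips the zero entries entirely, which is where the efficiency of sparse kernels comes from.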
Xiaokui Xiao, School of Computing
National University of Singapore
Given a graph G, network embedding maps each node in G into a compact, fixed-dimensional feature vector, which can be used in downstream machine learning tasks. Most of the existing methods for network embedding fail to scale to large graphs with millions of nodes, as they either incur significant computation cost or generate low-quality embeddings on such graphs.
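The mapping described above can be illustrated with a toy spectral embedding: each node's coordinates in the top-d eigenvectors of the adjacency matrix serve as its fixed-dimensional feature vector. The function name and the 4-node graph are made-up examples, not the method presented in the talk.

```python
import numpy as np

def spectral_embedding(adj, d):
    """Map each node of a symmetric adjacency matrix to a d-dim vector."""
    # eigh returns eigenvalues in ascending order for symmetric matrices,
    # so the last d columns correspond to the top-d eigenvectors.
    vals, vecs = np.linalg.eigh(adj)
    return vecs[:, -d:]  # one d-dimensional row per node

# Undirected 4-cycle: 0-1-2-3-0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
emb = spectral_embedding(A, 2)
print(emb.shape)  # (4, 2): four nodes, each embedded in 2 dimensions
```

Methods of this eigendecomposition flavor are exactly the kind that struggle at the million-node scale the abstract mentions, since a dense decomposition costs cubic time in the number of nodes.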
Andre Kassis, PhD candidate
David R. Cheriton School of Computer Science
Supervisor: Professor Urs Hengartner