Yiwei Lu

I am a final-year Ph.D. student in the David R. Cheriton School of Computer Science at the University of Waterloo, where I am supervised by Prof. Yaoliang Yu and Dr. Sun Sun. I am also a student affiliate of the Vector Institute and a research affiliate of The Salon with Prof. Gautam Kamath.

Previously, I completed my M.Sc. in Computer Science at the University of Manitoba, where I was advised by Prof. Yang Wang. I received my bachelor's degree from the University of Electronic Science and Technology of China and was an exchange student at UC Santa Barbara.

I am currently on the academic job market for faculty positions.

Email  /  CV  /  Google Scholar  /  GitHub  /  LinkedIn

News

  New! July 2024: New paper on arXiv: Machine Unlearning Fails to Remove Data Poisoning Attacks.
  New! June 2024: New paper on arXiv: Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing.
  May 2024: One paper accepted to ICML 2024: Disguised Copyright Infringement of Latent Diffusion Models.
  Feb 2024: One paper accepted to SaTML 2024 (presented on April 11th).
  Jan 2024: I was awarded the Cheriton Scholarship.
  Dec 2023: I was selected as a top reviewer for NeurIPS 2023.
  Oct 2023: One paper accepted to NeurIPS 2023.
  Oct 2023: One paper accepted to TMLR.
  May 2023: One paper accepted to ICML 2023.
  Dec 2022: One paper accepted to TMLR.
  Feb 2022: I am joining Huawei Noah's Ark Lab in Montreal as an intern.
  Jan 2021: I am joining the National Research Council of Canada as an intern.
  Aug 2020: I am joining the University of Waterloo and the Vector Institute as a Ph.D. student.

Research Interests

My research focuses on trustworthy machine learning, specifically examining how external training data affects model performance and robustness. This includes studying data poisoning attacks, the impact of problematic training data (e.g., disguised copyrighted material), and developing machine unlearning techniques to mitigate their effects.

More generally, I am interested in building a trustworthy machine learning pipeline, spanning data, training procedures, and models. Topics include (but are not limited to) memorization, data attribution, neural network quantization, self-supervised learning, diffusion training, and tools in non-convex optimization.

Selected Publications
Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing
Yihan Wang*, Yiwei Lu*, Guojun Zhang, Franziska Boenisch, Adam Dziedzic, Yaoliang Yu, Xiaoshan Gao
ICML 2024 NextGenAISafety Workshop  (Oral)
arXiv

We address machine unlearning for contrastive learning pretraining schemes via a novel method called Alignment Calibration. We also propose new auditing tools for data owners to easily validate the effect of unlearning.

Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk, Jimmy Z. Di, Yiwei Lu, Gautam Kamath, Ayush Sekhari, Seth Neel
ICML 2024 Generative AI and Law Workshop  (Spotlight)
arXiv

We find that current approaches for machine unlearning (MUL) are not effective at removing the effect of data poisoning attacks.

On the Robustness of Neural Networks Quantization against Data Poisoning Attacks
Yiwei Lu, Yihan Wang, Guojun Zhang, Yaoliang Yu
ICML 2024 NextGenAISafety Workshop
paper

We find that neural network quantization offers improved robustness against different data poisoning attacks.

Disguised Copyright Infringement of Latent Diffusion Models
Yiwei Lu*, Matthew Y.R. Yang*, Zuoqiu Liu*, Gautam Kamath, Yaoliang Yu
ICML 2024 / ICML 2024 Generative AI and Law Workshop
arXiv

We reveal the threat of disguised copyright infringement of latent diffusion models, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the effect of training latent diffusion models on it.

Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Yiwei Lu, Matthew Y.R. Yang, Gautam Kamath, Yaoliang Yu
IEEE SaTML 2024
arXiv

We study indiscriminate data poisoning attacks against pre-trained feature extractors for fine-tuning and transfer learning tasks, and propose feature-targeted attacks to address the optimization difficulty under constraints.

Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers
Yiwei Lu, Yaoliang Yu, Xinlin Li, Vahid Partovi Nia
NeurIPS 2023
paper

We propose forward and backward proximal quantizers for understanding approximate gradients in neural network quantization and provide a new tool for designing new algorithms.

f-MICL: Understanding and Generalizing InfoNCE-based Contrastive Learning
Yiwei Lu*, Guojun Zhang*, Sun Sun, Hongyu Guo, Yaoliang Yu
Transactions on Machine Learning Research
paper

We propose a general and novel loss function for contrastive learning based on f-mutual information. Additionally, we propose an f-Gaussian similarity function with better interpretability and empirical performance.

CM-GAN: Stabilizing GAN Training with Consistency Models
Haoye Lu, Yiwei Lu, Dihong Jiang, Spencer Ryan Szabados, Sun Sun, Yaoliang Yu
ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling
paper

We propose CM-GAN by combining the main strengths of diffusions and GANs while mitigating their major drawbacks.

Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
ICML 2023
paper

We find that (1) existing indiscriminate attacks are not well designed (or optimized), and we reduce the performance gap with a new attack; and (2) there exist intrinsic barriers to data poisoning attacks: when the poisoning fraction is smaller than an easy-to-calculate threshold, no attack succeeds.

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
Transactions on Machine Learning Research (also appeared in the NeurIPS 2022 ML Safety Workshop and the Trustworthy and Socially Responsible Machine Learning (TSRML) Workshop)
paper / code

We find that neural networks are surprisingly hard to (indiscriminate) poison and give better attacks.

f-mutual Information Contrastive Learning
Guojun Zhang*, Yiwei Lu*, Sun Sun, Hongyu Guo, Yaoliang Yu
NeurIPS 2021 Workshop on Self-Supervised Learning   (Contributed Talk)
paper / poster / talk

We propose a general and novel loss function for contrastive learning based on f-mutual information.

AdaCrowd: Unlabeled Scene Adaptation for Crowd Counting
Mahesh Kumar Krishna Reddy, Mrigank Rochan, Yiwei Lu, Yang Wang
IEEE Transactions on Multimedia (TMM), 2021  
arXiv / code

We propose a new problem called unlabeled scene-adaptive crowd counting.

Few-shot Scene-adaptive Anomaly Detection
Yiwei Lu, Frank Yu, Mahesh Kumar Krishna Reddy, Yang Wang
ECCV, 2020   (Spotlight)
arXiv / code

We propose a more realistic problem setting for anomaly detection in surveillance videos and solve it using a meta-learning based algorithm.

Structure Learning with Similarity Preserving
Zhao Kang, Xiao Lu, Yiwei Lu, Chong Peng, Wenyu Chen, Zenglin Xu
Neural Networks, 2020
arXiv

We propose a structure learning framework that retains the pairwise similarities between the data points.

Future Frame Prediction Using Convolutional VRNN for Anomaly Detection
Yiwei Lu, Mahesh Kumar Krishna Reddy, Seyed Shahabeddin Nabavi, Yang Wang
AVSS, 2019
arXiv / code

We propose a novel sequential generative model based on a variational autoencoder (VAE) for future frame prediction with a convolutional LSTM (ConvLSTM).

Similarity Learning via Kernel Preserving Embedding
Zhao Kang, Yiwei Lu, Yuanzhang Su, Changsheng Li, Zenglin Xu
AAAI, 2019
PDF

We propose a novel similarity learning framework by minimizing the reconstruction error of kernel matrices, rather than the reconstruction error of original data adopted by existing work.

Semantic Segmentation in Compressed Videos
Yiwei Lu*, Ang Li*, Yang Wang
MMSP, 2019
PDF

We propose a ConvLSTM-based model to perform semantic segmentation on compressed videos directly, which significantly speeds up both training and testing.

Thesis

Anomaly Detection in Surveillance Videos using Deep Learning - Yiwei Lu, M.Sc. thesis, Department of Computer Science, University of Manitoba, June 2020.


Credit to Jon Barron for the website design.