Research Interests
My research focuses on trustworthy machine learning, specifically examining how external training data affects model performance and robustness. This includes studying data poisoning attacks, the impact of problematic training data (e.g., disguised copyrighted material), and developing machine unlearning techniques to mitigate their effects.
More generally, I am interested in building a trustworthy machine learning pipeline, spanning data, training procedures, and models. Topics of interest include (but are not limited to) memorization, data attribution, neural network quantization, self-supervised learning, diffusion model training, and tools in non-convex optimization.
|
|
Alignment Calibration: Machine Unlearning for Contrastive Learning under Auditing
Yihan Wang*,
Yiwei Lu*,
Guojun Zhang,
Franziska Boenisch,
Adam Dziedzic,
Yaoliang Yu,
Xiaoshan Gao
ICML 2024 NextGenAISafety Workshop  (Oral)
arXiv
We address machine unlearning for contrastive learning pretraining schemes via a novel method called Alignment Calibration, and propose new auditing tools that let data owners easily verify the effect of unlearning.
|
|
Machine Unlearning Fails to Remove Data Poisoning Attacks
Martin Pawelczyk,
Jimmy Z. Di,
Yiwei Lu,
Gautam Kamath,
Ayush Sekhari,
Seth Neel
ICML 2024 Generative AI and Law Workshop  (Spotlight)
arXiv
We find that current approaches for machine unlearning (MUL) are not effective at removing the effect of data poisoning attacks.
|
|
On the Robustness of Neural Network Quantization against Data Poisoning Attacks
Yiwei Lu,
Yihan Wang,
Guojun Zhang,
Yaoliang Yu,
ICML 2024 NextGenAISafety Workshop
paper
We find that neural network quantization offers improved robustness against different data poisoning attacks.
|
|
Disguised Copyright Infringement of Latent Diffusion Models
Yiwei Lu*,
Matthew Y.R. Yang*,
Zuoqiu Liu*,
Gautam Kamath,
Yaoliang Yu
ICML 2024 / ICML 2024 Generative AI and Law Workshop
arXiv
We reveal the threat of disguised copyright infringement of latent diffusion models, where one constructs a disguise that looks drastically different from the copyrighted sample yet still induces the same effect as training on it.
|
|
Indiscriminate Data Poisoning Attacks on Pre-trained Feature Extractors
Yiwei Lu,
Matthew Y.R. Yang,
Gautam Kamath,
Yaoliang Yu
IEEE SaTML 2024
arXiv
We study indiscriminate data poisoning attacks against pre-trained feature extractors in fine-tuning and transfer learning tasks, and propose feature-targeted attacks to address the optimization difficulty under constraints.
|
|
Understanding Neural Network Binarization with Forward and Backward Proximal Quantizers
Yiwei Lu,
Yaoliang Yu,
Xinlin Li,
Vahid Partovi Nia
NeurIPS 2023
paper
We propose forward-backward proximal quantizers for understanding approximate gradients in neural network quantization and provide a new tool for designing quantization algorithms.
|
|
f-MICL: Understanding and Generalizing InfoNCE-based Contrastive Learning
Yiwei Lu*,
Guojun Zhang*,
Sun Sun,
Hongyu Guo,
Yaoliang Yu
Transactions on Machine Learning Research
paper
We propose a general and novel loss function for contrastive learning based on f-mutual information. Additionally, we propose an f-Gaussian similarity function with better interpretability and empirical performance.
|
|
CM-GAN: Stabilizing GAN Training with Consistency Models
Haoye Lu,
Yiwei Lu,
Dihong Jiang,
Spencer Ryan Szabados,
Sun Sun,
Yaoliang Yu
ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling
paper
We propose CM-GAN by combining the main strengths of diffusions and GANs while mitigating their major drawbacks.
|
|
Exploring the Limits of Model-Targeted Indiscriminate Data Poisoning Attacks
Yiwei Lu,
Gautam Kamath,
Yaoliang Yu
ICML 2023
paper
We find that (1) existing indiscriminate attacks are not well designed (or optimized), and we reduce the performance gap with a new attack; and (2) there exist intrinsic barriers to data poisoning: when the poisoning fraction is smaller than an (easy-to-calculate) threshold, no attack succeeds.
|
|
Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu,
Gautam Kamath,
Yaoliang Yu
Transactions on Machine Learning Research (also appeared in the NeurIPS 2022 ML Safety Workshop and the Trustworthy and Socially Responsible Machine Learning (TSRML) Workshop)
paper/
code
We find that neural networks are surprisingly hard to poison indiscriminately, and we give better attacks.
|
|
f-mutual Information Contrastive Learning
Guojun Zhang*,
Yiwei Lu*,
Sun Sun,
Hongyu Guo,
Yaoliang Yu
NeurIPS 2021 Workshop on Self-Supervised Learning   (Contributed Talk)
paper/
poster/
talk
We propose a general and novel loss function for contrastive learning based on f-mutual information.
|
|
AdaCrowd: Unlabeled Scene Adaptation for Crowd Counting
Mahesh Kumar Krishna Reddy,
Mrigank Rochan,
Yiwei Lu,
Yang Wang
IEEE Transactions on Multimedia (TMM), 2021  
arXiv /
code
We propose a new problem called unlabeled scene adaptive crowd counting.
|
|
Few-shot Scene-adaptive Anomaly Detection
Yiwei Lu,
Frank Yu,
Mahesh Kumar Krishna Reddy,
Yang Wang
ECCV, 2020   (Spotlight)
arXiv /
code
We propose a more realistic problem setting for anomaly detection in surveillance videos and solve it using a meta-learning based algorithm.
|
|
Structure Learning with Similarity Preserving
Zhao Kang,
Xiao Lu,
Yiwei Lu,
Chong Peng,
Wenyu Chen,
Zenglin Xu
Neural Networks, 2020
arXiv
We propose a structure learning framework that retains the pairwise similarities between the data points.
|
|
Future Frame Prediction Using Convolutional VRNN for Anomaly Detection
Yiwei Lu,
Mahesh Kumar Krishna Reddy,
Seyed Shahabeddin Nabavi,
Yang Wang
AVSS, 2019
arXiv /
code
We propose a novel sequential generative model for future frame prediction, based on a variational autoencoder (VAE) with convolutional LSTMs (ConvLSTM).
|
|
Similarity Learning via Kernel Preserving Embedding
Zhao Kang,
Yiwei Lu,
Yuanzhang Su,
Changsheng Li,
Zenglin Xu
AAAI, 2019
PDF
We propose a novel similarity learning framework by minimizing the reconstruction error of kernel matrices, rather than the reconstruction error of original data adopted by existing work.
|
|
Semantic Segmentation in Compressed Videos
Yiwei Lu*,
Ang Li*,
Yang Wang
MMSP, 2019
PDF
We propose a ConvLSTM-based model to perform semantic segmentation on compressed videos directly, which significantly speeds up both training and inference.
|
|