Freda Shi
  石昊悦

Greetings! I am an Assistant Professor in the David R. Cheriton School of Computer Science at the University of Waterloo and a Faculty Member at the Vector Institute, where I also hold a Canada CIFAR AI Chair. I received my Ph.D. in Computer Science from the Toyota Technological Institute at Chicago in 2024, where I was advised by Professors Karen Livescu and Kevin Gimpel, and was supported by a Google Ph.D. Fellowship. I completed my Bachelor's degree in Intelligence Science and Technology (Computer Science Track) in 2018 at Peking University, with a minor in Sociology.

Research

My research interests are in computational linguistics and natural language processing. I work toward a deeper understanding of natural language and the human language processing mechanism, and toward applying these insights to the design of more efficient, effective, safe, and trustworthy NLP systems. I am particularly interested in learning language through grounding, computational multilingualism, and related machine learning problems. For more details, see my publications and the CompLING Lab at the University of Waterloo.

Prospective students and visitors: please read this.

Publications

(*: Equal Contribution)

Do Vision-Language Models Represent Space and How? Evaluating Spatial Frame of Reference Under Ambiguities
Zheyuan Zhang*, Fengyuan Hu*, Jayjun Lee*, Freda Shi, Parisa Kordjamshidi, Joyce Chai, Ziqiao Ma

Abridged Version Presented at the Pluralistic Alignment Workshop, NeurIPS 2024
ICLR 2025  Oral Presentation    Paper / Code / arXiv / Project Page / Data

Learning Language Structures through Grounding
Freda Shi

Ph.D. Thesis, Toyota Technological Institute at Chicago, June 2024    Paper
Thesis of Distinction
AAAI 2025 New Faculty Highlight

LogogramNLP: Comparing Visual and Textual Representations of Ancient Logographic Writing Systems for NLP
Danlu Chen, Freda Shi, Aditi Agarwal, Jacobo Myerston, Taylor Berg-Kirkpatrick

ACL 2024  Best Paper Nominee    Paper / Project Page

Structured Tree Alignment for Evaluation of (Speech) Constituency Parsing
Freda Shi, Kevin Gimpel, Karen Livescu

ACL 2024 Paper / Code / arXiv

Audio-Visual Neural Syntax Acquisition
Cheng-I Jeff Lai*, Freda Shi*, Puyuan Peng*, Yoon Kim, Kevin Gimpel, Shiyu Chang, Yung-Sung Chuang, Saurabhchand Bhati, David Cox, David Harwath, Yang Zhang, Karen Livescu, James Glass

ASRU 2023 Paper / Code / arXiv

Language Models are Multilingual Chain-of-Thought Reasoners
Freda Shi*, Mirac Suzgun*, Markus Freitag, Xuezhi Wang, Suraj Srivats, Soroush Vosoughi, Hyung Won Chung, Yi Tay, Sebastian Ruder, Denny Zhou, Dipanjan Das, Jason Wei

ICLR 2023 Paper / arXiv / Data

Natural Language to Code Translation with Execution
Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, Sida I. Wang

EMNLP 2022 Paper / Code / arXiv

Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing
Freda Shi, Kevin Gimpel, Karen Livescu

ACL 2022 Paper / Code / arXiv

Grammar-Based Grounded Lexicon Learning
Jiayuan Mao, Haoyue Shi, Jiajun Wu, Roger P. Levy, Joshua B. Tenenbaum

NeurIPS 2021 Paper / arXiv / Project Page

Bilingual Lexicon Induction via Unsupervised Bitext Construction and Word Alignment
Haoyue Shi, Luke Zettlemoyer, Sida I. Wang

ACL-IJCNLP 2021  Best Paper Nominee    Paper / Code / arXiv

Visually Grounded Neural Syntax Acquisition
Haoyue Shi*, Jiayuan Mao*, Kevin Gimpel, Karen Livescu

ACL 2019  Best Paper Nominee    Paper / Code / arXiv

On Tree-Based Neural Sentence Modeling
Haoyue Shi, Hao Zhou, Jiaze Chen, Lei Li

EMNLP 2018 Paper / Code / arXiv

Learning Visually-Grounded Semantics from Contrastive Adversarial Samples
Haoyue Shi*, Jiayuan Mao*, Tete Xiao*, Yuning Jiang, Jian Sun

COLING 2018 Paper / Code / arXiv