- PhD, Computer Science, Harvard University (2023)
- Master of Language Technologies, Carnegie Mellon University (2016)
- Bachelor of Engineering, Department of Automation, Tsinghua University (2014)
Yuntian Deng’s research interests center on the intersection of natural language processing, machine learning, and multi-agent systems. Specifically, he explores how large language models (LLMs) can communicate and collaborate to solve complex tasks together, and how they can be trained to specialize in different domains, enabling a division of labor. His key focus areas include:
- Inducing Latent Language for Inter-LLM Communication: Developing methods to induce a latent, specialized language for communication between LLMs, enabling them to leverage one another’s expertise.
- Communication for Models Across Modalities: Extending inter-LLM communication methods to enable collaboration among models that specialize in different modalities, such as language, images, and sensory data.
- Collaborative Training for Division of Labor among Models: Exploring ways to foster a division of labor among models, using communication as a tool to distribute knowledge among them during the training process.