Wenhu Chen will be joining the Cheriton School of Computer Science as a tenure-track Assistant Professor in July 2022.
Wenhu’s research aims to design more powerful natural language processing models that bridge the gap between human language and real-world data, including text, graphs, tables, and images. In particular, Wenhu has made substantial advances in three areas:
- neuro-symbolic reasoning for explainability,
- multi-hop and single-hop reasoning for inference from heterogeneous data, and
- externalizing factual knowledge in language modelling.
While most advances in deep learning have come at the cost of explainability, Wenhu has shown how to combine logical and neural techniques in an innovative way to obtain explainability without sacrificing accuracy.
His second contribution, on inference from heterogeneous data, is also a major advance. Combining and reasoning effectively over multiple sources is a challenge at the edge of what current NLP systems can do, and in this area Wenhu is already a leader.
Wenhu’s third contribution is already having substantial industry impact. Various companies (e.g., OpenAI, Google) are currently competing to develop the largest possible language model, an effort requiring resources beyond the reach of all but the largest organizations. Wenhu’s work reduces the size of a language model by externalizing factual knowledge into knowledge graphs, leaving the language model to store only linguistic information.
Wenhu is currently a PhD student in the Computer Science Department at the University of California, Santa Barbara, where he is advised by William Wang and Xifeng Yan. He holds an MSc in electronics engineering (2016) from RWTH Aachen University in Germany and a BSc in electronics engineering (2014) from Huazhong University of Science and Technology in China.
As of May 2021, Wenhu’s publications have been cited more than 730 times collectively, and he has an h-index of 13, according to Google Scholar.