Lili Mou, Postdoctoral Fellow
University of Waterloo
Abstract: Natural language processing (NLP) differs from image and speech processing in that all natural language components are discrete: characters, words, sentences, and paragraphs. This discreteness poses unique challenges to NLP, especially when we deal with a discrete latent space or a discrete output space.
In this talk, I will present several of my recent studies addressing this issue. For the discrete latent space, I will introduce a neural-symbolic framework that relaxes (discrete) reinforcement learning with (continuous) neural networks; experiments on syntactic and semantic parsing show that our approach makes use of both the discrete and continuous worlds, achieving high accuracy, high efficiency, and high interpretability. For the discrete output space, I will introduce a Metropolis-Hastings sampler for sentence generation, which enables several non-trivial applications, including keywords-to-sentence generation, unsupervised sentence paraphrasing, and unsupervised error correction; our approach achieves high performance, even compared with supervised methods.
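To make the latent-space idea concrete, the following is a minimal sketch of one generic way to relax a discrete decision into a continuous, differentiable surrogate (Gumbel-softmax). This is a stand-in illustration of the general "relax discrete choices with continuous computation" principle, not the talk's actual neural-symbolic coupling; the logits, temperature, and three-action setup are invented for demonstration.

import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=np.random.default_rng(0)):
    # Sample Gumbel noise and apply a temperature-controlled softmax:
    # as tau -> 0, the output approaches a one-hot (discrete) sample,
    # while larger tau keeps it smooth and differentiable.
    g = -np.log(-np.log(rng.uniform(1e-12, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()

logits = np.array([1.0, 2.0, 0.5])   # hypothetical scores over 3 discrete actions
soft = gumbel_softmax(logits)        # continuous proxy used during training
hard = np.eye(3)[soft.argmax()]      # discretized decision used at test time
print(soft, hard)

The design point this toy example captures is the same trade-off the talk describes: training happens in the continuous world (so gradients flow), while the symbolic, discrete decision is recovered afterwards.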
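The output-space idea can likewise be illustrated with a toy Metropolis-Hastings loop over sentences. The sketch below assumes a hypothetical lm_score function returning a log-probability, uses a tiny made-up vocabulary, and treats its word-level replace/insert/delete proposals as roughly symmetric, omitting the proposal-ratio correction a full sampler would include; it shows the general sampling technique, not the talk's exact sampler.

import math, random

VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

def lm_score(words):
    # Placeholder log-probability; a real system would query a trained
    # language model (and constraint terms, e.g. keyword inclusion).
    return -0.5 * len(words)

def propose(words):
    # Word-level proposal: replace, insert, or delete one token.
    words = list(words)
    op = random.choice(["replace", "insert", "delete"])
    i = random.randrange(len(words) + (op == "insert"))
    if op == "replace":
        words[i] = random.choice(VOCAB)
    elif op == "insert":
        words.insert(i, random.choice(VOCAB))
    elif len(words) > 1:             # delete, but never empty the sentence
        words.pop(i)
    return words

def mh_sample(init, steps=1000):
    cur, cur_score = list(init), lm_score(init)
    for _ in range(steps):
        cand = propose(cur)
        cand_score = lm_score(cand)
        # Accept with probability min(1, p(cand)/p(cur)); scores are log-probs.
        if math.log(random.random() + 1e-12) < cand_score - cur_score:
            cur, cur_score = cand, cand_score
    return cur

print(" ".join(mh_sample(["the", "fox"])))

Because the sampler only needs a scoring function rather than paired training data, the same loop can serve keywords-to-sentence generation, paraphrasing, or error correction by changing what lm_score rewards, which is what makes the unsupervised applications in the abstract possible.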
Bio: Lili Mou is currently a postdoc at the University of Waterloo. Lili received his BS and PhD degrees in 2012 and 2017, respectively, from the School of EECS, Peking University. After that, he worked as a postdoc at the University of Waterloo as well as a research scientist at Adeptmind Inc. His research interests include deep learning applied to natural language processing as well as to programming language processing. He has publications in top conferences and journals such as AAAI, ACL, CIKM, COLING, EMNLP, ICML, IJCAI, INTERSPEECH, and TACL (in alphabetical order).