PhD Defence • Natural Language Processing • Novel Methods for Natural Language Modeling and Pretraining

Tuesday, January 10, 2023 — 9:00 AM to 12:00 PM EST

Please note: This PhD defence will take place online.

He (Richard) Bai, PhD candidate
David R. Cheriton School of Computer Science

Supervisor: Professor Ming Li

This thesis addresses the modeling of natural language sequences, both text and speech, with Transformer-based language models, with the goals of lower perplexity, better generation, and stronger performance on downstream language tasks. We present three new techniques, each improving sequence modeling in a different way.

First, we propose Segment-Aware Language Modeling, which encodes richer positional information by replacing the original token position encoding with a combined position encoding of paragraph, sentence, and token. Applying our approach to Transformer-XL, we train a new language model, Segatron-XL, that achieves a 6.6–7.8% relative reduction in perplexity. Additionally, BERT pretrained with our method, SegaBERT, outperforms BERT on general language understanding, sentence representation learning, and machine reading comprehension tasks. Furthermore, our SegaBERT-large model outperforms RoBERTa-large on zero-shot STS tasks. These experimental results demonstrate that the proposed Segatron works both for language models with relative position embeddings and for pretrained language models with absolute position embeddings.
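
To make the first idea concrete, here is a minimal sketch of a segment-aware position embedding in PyTorch. The class name, table sizes, and the additive combination are illustrative assumptions for the absolute-position setting, not Segatron's released implementation.

    # A minimal sketch of a segment-aware position embedding in PyTorch.
    # Names, table sizes, and index conventions are illustrative
    # assumptions, not Segatron's actual code.
    import torch
    import torch.nn as nn

    class SegmentAwarePositionEmbedding(nn.Module):
        def __init__(self, d_model, max_tokens=512, max_sents=64, max_paras=16):
            super().__init__()
            # One table per granularity; their sum replaces the usual
            # single token-position embedding.
            self.tok = nn.Embedding(max_tokens, d_model)
            self.sent = nn.Embedding(max_sents, d_model)
            self.para = nn.Embedding(max_paras, d_model)

        def forward(self, tok_pos, sent_pos, para_pos):
            # Each input: LongTensor of shape (batch, seq_len) giving the
            # token's position within its sentence, the sentence's position
            # within its paragraph, and the paragraph's position in the
            # document.
            return self.tok(tok_pos) + self.sent(sent_pos) + self.para(para_pos)

A Transformer would add this combined embedding to the token embeddings at the same point where it would normally add token positions alone.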

Second, we propose Hypernym-Instructed Language Modeling, which maps words sharing a common WordNet hypernym to the same class and trains large neural language models by gradually annealing from class prediction to token prediction over the course of training. Class-based prediction implicitly aggregates context across similar words and can therefore improve generalization for rare words. Empirically, this curriculum learning strategy consistently reduces perplexity across various large, highly performant, state-of-the-art Transformer-based models on two datasets, Wikipedia and arXiv. Our analysis shows that the performance improvement is achieved without sacrificing performance on rare words.
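
As a rough illustration of the second technique, the sketch below maps a word to a hypernym-derived class using NLTK's WordNet interface and linearly anneals a mixed loss from class prediction to token prediction. Taking the first sense's first hypernym and using a linear schedule are simplifying assumptions, not the thesis's exact procedure.

    # A rough sketch of hypernym-class annealing. The WordNet lookup uses
    # NLTK's real interface; the class choice and schedule are assumptions.
    import torch.nn.functional as F
    from nltk.corpus import wordnet as wn

    def hypernym_class(word):
        # Map a word to a class named after a hypernym of its first sense;
        # words WordNet does not know fall back to their own class.
        synsets = wn.synsets(word)
        if not synsets:
            return word
        hypernyms = synsets[0].hypernyms()
        return hypernyms[0].name() if hypernyms else synsets[0].name()

    def annealed_loss(token_logits, class_logits, token_ids, class_ids,
                      step, total_steps):
        # Shift the training signal from class prediction to token
        # prediction as training progresses (the curriculum).
        alpha = min(step / total_steps, 1.0)   # 0 -> classes, 1 -> tokens
        loss_tokens = F.cross_entropy(token_logits, token_ids)
        loss_classes = F.cross_entropy(class_logits, class_ids)
        return alpha * loss_tokens + (1.0 - alpha) * loss_classes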

Third, we propose Alignment-Aware Acoustic and Text Modeling (A3T), which reconstructs masked acoustic signals given text input and acoustic-text alignment during training. In this way, the pretrained model can generate high-quality reconstructed spectrograms, which can be applied directly to speech editing and new-speaker TTS. Experiments show that A3T outperforms state-of-the-art models on speech editing and improves multi-speaker speech synthesis without an external speaker verification model.
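
A minimal sketch of the alignment-aware masking step is shown below, assuming frame-level phone alignments are already available; the tensor shapes and function names are hypothetical, not A3T's actual interface.

    # A minimal sketch of alignment-aware masking for A3T-style training,
    # assuming frame-level phone alignments; shapes and names are
    # hypothetical.
    import torch

    def mask_aligned_spans(spectrogram, alignments, phones_to_mask):
        # spectrogram: (n_frames, n_mels); alignments: list of
        # (phone, start_frame, end_frame); phones_to_mask: indices into
        # that list whose acoustic spans should be hidden.
        masked = spectrogram.clone()
        is_target = torch.zeros(spectrogram.size(0), dtype=torch.bool)
        for i, (_, start, end) in enumerate(alignments):
            if i in phones_to_mask:
                masked[start:end] = 0.0        # hide the aligned frames
                is_target[start:end] = True    # frames to reconstruct
        return masked, is_target

The model would then see the full phone sequence alongside the partially masked spectrogram and be trained to reconstruct only the masked frames, which is what later allows it to fill in edited regions or synthesize speech for a new speaker.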


To join this PhD defence on Zoom, please go to https://uwaterloo.zoom.us/j/9968684700.

Location
Online PhD defence
