Vineet John, Master's candidate, David R. Cheriton School of Computer Science
This thesis tackles the problem of disentangling the latent style and content variables in a language modelling context. This involves learning which features of a document are discriminative of its style and which of its content, and encoding these two sets of features separately using neural network models.
We propose a simple and effective approach that incorporates auxiliary objectives: a multi-task classification objective, and dual adversarial objectives for label prediction and bag-of-words prediction. We show, both qualitatively and quantitatively, that this approach indeed disentangles style and content in the latent space.
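Concretely, the combined training objective described above can be sketched as a weighted sum; the weights and the exact decomposition below are illustrative assumptions, not terms taken verbatim from this thesis:

```latex
\mathcal{L}(\theta) \;=\;
    \mathcal{L}_{\mathrm{rec}}
    \;+\; \lambda_{\mathrm{mult}}\,\mathcal{L}_{\mathrm{mult}}(\mathbf{s})
    \;-\; \lambda_{\mathrm{dadv}}\,\mathcal{L}_{\mathrm{dadv}}(\mathbf{c})
    \;-\; \lambda_{\mathrm{badv}}\,\mathcal{L}_{\mathrm{badv}}(\mathbf{s})
```

Here $\mathbf{s}$ and $\mathbf{c}$ denote the style and content embeddings, $\mathcal{L}_{\mathrm{rec}}$ is the reconstruction loss, $\mathcal{L}_{\mathrm{mult}}$ is the multi-task label-classification loss on the style space, and $\mathcal{L}_{\mathrm{dadv}}$ and $\mathcal{L}_{\mathrm{badv}}$ are the adversarial losses penalizing label prediction from the content space and bag-of-words prediction from the style space, respectively; the $\lambda$ terms are hypothetical balancing hyperparameters.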
We apply this disentangled representation learning method to attribute (style) transfer in natural language generation. We achieve content-preservation scores comparable to previous state-of-the-art approaches, and significantly better style-transfer strength scores.