Ankit Vadehra, Master's candidate
David R. Cheriton School of Computer Science
The popularity of deep neural networks and the vast amount of readily available multi-domain textual data have led to the advent of various domain/task-specific and domain-agnostic dialogue systems. In our work, we present a general dialogue system that can provide a custom response based on a selected emotion or sentiment label. A dialogue system that can vary its response according to different affect labels can be very helpful for designing help-desk or social assistant systems, where the response must follow a certain affective tone, such as positivity or compassion.
We approach our task by designing a model that can generate coherent response utterances and augmenting it with the ability to separate sentence content from affect (emotion or sentiment). To create a dialogue system that generates responses, we use an encoder-decoder architecture, the Sequence-to-Sequence (Seq2Seq) model, while adversarial learning is used to separate the source utterance's content information from its affect. With our model, diverse affect-conditioned responses can be generated by modifying the affect label. We explore different models trained on a dialogue dataset gathered from Twitter and compare their effectiveness at creating an emotion-responsive dialogue system.
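To make the affect-conditioning idea concrete, the following is a minimal, untrained sketch of how a Seq2Seq decoder can be conditioned on an affect label: the encoder folds the source utterance into a content vector, and the decoder receives an affect embedding at every step, so changing the label changes the response for the same content. All names, dimensions, and the greedy decoding loop are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

# Illustrative affect-conditioned Seq2Seq sketch (random weights stand in
# for trained parameters; dimensions and names are assumptions).
rng = np.random.default_rng(0)

VOCAB, EMB, HID = 50, 8, 16
AFFECTS = {"positive": 0, "negative": 1, "neutral": 2}

E_tok = rng.normal(size=(VOCAB, EMB))            # token embeddings
E_aff = rng.normal(size=(len(AFFECTS), EMB))     # affect-label embeddings
W_enc = rng.normal(size=(EMB + HID, HID))        # encoder recurrence
W_dec = rng.normal(size=(EMB + EMB + HID, HID))  # decoder sees token + affect
W_out = rng.normal(size=(HID, VOCAB))            # output projection

def encode(tokens):
    """Fold the source utterance into a single content vector."""
    h = np.zeros(HID)
    for t in tokens:
        h = np.tanh(np.concatenate([E_tok[t], h]) @ W_enc)
    return h

def decode(h, affect, steps=5):
    """Greedy decoding; the affect embedding is injected at every step."""
    a = E_aff[AFFECTS[affect]]
    out, tok = [], 0  # assume token id 0 is a start-of-sequence marker
    for _ in range(steps):
        h = np.tanh(np.concatenate([E_tok[tok], a, h]) @ W_dec)
        tok = int(np.argmax(h @ W_out))
        out.append(tok)
    return out

content = encode([3, 17, 42])
# Same content vector, different affect labels -> different token sequences.
print(decode(content, "positive"))
print(decode(content, "negative"))
```

In the full model, the adversarial component would additionally penalize the encoder whenever the content vector still predicts the source affect, pushing affect information out of `content` and into the label pathway alone.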