Training End-to-End Dialogue Systems with the Ubuntu Dialogue Corpus

Authors

  • Ryan Lowe School of Computer Science, McGill University
  • Nissan Pow School of Computer Science, McGill University
  • Iulian Vlad Serban DIRO, Université de Montréal
  • Laurent Charlin School of Computer Science, McGill University
  • Chia-Wei Liu School of Computer Science, McGill University
  • Joelle Pineau School of Computer Science, McGill University

DOI:

https://doi.org/10.5087/dad.2017.102

Abstract

In this paper, we construct and train end-to-end neural network-based dialogue systems using an updated version of the recent Ubuntu Dialogue Corpus, a dataset containing almost 1 million multi-turn dialogues, with a total of over 7 million utterances and 100 million words. This dataset is interesting because of its size, long context lengths, and technical nature; thus, it can be used to train large models directly from data with minimal feature engineering, which can be both time-consuming and expensive. We provide baselines in two different environments: one where models are trained to maximize the log-likelihood of a generated utterance conditioned on the context of the conversation, and one where models are trained to select the correct next response from a list of candidate responses. These are both evaluated on a recall task that we call Next Utterance Classification (NUC), as well as other generation-specific metrics. Finally, we provide a qualitative error analysis to help determine the most promising directions for future research on the Ubuntu Dialogue Corpus, and for end-to-end dialogue systems in general.
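
To make the Next Utterance Classification (NUC) setup concrete, the sketch below shows how a Recall@k score can be computed: for each conversation context, a model assigns a score to every candidate response, and the metric counts how often the true response falls among the top-k scored candidates. This is a minimal illustration of the evaluation described in the abstract, not the paper's actual code; the function names and the toy word-overlap scorer are hypothetical.

from typing import Callable, Sequence, Tuple

def recall_at_k(
    score_fn: Callable[[str, str], float],
    examples: Sequence[Tuple[str, Sequence[str], int]],
    k: int,
) -> float:
    """Fraction of examples whose true response is ranked in the top k.

    Each example is (context, candidate_responses, index_of_true_response).
    """
    hits = 0
    for context, candidates, true_index in examples:
        scores = [score_fn(context, c) for c in candidates]
        # Candidate indices sorted by descending model score.
        ranked = sorted(range(len(candidates)), key=lambda i: -scores[i])
        if true_index in ranked[:k]:
            hits += 1
    return hits / len(examples)

if __name__ == "__main__":
    # Toy scorer (stand-in for a trained model): word overlap
    # between the context and a candidate response.
    def overlap_score(context: str, candidate: str) -> float:
        return float(len(set(context.split()) & set(candidate.split())))

    data = [
        ("how do I install a package ?",
         ["use sudo apt-get install package", "the weather is nice"], 0),
    ]
    print(recall_at_k(overlap_score, data, k=1))  # 1.0 on this toy example

In the paper's retrieval setting, score_fn would be a learned model (e.g. a dual-encoder network) rather than this word-overlap heuristic, and each context is typically paired with one true response and several distractors.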

Published

2017-01-20

Section

Articles