Bayesian Thinking on Your Feet: Embedding Generative Models in Reinforcement Learning for Sequentially Revealed Data

Project funded by the National Science Foundation (IIS-1320538)
PI: Jordan Boyd-Graber, Co-PI: Hal Daumé III, University of Maryland

Overview

The goal of this project is to create algorithms that can "think on their feet", i.e., incrementally process input and decide when enough information has been received to act on it. This research requires innovation in two areas: content models (to make accurate predictions even before all of the input has arrived) and policies (to decide when to trust the outputs of the content models, or to recognize that they will not improve, versus waiting for more information).
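The content-model/policy split above can be sketched as a simple incremental loop. Everything here is illustrative: the `run_incremental`, `toy_model`, and `threshold_policy` names, the toy German sentence, and the confidence scheme are invented for this sketch, not the project's actual interfaces.

```python
# A minimal sketch of the content-model / policy split: consume input one
# token at a time, and act as soon as the policy trusts the content model.

def run_incremental(stream, content_model, policy):
    """Consume tokens one at a time; act when the policy says to."""
    seen = []
    for token in stream:
        seen.append(token)
        prediction, confidence = content_model(seen)
        if policy(prediction, confidence, len(seen)):
            return prediction, len(seen)      # act early on predicted content
    return content_model(seen)[0], len(seen)  # input exhausted: must act now


def toy_model(seen):
    """Toy content model: guesses the full sentence, growing more
    confident as more of it is observed (confidence = fraction seen)."""
    target = ["ich", "habe", "den", "ball", "gesehen"]
    return " ".join(target), len(seen) / len(target)


def threshold_policy(prediction, confidence, n_seen, tau=0.6):
    """Toy policy: commit once model confidence clears a threshold."""
    return confidence >= tau


result, tokens_used = run_incremental(
    ["ich", "habe", "den", "ball", "gesehen"], toy_model, threshold_policy)
```

With these toy components the policy commits after three of the five tokens; a learned policy would instead be trained to trade off the cost of acting early and wrong against the cost of waiting.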

We are applying these models to two problems: synchronous machine translation (or "machine simultaneous interpretation") and question answering in which questions are revealed one piece at a time.

In synchronous machine translation, a sentence is produced one word at a time in a foreign language, and we want to produce an English translation simultaneously, i.e., with as little delay as possible between a foreign-language word and its English translation. In this setting, the content model predicts words that will appear in the input stream, even though they have not yet been seen. This is particularly important in verb-final languages like German or Japanese, where an English translation can barely begin until the verb is seen. For the simultaneous translation problem, our content model must predict unseen elements of the sentence (e.g., the main verb in German and Japanese, relative clauses in Japanese, or post-positions in Japanese). The job of the policy is to decide when to trust the content model's prediction: it must learn to balance incorrect translations against timely ones, and must use those predictions to translate the sentence.
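Verb prediction in a verb-final language can be illustrated with a toy statistical model. The vocabulary, counts, and `predict_verb` function below are invented for illustration; they are not the project's data or method, which learns far richer predictors.

```python
# Hypothetical sketch: predict the sentence-final German verb from the
# direct object, using invented co-occurrence counts.
from collections import Counter

# Toy "training" statistics: which final verb followed each direct object.
verb_after_object = {
    "ball":  Counter({"geworfen": 8, "gesehen": 2}),     # threw / saw
    "brief": Counter({"geschrieben": 9, "gelesen": 1}),  # wrote / read
}

def predict_verb(obj):
    """Return the most likely sentence-final verb and its probability."""
    counts = verb_after_object[obj]
    verb, n = counts.most_common(1)[0]
    return verb, n / sum(counts.values())

verb, prob = predict_verb("ball")
# A policy could commit to an early translation ("... threw the ball")
# only when prob clears a learned threshold, paying a penalty if the
# predicted verb turns out to be wrong.
```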

For question answering, we use a specially designed dataset that challenges humans: a trivia game called quiz bowl. These questions are written so that they can be interrupted by a player who knows more about the answer; that is, harder clues come at the start of the question and easier clues at the end. The content model produces guesses of what the answer could be, and the policy decides when to accept a guess.
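The guesser/buzzer interaction can be sketched with a toy example. The clue words, evidence weights, and the margin-based `should_buzz` rule are all invented for illustration; the project's actual models learn these from data.

```python
# Toy quiz-bowl sketch: a guesser scores candidate answers on the revealed
# prefix of the question; a policy buzzes when one guess is clearly ahead.

CLUE_WEIGHTS = {  # invented word -> answer evidence
    "composer": {"Beethoven": 1, "Mozart": 1},
    "deaf":     {"Beethoven": 3},
    "ninth":    {"Beethoven": 2},
}

def guess(words_so_far):
    """Score each candidate answer by accumulated clue evidence."""
    scores = {}
    for w in words_so_far:
        for answer, weight in CLUE_WEIGHTS.get(w, {}).items():
            scores[answer] = scores.get(answer, 0) + weight
    return scores

def should_buzz(scores, margin=2):
    """Buzz when the top guess leads the runner-up by a clear margin."""
    ranked = sorted(scores.values(), reverse=True)
    return len(ranked) > 0 and (len(ranked) == 1 or ranked[0] - ranked[1] >= margin)

question = ["this", "composer", "went", "deaf", "ninth"]
revealed = []
for word in question:
    revealed.append(word)
    scores = guess(revealed)
    if should_buzz(scores):
        break  # interrupt the question and answer now
answer = max(scores, key=scores.get)
```

Here the policy buzzes after "deaf", before the question is fully read, mirroring how a strong human player interrupts a pyramidal question once an early, hard clue disambiguates the answer.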

Quiz bowl is a fun game with excellent opportunities for outreach, but it is also related to core challenges in natural language processing: classification (sorting inputs and making predictions), discourse (using pragmatic clues to guess what will come next), and coreference resolution (identifying which entities are under discussion, even from oblique mentions).


Project Team

Jordan Boyd-Graber
Assistant Professor, Computer Science (Colorado)
Hal Daumé III
Associate Professor, Computer Science (Maryland)
Danny Bouman
Undergraduate student, Computer Science (BS 2014, Maryland)
Leonardo Claudino
Ph.D. student, Computer Science (Maryland)
Anupam Guha
Ph.D. student, Computer Science (Maryland)
Alvin Grissom II
Ph.D. student, Computer Science (Colorado)
He He
Ph.D. student, Computer Science (Maryland)
Stephanie Hwa
Undergraduate student, Computer Science (BS 2014, Maryland)
Mohit Iyyer
Ph.D. student, Computer Science (Maryland)
John Morgan
MS student, Computer Science (Maryland)
Khanh Nguyen
Ph.D. student, Computer Science (Maryland)
Pedro Rodriguez
Ph.D. student, Computer Science (Colorado)
Davis Yoshida
MS student, Computer Science (BS 2016, MS 2017, Colorado)


Publications (Selected)

Software

Datasets

Media

Workshops Organized by Project Members

Acknowledgments

This work is supported by the National Science Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the researchers and do not necessarily reflect the views of the National Science Foundation.