Jointly improving parsing and perception for natural language commands through human-robot dialog

Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Nick Walker, Yuqian Jiang, Harel Yedidsion, Justin Hart, Peter Stone, Raymond J. Mooney

Research output: Contribution to journal › Article › peer-review


Abstract

In this work, we present methods for using human-robot dialog to improve language understanding for a mobile robot agent. The agent parses natural language to underlying semantic meanings and uses robotic sensors to create multi-modal models of perceptual concepts like red and heavy. The agent can be used for showing navigation routes, delivering objects to people, and relocating objects from one location to another. We use dialog clarification questions both to understand commands and to generate additional parsing training data. The agent employs opportunistic active learning to select questions about how words relate to objects, improving its understanding of perceptual concepts. We evaluated this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent reduced the number of dialog questions it asked while receiving higher usability ratings. Additionally, we demonstrated the agent on a robotic platform, where it learned new perceptual concepts on the fly while completing a real-world task.
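Since only the abstract is available here, the following is a minimal Python sketch of the clarification loop the abstract describes: parse a command, ask a confirmation question when the parser is unsure, and keep every confirmed pair as induced training data for retraining the parser. The names `DialogAgent`, `Parse`, `handle`, and the `confirm_threshold` parameter are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the dialog-clarification loop from the abstract.
# Names and thresholds are illustrative, not the paper's actual code.

from dataclasses import dataclass, field


@dataclass
class Parse:
    """A candidate semantic parse with the parser's confidence in it."""
    meaning: str
    confidence: float


@dataclass
class DialogAgent:
    confirm_threshold: float = 0.9  # assumed: above this, act without asking
    training_pairs: list = field(default_factory=list)

    def parse(self, utterance: str) -> Parse:
        # Stand-in for a trained semantic parser; a real one would map the
        # utterance to a logical form such as deliver(coffee, alice).
        known = {"bring coffee to alice": ("deliver(coffee, alice)", 0.95)}
        meaning, conf = known.get(utterance.lower(), ("unknown()", 0.2))
        return Parse(meaning, conf)

    def handle(self, utterance: str, ask) -> str:
        """Parse a command, asking a clarification question when unsure.

        `ask` is a callable posing a yes/no confirmation to the user; every
        confirmed (utterance, meaning) pair becomes parser training data.
        """
        parse = self.parse(utterance)
        if parse.confidence < self.confirm_threshold:
            if not ask(f"Did you mean: {parse.meaning}?"):
                return "Sorry, could you rephrase your command?"
        # Confirmed parses are the induced training data the abstract
        # says the parser is retrained on after conversations.
        self.training_pairs.append((utterance, parse.meaning))
        return f"Executing: {parse.meaning}"


if __name__ == "__main__":
    agent = DialogAgent()
    # Simulated user who answers "yes" to every clarification question.
    print(agent.handle("bring coffee to alice", ask=lambda q: True))
```

Retraining on the accumulated `training_pairs` is what would let such an agent ask fewer clarification questions over time, consistent with the abstract's reported reduction in dialog questions after training on conversation-induced data.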

Original language: English
Pages (from-to): 327-374
Number of pages: 48
Journal: Journal of Artificial Intelligence Research
Volume: 67
State: Published - 1 Feb 2020
Externally published: Yes

ASJC Scopus subject areas

  • Artificial Intelligence

