TY - JOUR
T1 - Jointly improving parsing and perception for natural language commands through human-robot dialog
AU - Thomason, Jesse
AU - Padmakumar, Aishwarya
AU - Sinapov, Jivko
AU - Walker, Nick
AU - Jiang, Yuqian
AU - Yedidsion, Harel
AU - Hart, Justin
AU - Stone, Peter
AU - Mooney, Raymond J.
N1 - Publisher Copyright:
© 2020 AI Access Foundation. All rights reserved.
PY - 2020/2/1
Y1 - 2020/2/1
N2 - In this work, we present methods for using human-robot dialog to improve language understanding for a mobile robot agent. The agent parses natural language to underlying semantic meanings and uses robotic sensors to create multi-modal models of perceptual concepts like red and heavy. The agent can be used for showing navigation routes, delivering objects to people, and relocating objects from one location to another. We use dialog clarification questions both to understand commands and to generate additional parsing training data. The agent employs opportunistic active learning to select questions about how words relate to objects, improving its understanding of perceptual concepts. We evaluated this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent reduced the number of dialog questions it asked while receiving higher usability ratings. Additionally, we demonstrated the agent on a robotic platform, where it learned new perceptual concepts on the fly while completing a real-world task.
AB - In this work, we present methods for using human-robot dialog to improve language understanding for a mobile robot agent. The agent parses natural language to underlying semantic meanings and uses robotic sensors to create multi-modal models of perceptual concepts like red and heavy. The agent can be used for showing navigation routes, delivering objects to people, and relocating objects from one location to another. We use dialog clarification questions both to understand commands and to generate additional parsing training data. The agent employs opportunistic active learning to select questions about how words relate to objects, improving its understanding of perceptual concepts. We evaluated this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent reduced the number of dialog questions it asked while receiving higher usability ratings. Additionally, we demonstrated the agent on a robotic platform, where it learned new perceptual concepts on the fly while completing a real-world task.
UR - http://www.scopus.com/inward/record.url?scp=85092131934&partnerID=8YFLogxK
U2 - 10.1613/JAIR.1.11485
DO - 10.1613/JAIR.1.11485
M3 - Article
AN - SCOPUS:85092131934
SN - 1076-9757
VL - 67
SP - 327
EP - 374
JO - Journal of Artificial Intelligence Research
JF - Journal of Artificial Intelligence Research
ER -