Challenges to decoding the intention behind natural instruction

Raquel Torres Peralta, Tasneem Kaochar, Ian R. Fasel, Clayton T. Morrison, Thomas J. Walsh, Paul R. Cohen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

4 Scopus citations


Currently, most systems for human-robot teaching allow only one mode of teacher-student interaction (e.g., teaching by demonstration or feedback), and teaching episodes have to be carefully set up by an expert. To understand how we might integrate multiple, interleaved forms of human instruction into a robot learner, we performed a behavioral study in which 44 untrained humans were allowed to freely mix interaction modes to teach a simulated robot (secretly controlled by a human) a complex task. Analysis of transcripts showed that human teachers often give instructions that are nontrivial to interpret and not easily translated into a form usable by machine learning algorithms. In particular, humans often use implicit instructions, fail to clearly indicate the boundaries of procedures, and tightly interleave testing, feedback, and new instruction. In this paper, we detail these teaching patterns and discuss the challenges they pose to automatic teaching interpretation as well as to the machine-learning algorithms that must ultimately process these instructions. We highlight these challenges by demonstrating the difficulties faced by an initial automatic teacher-interpretation system.
Original language: American English
Title of host publication: 2011 RO-MAN
Number of pages: 6
ISBN (Electronic): 978-1-4577-1573-0, 978-1-4577-1572-3
ISBN (Print): 978-1-4577-1571-6
State: Published - Aug 2011

