Spatial references and perspective in natural language instructions for collaborative manipulation
August 2016 • Conference Paper
Shen Li (Carnegie Mellon University), Rosario Scalise (Carnegie Mellon University), Henny Admoni (Carnegie Mellon University), Stephanie Rosenthal, Siddhartha S. Srinivasa (Carnegie Mellon University)
This conference paper appears in the 25th IEEE International Symposium on Robot and Human Interactive Communication, August 2016.
As humans and robots collaborate on spatial tasks, they must communicate clearly about the objects they are referencing. Communication is clearer when language is unambiguous, which implies the use of spatial references and explicit perspectives. In this work, we contribute two studies of how people instruct a partner to identify and pick up objects on a table. We investigate spatial features and perspectives in human spatial references and compare word usage when instructing robots vs. instructing other humans. We then focus our analysis on the clarity of instructions with respect to perspective taking and spatial references. We find that only about 42% of instructions contain perspective-independent spatial references. There is a strong correlation between participants' accuracy in executing instructions and the perspectives in which the instructions are given, as well as between accuracy and the number of spatial relations required by the instruction. We conclude that sentence complexity (in terms of spatial relations and perspective taking) impacts understanding, and we provide suggestions for the automatic generation of spatial references.