
Stefanie Tellex Wins A NASA Early Career Faculty Award For Human-Robot Collaboration On Complex Tasks

2016 continues to be a significant year for Assistant Professor Stefanie Tellex of Brown University's Department of Computer Science (Brown CS), who has just won a NASA Early Career Faculty Award for a recent proposal ("Human-Robot Collaboration on Complex Tasks"). Earlier this year, she received a Sloan Fellowship, was listed in MIT Technology Review's 2016 Breakthrough Technologies, and was named one of four "Women Who Changed Science In 2015" by Wired UK. The Early Career Faculty Award recognizes outstanding faculty researchers early in their careers, challenging them to examine the theoretical feasibility of ideas and approaches that are critical to making science, space travel, and exploration more effective, affordable, and sustainable.

Stefanie's prior research has uniquely prepared her for the task of contributing to the advancement of space exploration. She first developed probabilistic models that enable a robot to infer actions in the physical world that correspond to natural language descriptions, then extended this work to enable robots to ask natural language questions that clarify ambiguous commands and eventually to ask for help. 

"A robotic collaborator," she explains, "must be an integrated system which is able to perceive the environment and perform a diverse set of actions to accomplish different tasks. We've seen fully autonomous systems that can assemble furniture, delivery library books, and even cook, and our approach focuses on planning in very large state spaces like these, but integrated into a communication and learning framework. Existing approaches suffer from problems in slow inference because they don't use state representations and conditional independence assumptions that exploit problem structure."

Tellex's solution is to design compact state representations and factored inference algorithms that enable efficient reasoning. This factored, hierarchical structure leads to information-gathering behavior, such as asking questions, in a framework that supports efficient incorporation of multimodal observations from low-level sensors, human language and gesture, and background knowledge. The aim of her proposal is to test the hypothesis that a system can become faster and more accurate at inferring human intentions, and can increase the number of robots a single astronaut supervises, by inferring a person's mental state from their language utterances and actively asking questions when confused. (For example, a robot could ask its operator whether she meant the Phillips screwdriver or the flat-head screwdriver.) The end goal is to achieve seamless human-robot cooperation on complex tasks, approaching the ease and accuracy of human-human collaboration.
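As a rough illustration of this confusion-driven question asking, the pattern can be sketched in a few lines: the robot scores each candidate interpretation of an utterance, and asks a clarifying question whenever no interpretation is confident enough. (This is a minimal, hypothetical sketch; the scoring function, the names, and the confidence threshold are illustrative stand-ins, not the proposal's actual models.)

```python
# Hypothetical sketch of confusion-driven clarification: all names,
# scores, and thresholds are illustrative, not taken from the proposal.

def intent_posterior(utterance, candidates):
    """Score each candidate intent by word overlap with the utterance,
    then normalize into a probability distribution. This toy scorer
    stands in for a probabilistic language-grounding model."""
    words = set(utterance.lower().split())
    scores = {c: 1 + len(words & set(c.lower().split())) for c in candidates}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def respond(utterance, candidates, threshold=0.55):
    """Act on the most likely intent, or ask a question when confused,
    i.e. when no single interpretation clears the confidence threshold."""
    dist = intent_posterior(utterance, candidates)
    if max(dist.values()) < threshold:
        options = " or the ".join(dist)
        return f"Did you mean the {options}?"
    return f"Fetching the {max(dist, key=dist.get)}."

tools = ["Phillips screwdriver", "flat-head screwdriver"]
print(respond("hand me the screwdriver", tools))            # ambiguous: asks
print(respond("hand me the flat-head screwdriver", tools))  # clear: acts
```

In the ambiguous case both candidates score equally, so the robot asks its question; in the second case the extra word tips the distribution past the threshold and the robot acts.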

"Robots that can use fluid language at multiple levels of abstraction," Stefanie says, "can flexibly respond to a person’s requests. The ultimate impact is a world where robots actively interpret a person’s instructions, asking questions when they're confused, and asking for help when they encounter a problem."