If interactive robots were able to pause during conversation and take a moment to gaze off into the distance as if pondering what the user was saying, research suggests this small change could make them seem less robotic.
Sean Andrist, a graduate researcher at the University of Wisconsin, studies ways researchers can improve how communicative characters, both digitally constructed virtual agents and physical robots, manage eye contact.
Specifically, Andrist’s research focuses on “gaze aversion,” or the moments when people glance away or look around during conversation.
Andrist has a particular interest in human-computer interaction and computer animation, so he began working at the intersection of the two topics, studying how to make computer agents behave more naturally and work with users more intuitively, said his co-advisor, Bilge Mutlu, a professor in the Computer Sciences Department.
To better apply gaze mechanisms to communicative characters, Andrist said he also studies the social science of how humans behave while communicating with one another.
In his most recent paper, Andrist outlined how speakers use these aversions in conversation: they signal to listeners that cognitive processing is occurring, creating the impression that deep thought or creativity goes into formulating their speech.
His goals for the research were to see whether robots’ gaze aversions could be perceived as intentional and meaningful by signaling a pause for contemplation, setting conversational intimacy levels or establishing that it is still their turn to speak. He manipulated the length, timing and frequency of gaze aversions to test these goals.
“It is one of those things where it’s really hard to nail down what the right thing to do is, but it’s really easy to know when something is wrong,” Andrist said. “For example, if someone is talking to you and they’re being sort of weird with the way they are looking at you, you get the sense that something’s off even if you can’t articulate why.”
Andrist presented the paper at the International Conference on Human-Robot Interaction in Germany earlier this month, where he was nominated for the best paper award and ranked among the top five out of 132 submissions.
An article written about the conference on NewScientist.com said researchers found giving robots a series of small behavioral cues can help them appear more human, which makes people feel more comfortable interacting with them.
The concept of shifting from robots that appear starkly nonhuman to those that are easy to relate to or work with is called “bridging the uncanny valley,” the article said. Researchers believe programming such devices with the small behaviors humans perform innately could go a long way toward people accepting robots into their daily lives.
“You fall into this valley of uncanniness when you can tell the robot or character is just not doing the right thing, even if you can’t quite explain why,” Andrist said.
Andrist recently received the Chateaubriand Research Fellowship, offered by the Embassy of France, to spend five months at a graduate school there furthering his studies on gaze, personality and motivation in order to create effective human-robot interaction in assistive settings.
These communicative characters could be applied in settings such as teaching, elderly care and assistance, or possibly therapeutic practices, Andrist said.
Ultimately, Andrist said he hopes to combine his findings on gaze aversion with other gaze behaviors into comprehensive programming that brings effective human-robot interaction into everyday settings.