Humans communicate through verbal and nonverbal cues; one such nonverbal cue is eye gaze. In a 2007 study, Hanna and Brennan found that participants were able to use their partner's eye gaze as an early cue to distinguish between two similar-looking targets. The main focus of my study is to answer whether this disambiguating benefit extends to human-robot interaction as well.
The robot I will be using is SARAH, which can move its body and displays a face with moving irises on a screen. SARAH will ask the participant to identify a target object, which may look similar to other objects in the scene. My experiment has a 3x2 design: SARAH will either keep her eyes still, shift her gaze toward the target, or shift both her gaze and her body toward the target, with the goal of helping the participant identify the correct object. Participants will also be told either that SARAH is acting autonomously or that she is being controlled by a human. After the experiment, participants will complete a survey in which they report their comfort and engagement levels and how natural SARAH's behavior seemed to them.
I hypothesize that when SARAH uses eye gaze to indicate the target object, participants will identify it faster, and that this more human-like behavior will lead to higher self-reported engagement and comfort levels. Furthermore, I expect participants to identify the target faster and report higher engagement and comfort when they believe a human is controlling SARAH, because they will place more trust in the robot's capabilities than when they believe she is acting autonomously.