Recognizing human emotions can be a useful skill for robots: emotion recognition can help robots understand our responses to their movements and actions. Human emotions can be recognized through facial expressions, and Facial Expression Recognition (FER) is a well-established research area; however, the majority of prior research is based on static datasets of images. With robots, the subject is often moving, the robot is moving, or both. The purpose of this research is to determine the impact of movement on facial expression recognition. We apply a pre-existing FER model, which achieves approximately 70.86% accuracy on a given collection of images. We experiment with three conditions: no motion by either the human or the robot, motion by one of the two (human or robot), and motion by both. We then measure the impact of these movements on FER accuracy. This research relates to Computer Vision, Machine Learning, and Human-Robot Interaction.
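The sketch below illustrates one way the per-frame evaluation described above might be set up: faces are detected in a live camera stream (where motion by the subject or robot naturally appears) and each detected face is scored against a known ground-truth expression. It assumes OpenCV for capture and face detection; the abstract does not name the pre-existing FER model, so `classify_expression` is a hypothetical placeholder for it, and the ground-truth label and frame count are illustrative choices.

```python
# Minimal sketch: measure FER accuracy over frames from a moving camera/subject.
# `classify_expression` is a hypothetical stand-in for the unnamed pre-trained
# FER model from the abstract; replace it with the real model's inference call.
import cv2


def classify_expression(face_img):
    """Placeholder for the pre-trained FER model's prediction.

    Should return an emotion label such as "happy" or "neutral"
    for a cropped grayscale face image.
    """
    return "neutral"  # dummy output; swap in the actual model here


# Standard OpenCV Haar cascade face detector shipped with the library.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)   # camera stream; motion comes from subject/robot
true_label = "happy"        # ground-truth expression held by the subject
correct, total = 0, 0

while total < 100:          # score up to 100 detected faces per condition
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        pred = classify_expression(gray[y:y + h, x:x + w])
        correct += int(pred == true_label)
        total += 1

cap.release()
if total:
    print(f"FER accuracy under this motion condition: {correct / total:.2%}")
```

Running the same loop once per condition (both still, one in motion, both in motion) yields the accuracy comparison the study targets.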