Facial emotion recognition (FER) is a critical component of human-computer interaction (HCI), enabling robots to interpret and respond to human emotions. Deep learning models such as DeepFace have significantly improved FER accuracy; however, concerns about bias across demographic groups remain. This research investigates whether FER models exhibit performance disparities based on gender and ethnicity, disparities that can produce misclassifications and raise ethical concerns. Using OpenCV for face detection and DeepFace for emotion analysis, the study evaluates how well these models perform across diverse demographic groups and examines the challenges of real-time emotion detection, including variability in facial expressions, dataset bias, and computational constraints. As AI-driven FER becomes more integrated into assistive robotics, surveillance, and mental health applications, ensuring fairness and accuracy across populations is crucial. By identifying potential biases and discussing strategies for improving model fairness, this work aims to make FER systems more reliable, ethical, and inclusive for real-world applications.
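The evaluation described above — comparing FER performance across demographic groups — can be sketched as a simple per-group accuracy check. In the study's pipeline, labels would come from DeepFace's `analyze` call on OpenCV-detected faces; here the group names, emotion labels, and predictions are hypothetical placeholders, not results from this research.

```python
# Sketch of a per-group fairness check for FER predictions.
# All group labels and prediction records below are hypothetical
# illustrations; in practice they would come from running DeepFace
# on a demographically labeled test set.

from collections import defaultdict

def per_group_accuracy(records):
    """Compute emotion-recognition accuracy separately per demographic
    group from (group, true_emotion, predicted_emotion) triples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, true_label, pred_label in records:
        total[group] += 1
        if pred_label == true_label:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

def accuracy_disparity(acc_by_group):
    """Max minus min group accuracy: a simple bias indicator (0 = parity)."""
    values = list(acc_by_group.values())
    return max(values) - min(values)

# Hypothetical prediction records for two placeholder groups.
records = [
    ("group_a", "happy", "happy"),
    ("group_a", "sad", "sad"),
    ("group_a", "angry", "happy"),
    ("group_b", "happy", "happy"),
    ("group_b", "sad", "angry"),
    ("group_b", "angry", "sad"),
]

acc = per_group_accuracy(records)   # accuracy per group
gap = accuracy_disparity(acc)       # disparity between best and worst group
```

A large `gap` would flag exactly the kind of demographic performance disparity the study investigates; richer fairness metrics (e.g., per-emotion confusion rates by group) follow the same pattern.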