Queer college students experience mental health symptoms at higher rates than their heterosexual and cisgender counterparts; yet many queer students' mental health needs go unmet when they do not perceive that counselors have sufficient queer-specific training. With major advancements in AI in recent years, therapy chatbots have emerged as a viable alternative to traditional counseling. However, biases against LGBTQ+ users have been detected within these technologies, which may cause ineffective and harmful content to be generated for queer users. This research examines the quality of interactions with a popular app-based therapy chatbot, Wysa, from the perspective of queer college students. The students sought support from Wysa across a series of hypothetical mental health and queer-specific scenarios, and reported their views on the appropriateness, usefulness, and potential impacts of the chatbot interactions. These findings are supplemented by interviews with two campus professionals, which provide insight into the needs of queer students, the availability of mental health services, and the suitability of conversational AI for queer mental health concerns. The key takeaway is that while Wysa is a beneficial tool for managing stress, anxiety, and depression, it is unfit to respond to queer-related stressors: Wysa blatantly ignores queer identity. Its generic and incompetent responses frustrate and invalidate users seeking guidance on queer issues, which may feed feelings of invisibility, isolation, and internalized stigma. Therapy chatbots must be ethically trained on LGBTQ+ struggles to constructively support queer college students and prevent harm.