This abstract is submitted for the TI Steinmetz Day Special Session. The project combines the disciplines of cognitive neuroscience and human-AI interaction. The goal of this research is to understand how people attribute authorship to written text: specifically, which features lead them to judge writing as AI-generated or human-written.
We are interested in how humans evaluate language in the age of Large Language Models (LLMs): what decision-making processes are involved, and what factors and biases affect our judgment? As a first step toward answering these questions, we conducted a mixed-methods user survey. For the survey, we collected text samples generated by ChatGPT alongside human-written texts drawn from blogs, journals, news and opinion articles, Reddit posts, and academic essays. Participants read each text sample, identified whether they believed it was human-written or AI-generated, rated their confidence, and briefly explained their reasoning. To evaluate which factors influence participants' responses and confidence, the texts varied in narrative perspective (first vs. third person), writing style, factual accuracy, and topic. The results of this study reveal how people recognize LLM-generated content in everyday situations and can help guide the development of policies around AI transparency.