The rise of Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs), colloquially referred to as artificial intelligence (AI) platforms, has implications for education and student learning. Currently, the most popular of these platforms is ChatGPT, which utilizes an extensive training dataset to generate human-like responses to almost any kind of input. However, the way students use these models in educational settings is largely unknown, and experts have warned that over-reliance may impair the development of important skills, including critical thinking and source evaluation. To address this, the current study assessed student use of AI platforms and a potential explanation for why students may rely on these platforms over other sources (e.g., academic websites or human experts). Participants first answered questions on their use of AI platforms in general and specifically when completing course assignments. This was followed by a credibility evaluation task, in which participants saw explanations of various phenomena (e.g., why humans hiccup) attributed to different sources and rated their perceived credibility and validity. Explanations were attributed to: (1) ChatGPT, (2) an online academic website, (3) a human expert, or (4) no source (unlabeled). Based on prior research, it is predicted that there will be a positive relationship between use of AI platforms to complete assignments and credibility ratings of ChatGPT explanations. If this result is also supported by answers to the initial free-response questions, it would implicate perceived trust as a key factor determining student use of AI platforms.