Neuropsychological assessments that have traditionally been administered on paper, such as the Stroop test, are increasingly being administered in digital formats on computers, touch-screen tablets, and smartphones. Digitizing these assessments carries many benefits that appeal to neuropsychologists in the clinic, and especially to researchers, for whom electronic administration may make it possible to gather data on a larger scale, both numerically and geographically, with presumably greater reliability and scoring precision across administrations. Electronic administration can also make tests more accessible, enabling easier data collection from a larger and more diverse population than paper tests allow. There are, however, a number of limitations. Different administration platforms, such as computers and touch screens, introduce variability in the consistency of assessment. Remote, unattended internet-based data collection can also create problems with self-report. Furthermore, some conversions to digital versions of a task introduce changes to the stimuli and procedure that may seem benign but could elicit different cognitive responses, potentially altering the presumed construct validity of the test. The literature has shown somewhat mixed but mostly positive outcomes for the validity of many computerized tests (Zourluogo et al., 2015; Weintraub et al., 2014). This study examines the comparability of electronic forms of the Stroop task, a common measure of executive function, to a traditional, previously validated 40-item Kaplan version of the Stroop task (Kang et al., 2012). The electronic Stroop tests examined in this study are two touch-screen-based apps: BrainBaseline for the iPad and EncephalApp for smartphones. Data from 46 participants (college students and older adults), drawn from prospective as well as convenience samples, were used to examine concurrent validity. Correlations between paper and electronic versions were not significant within individual samples, but BrainBaseline metrics correlated significantly with the paper Stroop (r = .54, p = .008). However, this indicates that only 29% of the variance (r²) is shared between the paper and electronic forms of the Stroop task. A measure of computer familiarity seemed to play a role, as did the small sample sizes, but it is also likely that differences in the way stimuli are presented (block form, with all interference trials at once, versus randomly ordered, mixed congruent and incongruent trials) mean that somewhat different components of executive function are being tapped. Additional research is needed, and validity must be confirmed, before electronic tests can be used interchangeably with paper versions of tests.
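As a quick check on the shared-variance figure above, squaring the reported correlation gives the coefficient of determination; this is the standard relationship, not an additional result from the study:

\[
r^2 = (0.54)^2 \approx 0.29 \quad \text{(about 29\% of variance shared)}
\]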