
In Annals of Family Medicine

Context: Qualitative research - crucial for understanding human behavior - remains underutilized, in part due to the time and cost of annotating qualitative data (coding). Artificial intelligence (AI) has been suggested as a means to reduce those burdens. Older AI techniques, such as Latent Semantic Indexing / Latent Dirichlet Allocation (LSI/LDA), have fallen short, in part because qualitative data are rife with idiom, non-standard expressions, and jargon.

Objective: To develop an AI platform using updated techniques to augment qualitative data coding.

Study Design and Analysis: We previously completed a traditional qualitative analysis of a large dataset, with 11 qualitative categories and 72 subcategories (categories), and a final Cohen's kappa ≥ 0.65 as a measure of human inter-coder reliability (ICR) after coding. We built our Automated Qualitative Assistant (AQUA) using a semi-classical approach, replacing LSI/LDA with a graph-theoretic topic extraction and clustering method. AQUA was given the previously identified qualitative categories and tasked with coding free-text data into those categories. Item coding was scored using cosine similarity.

Population Studied: Pennsylvanian adults.

Instrument: Free-text responses to five open-ended questions related to the COVID-19 pandemic (e.g., "What worries you most about the COVID-19 pandemic?").

Outcome Measures: AQUA's coding was compared to human coding using Cohen's kappa, both on all categories in aggregate and on category clusters, to identify category groups amenable to AQUA support. AQUA's time to complete coding was compared to the time taken by the human coding team.

Dataset: Answers to five unlimited-length free-text survey questions from 538 respondents.

Results: AQUA's kappa across all categories was low (kappa ≈ 0.45), reflecting the challenge of automated analysis of diverse language. However, for several 3-category combinations (with less linguistic diversity), AQUA performed comparably to human coders, with an ICR kappa range of 0.62 to 0.72 depending on the test-train split. AQUA's analysis (including human interpretation) took approximately 5 hours, compared to approximately 30 person-hours for traditional coding.

Conclusions: AQUA enables qualitative researchers to identify categories amenable to automated coding, and to rapidly conduct that coding on the entirety of very large datasets. This saves time and money, and avoids the limitations inherent in restricting qualitative analysis to small samples of a given dataset.
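The abstract states that item coding was scored with cosine similarity against the previously identified categories, but does not describe the text representation AQUA uses. The sketch below is an assumption, not the study's implementation: it uses a simple bag-of-words vectorization, and the function names (`cosine_similarity`, `assign_category`) and example category descriptions are illustrative only.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of two texts under a bag-of-words representation."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

def assign_category(response, category_descriptions):
    """Assign a free-text response to the category whose description is most similar."""
    return max(category_descriptions,
               key=lambda c: cosine_similarity(response, category_descriptions[c]))

# Hypothetical categories; the study's 11 categories are not listed in the abstract.
categories = {"worry": "worried anxious fear", "hope": "hopeful optimistic"}
assign_category("I am worried and anxious", categories)  # → "worry"
```

In practice a richer representation (e.g., TF-IDF or learned embeddings) would be used before taking cosine similarity; the bag-of-words version above only illustrates the scoring step.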
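Agreement between AQUA and the human coders is reported as Cohen's kappa, which corrects observed agreement for the agreement expected by chance. As a reference, the standard two-rater formula can be computed as below; this is the textbook calculation, not code from the study.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters who each labeled the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over labels of the product of each rater's marginal rates.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum((counts_a[lab] / n) * (counts_b[lab] / n)
                   for lab in set(coder_a) | set(coder_b))
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)
```

A kappa of 1.0 means perfect agreement and 0.0 means agreement no better than chance, which is why the study's threshold of ≥ 0.65 (and AQUA's 0.62 to 0.72 on low-diversity category combinations) indicates substantial inter-coder reliability.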

Lennon Robert, Calo William, Miller Erin, Zgierska Aleksandra, Van Scoy Lauren, Fraleigh Robert