medRxiv: the preprint server for health sciences
The success of artificial intelligence in clinical environments relies upon the diversity and availability of training data. In some cases, social media data may be used to counterbalance the limited amount of accessible, well-curated clinical data, but this possibility remains largely unexplored. In this study, we mined YouTube to collect voice data from individuals with self-declared positive COVID-19 tests during time periods in which Omicron was the predominant variant [1,2,3], while also sampling non-Omicron COVID-19 variants, other upper respiratory infections (URIs), and healthy subjects. The resulting dataset was used to train a DenseNet model to detect the Omicron variant from voice changes. Our model achieved 0.85/0.80 sensitivity/specificity in separating Omicron samples from healthy samples and 0.76/0.70 sensitivity/specificity in separating Omicron samples from symptomatic non-COVID samples. In comparison with past studies, which used scripted voice samples, we showed that leveraging the intra-sample variance inherent to unscripted speech enhanced generalization. Our work introduced novel design paradigms for audio-based diagnostic tools and established the potential of social media data to train digital diagnostic models suitable for real-world deployment.
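The sensitivity/specificity figures reported above are standard confusion-matrix metrics for a binary classifier. As a minimal sketch (not the study's code), the following shows how such figures are computed, assuming a label convention of 1 = Omicron-positive and 0 = negative; the toy labels and predictions are illustrative only.

```python
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels (1 = positive).

    Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Toy example, not data from the study:
truth = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(truth, preds)
print(sens, spec)  # 0.75 0.75
```

Reporting both metrics matters here because the two evaluation settings (Omicron vs. healthy, Omicron vs. symptomatic non-COVID) trade off false positives and false negatives differently.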
James T. Anibal, Adam J. Landa, Hang T. Nguyen, Alec K. Peltekian, Andrew D. Shin, Miranda J. Song, Anna S. Christou, Lindsey A. Hazen, Jocelyne Rivera, Robert A. Morhard, Ulas Bagci, Ming Li, David A. Clifton, Bradford J. Wood
2022-Oct-06