In Journal of Neural Engineering (h5-index: 52.0)
Subjective tinnitus is an auditory phantom perceptual disorder without an objective biomarker. Fast and efficient diagnostic tools would advance clinical practice by detecting or confirming the condition, tracking changes in severity, and monitoring treatment response. Motivated by evidence of subtle anatomical, morphological, or functional information in magnetic resonance imaging (MRI) of the brain, we examine data-driven machine learning methods for joint tinnitus classification (tinnitus or no tinnitus) and tinnitus severity prediction. We propose a deep multi-task multimodal framework for these joint tasks using structural MRI (sMRI) data. To leverage cross-modal information in multimodal neuroimaging data, we integrate two modalities of three-dimensional sMRI: T1-weighted (T1w) and T2-weighted (T2w) images. To explore the key components of the MR images that drive task performance, we segment both T1w and T2w images into three tissue components, cerebrospinal fluid (CSF), grey matter (GM), and white matter (WM), and evaluate the performance of each segmented image. Results demonstrate that our multimodal framework capitalizes on the information across both modalities (T1w and T2w) for the joint task of tinnitus classification and severity prediction. Our model outperforms existing learning-based and conventional methods in terms of accuracy, sensitivity, specificity, and negative predictive value.
Lin Chieh-Te, Ghosh Sanjay, Hinkley Leighton B, Dale Corby L, Souza Ana C S, Sabes Jennifer H, Hess Christopher P, Adams Meredith E, Cheung Steven W, Nagarajan Srikantan S
2022-Dec-13
Deep learning, Neuroimaging biomarker, Structural MR images, Tinnitus classification
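
As a rough illustration of the kind of multi-task, multimodal architecture the abstract describes (separate encoders for T1w and T2w volumes, fused features, and two output heads for classification and severity), the following is a minimal sketch assuming a PyTorch implementation. The module names, channel widths, fusion by concatenation, and loss weighting are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a dual-branch, dual-head 3D CNN for joint tinnitus
# classification and severity prediction. All architectural details here are
# assumptions for illustration; they are not the paper's published model.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3D convolution -> batch norm -> ReLU -> downsample, for volumetric sMRI."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool3d(2),
    )


class MultiTaskMultimodalNet(nn.Module):
    """One encoder per modality (T1w, T2w); concatenated features feed a shared
    trunk with two heads: tinnitus classification and severity regression."""

    def __init__(self, feat: int = 32):
        super().__init__()
        self.enc_t1 = nn.Sequential(conv_block(1, feat), conv_block(feat, 2 * feat))
        self.enc_t2 = nn.Sequential(conv_block(1, feat), conv_block(feat, 2 * feat))
        self.pool = nn.AdaptiveAvgPool3d(1)          # global pooling over the volume
        self.shared = nn.Sequential(nn.Linear(4 * feat, 64), nn.ReLU(inplace=True))
        self.cls_head = nn.Linear(64, 2)             # tinnitus vs. no tinnitus
        self.sev_head = nn.Linear(64, 1)             # severity score (regression)

    def forward(self, t1w: torch.Tensor, t2w: torch.Tensor):
        f1 = self.pool(self.enc_t1(t1w)).flatten(1)
        f2 = self.pool(self.enc_t2(t2w)).flatten(1)
        z = self.shared(torch.cat([f1, f2], dim=1))  # cross-modal fusion
        return self.cls_head(z), self.sev_head(z).squeeze(1)


# Joint objective: cross-entropy for classification plus a weighted regression
# loss for severity. The 0.5 weight and dummy tensor shapes are illustrative.
model = MultiTaskMultimodalNet()
t1w = torch.randn(2, 1, 64, 64, 64)   # dummy T1w volumes (batch, channel, D, H, W)
t2w = torch.randn(2, 1, 64, 64, 64)   # dummy T2w volumes
labels = torch.tensor([0, 1])
severity = torch.tensor([0.0, 14.0])
logits, sev_pred = model(t1w, t2w)
loss = nn.CrossEntropyLoss()(logits, labels) + 0.5 * nn.MSELoss()(sev_pred, severity)
loss.backward()
```

The same dual-input pattern would apply to segmented inputs: feeding CSF-, GM-, or WM-masked versions of the T1w and T2w volumes through the same network is one way to evaluate which tissue component drives performance, as the abstract's segmentation analysis suggests.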