STUDY OBJECTIVES : Previous acoustic analyses of isolated events and snoring suggest a correlation between individual acoustic features and the site of individual collapse events. In this study, we hypothesised that multi-parameter evaluation of snore sounds during natural sleep would provide a robust prediction of the predominant site of airway collapse.
METHODS : The audio signals of 58 OSA patients were recorded simultaneously with full-night polysomnography. The site of collapse was determined by manual analysis of the shape of the airflow signal during hypopnoea events, and the corresponding audio segments containing snoring were manually extracted and processed. Machine learning algorithms were developed to automatically annotate the site of collapse of each hypopnoea event into one of three classes (lateral wall, palate, and tongue base). The predominant site of collapse for a sleep period was determined from the individual hypopnoea annotations and compared with the manually determined annotations. This was a retrospective study that used cross-validation to estimate performance.
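The aggregation step described above, deriving a single predominant site from many per-event annotations, can be sketched as a simple majority vote. This is an illustrative sketch only: the paper does not specify the aggregation rule in this abstract, and the function name and event labels below are hypothetical.

```python
from collections import Counter

# The three site-of-collapse classes named in the study.
SITES = ("lateral_wall", "palate", "tongue_base")

def predominant_site(event_labels):
    """Return the most frequent site label among a patient's
    hypopnoea-event annotations (hypothetical majority-vote rule)."""
    if not event_labels:
        raise ValueError("no annotated events for this patient")
    # Counter.most_common(1) yields the (label, count) pair with the
    # highest count; ties break by first occurrence.
    return Counter(event_labels).most_common(1)[0][0]

# Example: five annotated hypopnoea events for one patient.
events = ["palate", "tongue_base", "palate", "lateral_wall", "palate"]
print(predominant_site(events))  # -> palate
```

The per-patient prediction produced this way would then be compared against the manually determined predominant site, as the study describes.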
RESULTS : Cluster analysis showed that the data fit well into two clusters, with a mean silhouette coefficient of 0.79 and an accuracy of 68% for classifying tongue/non-tongue collapse. A classification model using linear discriminants achieved an overall accuracy of 81% for discriminating tongue/non-tongue predominant site of collapse, and an accuracy of 64% across all three site-of-collapse classes.
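The two analyses reported here, an unsupervised two-cluster fit scored by the mean silhouette coefficient and a supervised linear discriminant classifier evaluated with cross-validation, can be sketched with scikit-learn. The acoustic features themselves are not given in this abstract, so the synthetic two-dimensional data below are a placeholder standing in for tongue vs. non-tongue snore features.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import silhouette_score
from sklearn.model_selection import cross_val_score

# Synthetic stand-in features: two separable groups of snore events.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)),    # "non-tongue" events
               rng.normal(3.0, 0.5, (50, 2))])   # "tongue" events
y = np.array([0] * 50 + [1] * 50)

# Unsupervised: fit two clusters and compute the mean silhouette
# coefficient over all samples (ranges from -1 to 1).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sil = silhouette_score(X, labels)

# Supervised: linear discriminant classifier with 5-fold cross-validation,
# mirroring the abstract's cross-validated performance estimate.
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"silhouette={sil:.2f}, cv accuracy={acc:.2f}")
```

On well-separated synthetic data both scores are high; the study's reported values (0.79 silhouette, 64-81% accuracy) reflect real acoustic features, not this toy setup.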
CONCLUSIONS : Our results reveal that the snore signal during hypopnoea can provide information regarding the predominant site of collapse in the upper airway. Therefore, the audio signal recorded during sleep could potentially be used as a new tool for identifying the predominant site of collapse and consequently improving treatment selection and outcomes.
Arun Sebastian, Peter A. Cistulli, Gary Cohen, Philip de Chazal
airflow signal, hypopnoea, machine learning, obstructive sleep apnoea, predominant site of collapse, snore recording