IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 11, pp. 3258-3267, Nov. 2020
Objective: Birth asphyxia is one of the leading causes of neonatal death. A
key to survival is immediate and continuous high-quality newborn
resuscitation. A dataset of signals recorded during newborn resuscitation,
including videos, has been collected in Haydom, Tanzania, and the aim is to
analyze the treatment and its effect on the newborn outcome. An important step
is to generate timelines of relevant resuscitation activities, including
ventilation, stimulation, suction, etc., during the resuscitation episodes.
Methods: We propose a two-step deep neural network system, ORAA-net, which
utilizes low-quality video recordings of resuscitation episodes to perform
activity recognition during newborn resuscitation. The first step detects and
tracks relevant objects using convolutional neural networks (CNNs) and
post-processing, and the second step analyzes the activity regions proposed in
step 1 with 3D CNNs to recognize activities. Results: The system recognized the
activities newborn uncovered, stimulation, ventilation, and suction with a mean
precision of 77.67 %, a mean recall of 77.64 %, and a mean accuracy of 92.40 %.
Moreover, the accuracy of the estimated number of Health Care Providers (HCPs)
present during the resuscitation episodes was 68.32 %. Conclusion: The results
indicate that the proposed CNN-based two-step ORAA-net could be used for object
detection and activity recognition in noisy low-quality newborn resuscitation
videos. Significance: A thorough analysis of the effect the different
resuscitation activities have on the newborn outcome could potentially allow us
to optimize treatment guidelines, training, debriefing, and local quality
improvement in newborn resuscitation.
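The two-step data flow described in Methods could be sketched as follows. This is a minimal illustrative stub, not the authors' implementation: all function names, shapes, and return values are hypothetical placeholders standing in for the trained 2D detection/tracking CNNs and the 3D activity-recognition CNNs.

```python
# Hypothetical sketch of the two-step ORAA-net pipeline; names and shapes
# are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import List

@dataclass
class Region:
    label: str          # proposed activity region label (placeholder)
    frames: List[list]  # cropped spatio-temporal volume (T x H x W), stubbed

def detect_and_track(video: List[list]) -> List[Region]:
    """Step 1: a 2D CNN object detector plus post-processing would propose
    and track relevant activity regions. Stub: one region per video."""
    return [Region(label="proposal", frames=video)]

ACTIVITIES = ["newborn uncovered", "stimulation", "ventilation", "suction"]

def recognize_activity(region: Region) -> str:
    """Step 2: a 3D CNN would classify the cropped spatio-temporal volume
    into one of the resuscitation activities. Stub: fixed answer."""
    return "ventilation"

def oraa_net(video: List[list]) -> List[str]:
    """End to end: detection/tracking, then per-region activity recognition,
    yielding entries for an activity timeline."""
    return [recognize_activity(r) for r in detect_and_track(video)]

# Example: a fake 8-frame clip of 4x4 "pixels"
clip = [[[0] * 4 for _ in range(4)] for _ in range(8)]
timeline = oraa_net(clip)
```

In the real system, each recognized activity would be stamped with its frame interval to build the per-episode timelines the abstract describes.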
Øyvind Meinich-Bache, Simon Lennart Austnes, Kjersti Engan, Ivar Austvoll, Trygve Eftestøl, Helge Myklebust, Simeon Kusulla, Hussein Kidanto, Hege Ersdal