
In Ergonomics

Exposure to high and/or repetitive force exertions can lead to musculoskeletal injuries. However, measuring worker force exertion levels is challenging, and existing techniques can be intrusive, interfere with the human-machine interface, and/or be limited by subjectivity. In this work, computer vision techniques are developed to detect force exertions using facial videos and a wearable photoplethysmogram (PPG). Eighteen participants (19-24 years) performed isometric grip exertions at varying levels of maximum voluntary contraction (MVC). Novel features that predict force were identified and extracted from the video and PPG data. Two experiments, with two (High/Low) and three (0%MVC/50%MVC/100%MVC) labels, were performed to classify exertions. The deep neural network classifier performed best, with 96% and 87% accuracy for two- and three-level classification, respectively. The approach was robust to leave-subjects-out cross-validation (86% accuracy when three subjects were left out) and robust to noise (e.g., 89% accuracy when classifying talking activities as low force exertions).

Practitioner Summary: Forceful exertions are contributing factors to musculoskeletal injuries, yet they remain difficult to measure in work environments. This paper presents an approach to estimating force exertion levels that is less distracting to workers, easier for practitioners to implement, and could potentially be used in a wide variety of workplaces.
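A minimal sketch of the evaluation setup the abstract describes: a small neural-network classifier trained on features extracted from facial video and PPG, scored with leave-subjects-out cross-validation so the model is never tested on a participant it trained on. This is not the authors' code; the feature arrays, label values, and network size below are hypothetical placeholders, and scikit-learn is assumed purely for illustration.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples, n_features, n_subjects = 540, 32, 18  # 18 participants, as in the study

# Hypothetical stand-ins for the extracted facial-video + PPG features.
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 3, size=n_samples)              # 0%/50%/100% MVC labels
subjects = rng.integers(0, n_subjects, size=n_samples)  # participant ID per sample

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)

# Each fold holds out every sample from one participant, approximating the
# leave-subjects-out robustness check reported in the abstract.
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"mean leave-one-subject-out accuracy: {scores.mean():.2f}")
```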

Hamed Asadi, Guoyang Zhou, Jae Joong Lee, Vaneet Aggarwal, Denny Yu

2020-Mar-22

Computer Vision, Facial Expressions, High Force Exertions, Machine Learning