
In Pattern Recognition Letters

As wearing face masks has become an embedded practice due to the COVID-19 pandemic, facial expression recognition (FER) that takes face masks into account is now a problem that needs to be solved. In this paper, we propose a face parsing and vision-Transformer-based method to improve the accuracy of face-mask-aware FER. First, to more precisely distinguish the unobstructed facial region from the parts of the face covered by a mask, we retrain a face-mask-aware face parsing model on an existing face parsing dataset that is automatically relabeled with face masks and the corresponding pixel labels. Second, we propose a vision-Transformer-based FER classifier with a cross-attention mechanism, capable of taking both occluded and non-occluded facial regions into account and reweighting the two parts automatically to achieve the best facial expression recognition performance. The proposed method outperforms existing state-of-the-art face-mask-aware FER methods, as well as other occlusion-aware FER methods, on two datasets containing three kinds of emotions (M-LFW-FER and M-KDDI-FER) and two datasets containing seven kinds of emotions (M-FER-2013 and M-CK+).
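The core idea of the classifier is cross attention between the two region streams: tokens from the occluded region attend to tokens from the non-occluded region (and vice versa), and the two resulting context features are reweighted before classification. The following is a minimal NumPy sketch of that idea, not the paper's implementation; the token counts, the `cross_attention` helper, and the fixed gate `alpha` (which the paper learns automatically) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product attention: queries come from one region,
    # keys/values from the other
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

rng = np.random.default_rng(0)
d = 16                                    # token embedding dimension (illustrative)
occluded = rng.standard_normal((4, d))    # tokens from the mask-covered region
visible = rng.standard_normal((6, d))     # tokens from the unobstructed region

# each stream attends to the other
occ_ctx = cross_attention(occluded, visible, visible)
vis_ctx = cross_attention(visible, occluded, occluded)

# reweight the two streams; the paper learns this weighting, here it is fixed
alpha = 0.7
fused = alpha * occ_ctx.mean(axis=0) + (1 - alpha) * vis_ctx.mean(axis=0)
print(fused.shape)  # (16,) — a single fused feature for the FER head
```

In the actual model the fused feature would feed a classification head over the emotion categories; the sketch stops at the fused representation.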

Yang Bo, Wu Jianming, Ikeda Kazushi, Hattori Gen, Sugano Masaru, Iwasawa Yusuke, Matsuo Yutaka

December 2022

COVID-19, Deep learning, Face mask, Face parsing, Facial expression recognition, Vision transformer