In IEEE Transactions on Biomedical Engineering
Deep learning is widely used to decode electroencephalogram (EEG) signals, yet few studies specifically examine how to explain EEG-based deep learning models. In this paper, we review existing work on explaining EEG-based models and find that current methods fall short because of the non-stationary nature, high inter-subject variability, and dependency of EEG data. These characteristics require an explanation to incorporate both instance-level saliency identification and the context information of EEG data. Recently, mask perturbation has been proposed to explain deep learning models. Inspired by mask perturbation, we propose a new context-aware perturbation method to address these concerns. Our method not only extends the scope of explanation to the instance level but also captures representative context information when estimating the saliency map. We further point out another role of context information in explaining EEG-based models: it can help suppress artifacts in EEG-based deep learning models. In practice, some users may prefer a simpler explanation that highlights only a few salient features; to improve usability, we therefore propose an optional area-limitation strategy that restricts the highlighted region. In the experiments, we select three representative EEG-based models and evaluate them on the emotional EEG dataset DEAP. The results support the advantages of our method.
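To illustrate the general idea behind perturbation-based saliency that the abstract builds on, the sketch below shows a generic occlusion-style variant: windows of the input signal are replaced with a baseline, and the drop in the model's score marks how salient each window is. This is a minimal, hedged illustration of the family of methods, not the paper's context-aware algorithm; the toy `model`, `window` size, and zero baseline are all assumptions made for the example.

```python
import numpy as np

def perturbation_saliency(model, x, baseline=None, window=16):
    """Occlusion-style perturbation saliency for a 1-D signal.

    model    -- callable mapping a signal (np.ndarray) to a scalar score
    x        -- input signal, shape (T,)
    baseline -- replacement values for perturbed samples (default: zeros)
    window   -- number of consecutive samples perturbed at a time
    """
    if baseline is None:
        baseline = np.zeros_like(x)
    base_score = model(x)
    saliency = np.zeros(len(x))
    for start in range(0, len(x), window):
        mask = np.ones(len(x))
        mask[start:start + window] = 0.0            # zero out this window
        perturbed = mask * x + (1.0 - mask) * baseline
        # a larger score drop means the masked window was more salient
        saliency[start:start + window] = base_score - model(perturbed)
    return saliency

# Toy "model" (an assumption for the demo): responds only to samples 32..63.
weights = np.zeros(128)
weights[32:64] = 1.0
model = lambda s: float(s @ weights)

x = np.ones(128)
sal = perturbation_saliency(model, x, window=16)
# Windows inside 32..63 get positive saliency; the rest get zero.
```

Mask-perturbation explanation methods refine this idea by optimizing a continuous mask rather than sliding a fixed window; the paper's contribution, per the abstract, is to make such perturbations context-aware for EEG.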
Wang Hanqi, Zhu Xiaoguang, Chen Tao, Li Chengfang, Song Liang
2022-Oct-31