Environmental Pollution (Barking, Essex: 1987)
Monitoring PM2.5 across a range of scales is essential for understanding and counteracting air pollution. Remotely monitoring PM2.5 with satellite-based data would greatly advance this effort, but current machine learning methods lack the necessary interpretability and predictive accuracy. This study details the development of a new Spatial-Temporal Interpretable Deep Learning Model (SIDLM) to improve the interpretability and predictive accuracy of satellite-based PM2.5 measurements. In contrast to traditional deep learning models, the SIDLM is both "wide" and "deep." We comprehensively evaluated the proposed model in China using different input data (top-of-atmosphere (TOA) measurement-based and aerosol optical depth (AOD)-based, with or without meteorological data) and different spatial resolutions (10 km, 3 km, and 250 m). TOA-based SIDLM PM2.5 achieved the best predictive accuracy in China, with root-mean-square errors (RMSE) of 15.30 and 15.96 μg/m³ and R² values of 0.70 and 0.66 for PM2.5 predictions at 10 km and 3 km spatial resolutions, respectively. Additionally, we tested the SIDLM in PM2.5 retrievals at a 250 m spatial resolution over Beijing, China (RMSE = 16.01 μg/m³, R² = 0.62). Furthermore, the SIDLM demonstrated higher accuracy than five machine learning inversion methods and also outperformed them in feature extraction and the interpretability of its inversion results. In particular, modeling results indicated the strong influence of the Tongzhou district on the principal PM2.5 levels in the Beijing urban area. SIDLM-extracted temporal characteristics revealed that the summer months (June-August) may have contributed less to PM2.5 concentrations, indicating limited accumulation of PM2.5 during these months. Our study shows that the SIDLM could become an important tool for applying other Earth observation data in deep learning-based predictions and spatiotemporal analysis.
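The accuracy figures above use two standard regression metrics, RMSE and R². As a point of reference (not the authors' code), a minimal sketch of their conventional definitions, computed over paired observed and predicted PM2.5 values:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted values (e.g., PM2.5 in ug/m3)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - (residual sum of squares / total sum of squares)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)
```

Note that studies sometimes report R² from a fitted regression of predicted against observed values rather than the coefficient of determination above; the abstract does not specify which convention was used.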
Yan Xing, Zang Zhou, Jiang Yize, Shi Wenzhong, Guo Yushan, Li Dan, Zhao Chuanfeng, Husi Letu
Deep learning, Interpretability, MODIS, PM2.5