
In Scientific Reports; h5-index 158.0

We propose a unified data-driven reduced order model (ROM) that bridges the performance gap between linear and nonlinear manifold approaches. Deep learning ROMs (DL-ROMs) built on deep convolutional autoencoders (DC-AEs) have been shown to capture nonlinear solution manifolds, but they perform inadequately when linear-subspace approaches such as proper orthogonal decomposition (POD) would be optimal. Moreover, most DL-ROM models rely on convolutional layers, which can restrict their application to structured meshes. The framework proposed in this study combines an autoencoder (AE) with Barlow Twins (BT) self-supervised learning, where BT maximizes the information content of the embedding within the latent space through a joint embedding architecture. Across a series of benchmark problems of natural convection in porous media, BT-AE outperforms the previous DL-ROM framework: it matches POD-based approaches on problems whose solutions lie within a linear subspace and matches DL-ROM autoencoder-based techniques on problems whose solutions lie on a nonlinear manifold, thereby bridging the gap between linear and nonlinear reduced manifolds. We illustrate that a proficient construction of the latent space is key to achieving these results, enabling us to map these latent spaces using regression models. The proposed framework achieves a relative error of 2% on average and 12% in the worst-case scenario (i.e., when the training data are small but the parameter space is large). We also show that our framework provides a speed-up of [Formula: see text] times in the best case and [Formula: see text] times on average compared to a finite element solver. Furthermore, the BT-AE framework can operate on unstructured meshes, which provides flexibility in its application to standard numerical solvers, on-site measurements, experimental data, or a combination of these sources.
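To make the described architecture concrete, the following is a minimal sketch of how an autoencoder can be combined with a Barlow Twins loss on the latent space. The layer sizes, the noise-perturbed "views", the loss weighting, and the fully connected (mesh-agnostic) layers are illustrative assumptions, not the authors' exact BT-AE implementation.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions; the paper's actual layer sizes are not given here.
N_DOF = 4096      # flattened degrees of freedom of an (unstructured) mesh snapshot
LATENT = 32       # latent-space dimension
BT_LAMBDA = 5e-3  # weight on the off-diagonal (redundancy-reduction) term

class BTAutoencoder(nn.Module):
    """Fully connected autoencoder; MLP layers avoid the structured-mesh
    restriction of convolutional DL-ROMs."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(N_DOF, 512), nn.ELU(),
            nn.Linear(512, LATENT),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT, 512), nn.ELU(),
            nn.Linear(512, N_DOF),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def barlow_twins_loss(z_a, z_b, lam=BT_LAMBDA):
    """Standard Barlow Twins objective: push the cross-correlation matrix of
    two embeddings toward the identity (invariance on the diagonal,
    redundancy reduction off the diagonal)."""
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    c = (z_a.T @ z_b) / z_a.shape[0]          # cross-correlation matrix
    diag = torch.diagonal(c)
    on_diag = ((diag - 1.0) ** 2).sum()
    off_diag = ((c - torch.diag(diag)) ** 2).sum()
    return on_diag + lam * off_diag

# One hypothetical training step: reconstruct the snapshot and regularize the
# latent space with the BT loss on two perturbed views of the same snapshot.
model = BTAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, N_DOF)                    # stand-in batch of solution snapshots
view_a = x + 0.01 * torch.randn_like(x)
view_b = x + 0.01 * torch.randn_like(x)

opt.zero_grad()
_, x_hat = model(x)
z_a, _ = model(view_a)
z_b, _ = model(view_b)
loss = nn.functional.mse_loss(x_hat, x) + barlow_twins_loss(z_a, z_b)
loss.backward()
opt.step()
```

In the paper's workflow, a separate regression model then maps the simulation parameters to this latent space, so new parameter values can be decoded into full-field solutions without running the finite element solver.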

Teeratorn Kadeethum, Francesco Ballarin, Daniel O’Malley, Youngsoo Choi, Nikolaos Bouklas, Hongkyu Yoon

2022-Nov-30