arXiv:2302.05104 [cs.LG]

Monte Carlo Neural Operator for Learning PDEs via Probabilistic Representation

Rui Zhang, Qi Meng, Rongchan Zhu, Yue Wang, Wenlei Shi, Shihua Zhang, Zhi-Ming Ma, Tie-Yan Liu

Published 2023-02-10, Version 1

Neural operators, which use deep neural networks to approximate the solution mappings of partial differential equation (PDE) systems, are emerging as a new paradigm for PDE simulation. Neural operators can be trained in supervised or unsupervised ways, i.e., from generated data or from the PDE information itself. The unsupervised approach is essential when data generation is costly or the data are of low quality (e.g., insufficient or noisy), but its performance and efficiency leave substantial room for improvement. To this end, we design a new loss function based on the Feynman-Kac formula and call the resulting neural operator the Monte Carlo Neural Operator (MCNO), which allows larger temporal steps and efficiently handles fractional diffusion operators. Our analysis shows that MCNO handles complex spatial conditions and larger temporal steps better than other unsupervised methods, and that it is more robust to perturbations introduced by the numerical scheme and the operator approximation. Numerical experiments on the diffusion equation and the Navier-Stokes equation show significant accuracy improvements over other unsupervised baselines, especially for oscillatory initial conditions and long-time simulation settings.
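The abstract does not give the loss function, but the Feynman-Kac representation it is built on is standard. For the heat equation u_t = (σ²/2) u_xx with initial condition u(0, x) = g(x), the solution admits the probabilistic form u(t, x) = E[g(x + σ W_t)], where W_t is a Brownian motion, and this expectation can be estimated by Monte Carlo. The following sketch (not code from the paper; function name and example are illustrative) shows that estimate against a closed-form Gaussian solution:

```python
import numpy as np

def mc_feynman_kac_heat(g, x, t, sigma=1.0, n_samples=10_000, rng=None):
    """Monte Carlo estimate of u(t, x) for u_t = (sigma^2/2) * u_xx,
    u(0, x) = g(x), via the Feynman-Kac representation
    u(t, x) = E[g(x + sigma * W_t)] with W_t ~ N(0, t)."""
    rng = rng or np.random.default_rng(0)
    # Sample terminal values of the Brownian motion at time t.
    w = rng.normal(0.0, np.sqrt(t), size=n_samples)
    # Average the initial condition evaluated at the diffused points.
    return float(np.mean(g(x + sigma * w)))

# For a Gaussian initial condition g(x) = exp(-x^2/2), the exact solution is
# u(t, x) = exp(-x^2 / (2*(1 + sigma^2 * t))) / sqrt(1 + sigma^2 * t).
g = lambda x: np.exp(-x**2 / 2.0)
est = mc_feynman_kac_heat(g, x=0.5, t=1.0, sigma=1.0, n_samples=200_000)
exact = np.exp(-0.5**2 / (2 * 2.0)) / np.sqrt(2.0)
```

Note that the estimator never discretizes the time axis: W_t is sampled directly at time t, which is why a Feynman-Kac-based loss can tolerate larger temporal steps than residual-based unsupervised losses.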

Related articles: Most relevant | Search more
arXiv:2010.14054 [cs.LG] (Published 2020-10-27)
A Probabilistic Representation of Deep Learning for Improving The Information Theoretic Interpretability
arXiv:1812.04426 [cs.LG] (Published 2018-11-30)
PDE-Net 2.0: Learning PDEs from Data with A Numeric-Symbolic Hybrid Deep Network
arXiv:2306.14511 [cs.LG] (Published 2023-06-26)
TaylorPDENet: Learning PDEs from non-grid Data