Speaker: Minseok Choi, Assistant Professor at Pohang University of Science and Technology
Time: December 12, 2025, 14:00
Venue: Room 332, Building 3, Xuhui Campus
Host: School of Mathematics and Physics
About the speaker: Minseok Choi received his B.S. and M.S. degrees from Seoul National University, South Korea, and his Ph.D. in Applied Mathematics from Brown University, USA. He was a Postdoctoral Researcher at Princeton University before joining Pohang University of Science and Technology (POSTECH), where he is currently an assistant professor of mathematics. He also serves as the Vice President of the East Asia Society for Industrial and Applied Mathematics (EASIAM). His research focuses on scientific machine learning, uncertainty quantification, and applied mathematics.
Abstract: Deep learning has emerged as a promising paradigm for solving partial differential equations (PDEs), yet standard approaches often struggle with data scarcity, extrapolation, and long-time integration stability. In this seminar, we present two distinct strategies to address these challenges by embedding physical structure and temporal causality into the learning process.
First, we introduce the Physics-Informed Laplace Neural Operator (PILNO). While purely data-driven operators such as the Laplace Neural Operator (LNO), the Fourier Neural Operator (FNO), and DeepONet are powerful, they require vast datasets and generalize poorly to out-of-distribution inputs. By directly integrating the governing physical laws into the LNO framework, PILNO significantly reduces data dependency and enhances extrapolation capabilities, proving effective even in small-data regimes.
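The core idea of augmenting a learned solver with governing equations can be illustrated with a minimal loss function that combines a (possibly tiny) supervised data term with a PDE-residual penalty. The sketch below is hypothetical and uses a finite-difference residual for the 1D Poisson equation u''(x) = f(x); it is not the speaker's PILNO implementation, which operates in the Laplace-transform domain and uses automatic differentiation.

```python
import numpy as np

def physics_informed_loss(u_pred, x, f, u_data=None, idx_data=None, lam=1.0):
    """Generic physics-informed loss for the 1D Poisson problem u''(x) = f(x).

    The PDE residual is estimated with a central second difference on a
    uniform grid (a stand-in for automatic differentiation). Hypothetical
    illustration of the loss structure, not the PILNO method itself.
    """
    h = x[1] - x[0]
    # Central second-difference approximation of u'' on interior points.
    residual = (u_pred[:-2] - 2.0 * u_pred[1:-1] + u_pred[2:]) / h**2 - f[1:-1]
    loss_pde = np.mean(residual**2)
    # Optional supervised term on a small set of labeled points; the physics
    # term lets training proceed even when this set is empty (small data).
    loss_data = 0.0
    if u_data is not None:
        loss_data = np.mean((u_pred[idx_data] - u_data)**2)
    return loss_data + lam * loss_pde

x = np.linspace(0.0, 1.0, 201)
u_exact = np.sin(np.pi * x)             # satisfies u'' = -pi^2 sin(pi x)
f = -np.pi**2 * np.sin(np.pi * x)
loss = physics_informed_loss(u_exact, x, f)   # near zero for the exact solution
```

Because the exact solution satisfies the PDE, the residual term vanishes up to finite-difference error, which is why such a penalty steers a model toward physically consistent predictions even with little labeled data.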
Second, we discuss Causality-Enforced Evolutional Networks (CEENs) for solving time-dependent PDEs. Standard Physics-Informed Neural Networks (PINNs) frequently fail in long-time integration due to a lack of temporal causality, leading to biased optimization. We propose a novel sequential training framework based on domain decomposition that strictly enforces temporal causality. This approach not only enables accurate long-time simulations where standard PINNs fail but also improves computational efficiency through parallelization.
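The causal scheduling at the heart of such sequential training can be sketched independently of any neural network: partition the time horizon into windows, solve them strictly in order, and pass each window's terminal state forward as the next initial condition. In the toy sketch below, the per-window "training" step is replaced by exactly integrating du/dt = -u; this is a hypothetical illustration of the causal hand-off, not the speaker's CEEN implementation.

```python
import numpy as np

def solve_window(u0, t0, t1, n=50):
    """Toy stand-in for training a network on one time window.

    Here we integrate du/dt = -u exactly; in a causality-enforced scheme this
    step would instead minimize the PDE residual over [t0, t1] with the
    initial state u(t0) held fixed. (Hypothetical sketch.)
    """
    t = np.linspace(t0, t1, n)
    return t, u0 * np.exp(-(t - t0))

def causal_sequential_solve(u_init, T, n_windows):
    """Partition [0, T] into windows and solve them strictly in time order,
    passing each window's terminal state forward as the next initial
    condition, so temporal causality holds by construction."""
    edges = np.linspace(0.0, T, n_windows + 1)
    u0, ts, us = u_init, [], []
    for t0, t1 in zip(edges[:-1], edges[1:]):
        t, u = solve_window(u0, t0, t1)
        ts.append(t)
        us.append(u)
        u0 = u[-1]                    # causal hand-off to the next window
    return np.concatenate(ts), np.concatenate(us)

t, u = causal_sequential_solve(1.0, T=2.0, n_windows=4)
```

Because each window only ever sees converged information from earlier times, later windows cannot bias the fit at earlier times; the windows whose initial data are already available can also be dispatched to separate workers, which is the source of the parallel speedup mentioned above.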
Together, these methods demonstrate how incorporating physical and causal inductive biases can lead to more robust, accurate, and data-efficient neural PDE solvers.