Profile

Qi Meng (孟琪), Associate Researcher, Laboratory of Operations Research and Information


Education:

- 2009.09-2013.07  B.S., School of Mathematics, Shandong University

- 2013.09-2018.07  Ph.D., School of Mathematical Sciences, Peking University


Work Experience:

- 2018.07-2024.03  Principal Researcher, Microsoft Research

- 2024.03-present  Associate Researcher, Institute of Applied Mathematics, Academy of Mathematics and Systems Science, Chinese Academy of Sciences

 



Research Interests

Machine Learning Theory
Distributed optimization methods; optimization methods for deep learning; their theory and generalization properties.
AI4Science
AI methods for accelerating scientific computing, e.g., neural operators, PINNs, and neural ODEs, with applications to fluid dynamics, weather simulation, and scientific discovery.
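As a concrete illustration of the PINN approach mentioned above, here is a minimal, self-contained sketch (assuming PyTorch; the toy problem u''(x) = -π²·sin(πx) on [0, 1] with zero boundary values, the small network, and all hyperparameters are illustrative choices, not taken from the publications listed below). The PDE residual is computed with automatic differentiation and minimized jointly with a boundary-condition penalty.

    # Minimal PINN sketch (illustrative only): fit u_theta(x) so that
    # u''(x) = -pi^2 * sin(pi*x) on [0, 1] with u(0) = u(1) = 0,
    # whose exact solution is u(x) = sin(pi*x).
    import torch

    torch.manual_seed(0)

    # Small fully connected network representing the PDE solution u_theta(x).
    net = torch.nn.Sequential(
        torch.nn.Linear(1, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    def pde_residual(x):
        """Residual of u''(x) + pi^2 * sin(pi*x) = 0, via autograd."""
        x = x.requires_grad_(True)
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
        return d2u + (torch.pi ** 2) * torch.sin(torch.pi * x)

    for step in range(2000):
        x_in = torch.rand(128, 1)            # interior collocation points
        x_bc = torch.tensor([[0.0], [1.0]])  # boundary points
        loss = pde_residual(x_in).pow(2).mean() + net(x_bc).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # After training, net(x) approximates sin(pi*x); at x = 0.5 the output
    # should be roughly sin(pi/2) = 1.
    print(net(torch.tensor([[0.5]])).item())

The same pattern (network as solution ansatz, differential operator evaluated by automatic differentiation, residual-plus-boundary loss) carries over to more realistic PDEs; only the residual function and the sampling of collocation points change.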

Publications

  1. Deciphering and Integrating Physical Invariant for Neural Operator Learning with Various Physics. National Science Review, 2023, co-corresponding author

  2. Complex-valued neural-operator-assisted soliton identification. Physical Review E, 2023, second author

  3. Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics. Neurocomputing, 2023, corresponding author

  4. Deep Latent Regularity Network for Modeling Stochastic Partial Differential Equations. AAAI-2023, corresponding author

  5. NeuralStagger: accelerating physics-constrained neural PDE solver with spatial-temporal decomposition. ICML-2023, co-author

  6. O-GNN: incorporating ring priors into molecular modeling. ICLR-2023, co-author

  7. Deep Random Vortex Method for Simulation and Inference of Navier-Stokes Equations. Physics of Fluids, 2022, corresponding author

  8. An Efficient Lorentz Equivariant Graph Neural Network for Jet Tagging. Journal of High Energy Physics, 2022, corresponding author

  9. Stochastic Lag Time Parameterization for Markov State Models of Protein Dynamics. Journal of Physical Chemistry, 2022, co-author

  10. Power-law Dynamic Arising from Machine Learning. In: Dirichlet Forms and Related Topics (in honor of Masatoshi Fukushima's Beiju), 2022, co-author

  11. Equivariant Graph Neural Networks with Complete Local Frames. ICML-2022, co-corresponding author

  12. Does Momentum Change the Implicit Regularization on Separable Data? NeurIPS-2022, second author

  13. PriorGrad: Improving Conditional Denoising Diffusion Models with Data-Dependent Adaptive Prior. ICLR-2022, co-author

  14. Constructing the Basis Path Set by Eliminating the Path Dependency. Journal of Systems Science and Complexity, 2022, second author

  15. Machine Learning Non-Conservative Dynamics for New-Physics Detection. Physical Review E, 2021, co-author

  16. R-Drop: Regularized Dropout for Neural Networks. NeurIPS-2021, co-author

  17. Optimizing Information-theoretic Generalization Bound via Anisotropic Noise of SGLD. NeurIPS-2021, co-author

  18. On the Implicit Regularization for Adaptive Optimization Algorithms on Homogeneous Neural Networks. ICML-2021, second author

  19. Path-BN: Towards Effective Batch Normalization in the Path Space for ReLU Networks. UAI-2021, second author

  20. I4R: Promoting Deep Reinforcement Learning by the Indicator for Expressive Representations. IJCAI-2020, second author

  21. Reinforcement Learning with Dynamic Boltzmann Softmax Updates. IJCAI-2020, co-author

  22. G-SGD: Optimizing ReLU Neural Networks in its Positively Scale-Invariant Space. ICLR-2019, first author

  23. Convergence Analysis of Distributed Stochastic Gradient Descent with Shuffling. Neurocomputing, 2019, first author

  24. Capacity Control of ReLU Neural Networks by Basis-Path Norm. AAAI-2019, second author

  25. Continuous View on Gradient Descent Method and its Asynchronous Variants. AAAI-2018, second author

  26. Generalization Error Bounds for Optimization Algorithms via Stability. AAAI-2017, first author

  27. Asynchronous Stochastic Proximal Optimization Algorithms with Variance Reduction. AAAI-2017, first author

  28. Asynchronous Stochastic Gradient Descent with Delay Compensation. ICML-2017, second author

  29. LightGBM: A Highly Efficient Gradient Boosting Decision Tree. NeurIPS-2017, second author

  30. A Communication-Efficient Parallel Algorithm for Decision Tree. NeurIPS-2016, first author


My Team


Contact

Office: Room 306, Siyuan Building

Tel: (010) 82541652

Email: meq@amss.ac.cn