Applied Mathematics Seminar

Upcoming seminars

  • Francis Nier (Université de Paris 13, délégation MATHERIALS)

    The Grushin problem and other spectral techniques based on the Schur complement

    • Part I: 28 March from 9:30AM to 11:30AM, Room B211 [Slides]
    • Part II: 24 May from 9:30AM to 11:30AM, Room B211

    The so-called Grushin problem method, whose name and notation were fixed in the early works of J. Sjöstrand (around 1970), is by now a very standard tool for understanding and finely analysing questions of spectral asymptotics. As a method based on the Schur complement formula, it is related to the Feshbach method, popular in mathematical physics, and to the Lyapunov-Schmidt techniques used in dynamical systems and nonlinear PDEs. The Grushin problem approach is stable under perturbation. Combined with pseudodifferential and semiclassical calculus and with (micro)local cut-offs, it provides a general approach for reducing multiscale spectral problems to computable models. In this sense, it often makes it possible to give a precise mathematical formalisation of physicists' intuition.

    I will begin with an elementary presentation, one quick application of which is Fredholm theory and holomorphic Fredholm theory. I will take the opportunity to recall results on a few symptomatic examples of spectra of non-self-adjoint operators.

    I will then present the Feshbach variant in a simple case in order to highlight the differences.

    To illustrate the interplay between semiclassical calculus and the Grushin problem, I will treat the simple case of the chemists' LCAO (Linear Combination of Atomic Orbitals) method for a two-well problem. I will then discuss the question of shape resonances for the well-in-an-island problem.

    Finally, in preparation for a future talk, I will recall the formal Schur complement computation that connects the Langevin process to the overdamped Langevin process in the large-friction asymptotics.
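
    For reference, a hedged sketch of the standard Grushin formalism (notation following Sjöstrand; not part of the abstract): one embeds $P - z$ into an augmented operator whose inverse contains the effective Hamiltonian as a Schur complement.

```latex
% Grushin problem for an operator P and a spectral parameter z:
\mathcal{P}(z) \;=\;
\begin{pmatrix} P - z & R_- \\ R_+ & 0 \end{pmatrix},
\qquad
\mathcal{P}(z)^{-1} \;=\;
\begin{pmatrix} E(z) & E_+(z) \\ E_-(z) & E_{-+}(z) \end{pmatrix}.
% z lies in the spectrum of P precisely when the finite-dimensional
% effective Hamiltonian E_{-+}(z) fails to be invertible, and then
% (P - z)^{-1} = E(z) - E_+(z)\, E_{-+}(z)^{-1} E_-(z) where defined.
```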

  • Nathalie Ayi (LJLL, Sorbonne Université), June 24th at 10:00, Room B211.

    Title and abstract TBD

Past seminars (2023-2024)

  • Cecilia Pagliantini (TU Eindhoven), October 10th, 2:00pm, Room F102.

    Structure-preserving adaptive model order reduction of parametric Hamiltonian systems

    Model order reduction of parametric differential equations aims at constructing low-complexity high-fidelity surrogate models that allow rapid and accurate solutions under parameter variation. The development of reduced order models for Hamiltonian systems is challenged by several factors: (i) failing to preserve the geometric structure encoding the physical properties of the dynamics might lead to instabilities and unphysical behaviors of the resulting approximate solutions; (ii) the slowly decaying Kolmogorov n-width of transport-dominated and non-dissipative phenomena demands large reduced spaces to achieve sufficiently accurate approximations; and (iii) nonlinear operators require hyper-reduction techniques that preserve the gradient structure of the flow velocity. We will discuss how to address these aspects via structure-preserving nonlinear model order reduction. The gist of the proposed method is to adapt in time an approximate low-dimensional phase space endowed with the geometric structure of the full model and to ensure that the hyper-reduced flow retains the physical properties of the original model.

  • Claire Boyer (LPSM, Sorbonne Université), October 12th, 10:00am, Room F202.

    Some statistical insights into PINNs

    Physics-informed neural networks (PINNs) combine the expressiveness of neural networks with the interpretability of physical modeling. Their good practical performance has been demonstrated both in the context of solving partial differential equations and in the context of hybrid modeling, which consists of combining an imperfect physical model with noisy observations. However, most of their theoretical properties remain to be established. We offer some statistical guidelines into the proper use of PINNs.

  • Harold Berjamin (University of Galway, Ireland), November 8th, 11:00am, Room B211.

    Recent developments on the propagation of mechanical waves in soft solids

    In this talk, I will give an overview of recent results obtained during my postdoctoral fellowships at the University of Galway (Ireland), covering several topics related to wave propagation in soft solids. The broader context of these works is the study of Traumatic Brain Injury, which is a major cause of death and disability worldwide. First, the nonlinear propagation of shear waves in viscoelastic solids will be addressed, including thermodynamic aspects and shock formation. Then, fluid-saturated porous media will be considered. Ongoing and future developments will also be presented.

  • Silvère Bonnabel (Mines ParisTech), November 29th, 10:30am, Room B211.

    Wasserstein Gradient Flows for Variational Inference

    In this talk, we will introduce the article [1] and a few extensions. We propose a new method for approximating a posterior probability distribution in Bayesian inference. To achieve this, we offer an alternative to well-established MCMC methods, based on variational inference. Our goal is to approximate the target distribution with a Gaussian distribution, or a mixture of Gaussians, that comes with easy-to-compute summary statistics. This approximation is obtained as the asymptotic limit of a gradient flow in the sense of the 2-Wasserstein distance on the space of Gaussian measures (Bures-Wasserstein space). Akin to MCMC, this approach allows for strong convergence guarantees for log-concave target distributions. We will also briefly discuss low-rank implementations for tractability in higher dimensions.
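
    As a hedged numerical companion (my sketch, not code from the talk; I take the flow in its Gaussian form $\dot\mu = -\mathbb{E}_q[\nabla V]$, $\dot\Sigma = 2I - \mathbb{E}_q[\nabla^2 V]\,\Sigma - \Sigma\,\mathbb{E}_q[\nabla^2 V]$): for a Gaussian target the expectations are available in closed form, so an explicit Euler discretisation of the Bures-Wasserstein flow can be run directly. The function name and step size are illustrative.

```python
import numpy as np

def bw_gradient_flow(m, S, steps=2000, h=1e-2):
    """Explicit Euler discretisation of the Bures-Wasserstein gradient
    flow of KL(q || pi) over Gaussians q = N(mu, Sigma), for a Gaussian
    target pi = N(m, S) with potential V(x) = (x-m)^T S^{-1} (x-m) / 2,
    so that E_q[grad V] = S^{-1} (mu - m) and E_q[Hess V] = S^{-1}.
    (For non-Gaussian targets these expectations would be sampled.)"""
    d = len(m)
    Sinv = np.linalg.inv(S)
    mu, Sigma = np.zeros(d), np.eye(d)  # start from q0 = N(0, I)
    for _ in range(steps):
        mu = mu - h * Sinv @ (mu - m)
        Sigma = Sigma + h * (2.0 * np.eye(d) - Sinv @ Sigma - Sigma @ Sinv)
    return mu, Sigma
```

    In this Gaussian-target sanity check the flow converges to the target parameters (m, S) themselves, since the target lies in the variational family.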

    [1] Variational inference via Wasserstein gradient flows, Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, and Philippe Rigollet, NeurIPS 2022

  • Andrea Bertazzi (École Polytechnique), December 7th at 9:30am, Room B211.

    Sampling with (time transformations of) piecewise deterministic Markov processes

    Piecewise deterministic Markov processes (PDMPs) have received substantial interest in recent years as an alternative to classical Markov chain Monte Carlo (MCMC) algorithms. While theoretical properties of PDMPs have been studied extensively, their exact implementation is only possible when bounds on the gradient of the negative log-target can be derived. In the first part of the talk we discuss how to overcome this limitation by taking advantage of approximations of PDMPs obtained using splitting schemes. Focusing on the Zig-Zag sampler (ZZS), we show how to introduce a suitable Metropolis adjustment to eliminate the discretisation error incurred by the splitting scheme. In the second part of the talk we study time transformations as a resource to improve the performance of (PDMP-based) MCMC algorithms in the setting of multimodal distributions. For a suitable time transformation, we argue that the process can explore the state space more freely and jump between modes more frequently. Qualitative properties of time-transformed Markov processes are derived, with emphasis on uniform ergodicity of the time-transformed ZZS. We conclude the talk with a proposal on how to make use of this idea by taking advantage of our Metropolis-adjusted ZZS.
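
    A minimal runnable sketch (my own toy example, not code from the talk): for a 1-d standard Gaussian target $\exp(-x^2/2)$, the canonical ZZS switching rate $\lambda(x, v) = \max(0, vx)$ has a closed-form integrated rate, so event times can be sampled exactly, without the gradient bounds mentioned above.

```python
import numpy as np

def zigzag_gaussian(n_events=100_000, seed=0):
    """Exact Zig-Zag sampler for pi(x) ~ exp(-x^2/2) in one dimension.

    Between events the state moves ballistically, x(s) = x + v s with
    v in {-1, +1}; the velocity flips at rate max(0, v * x(s)).  With
    a = v * x the integrated rate inverts in closed form, giving the
    next event time t = -a + sqrt(max(a, 0)^2 + 2 E), E ~ Exp(1).
    Returns the time average of x(s)^2 along the continuous path,
    which estimates the target variance, here 1."""
    rng = np.random.default_rng(seed)
    x, v = 0.0, 1.0
    total_time, second_moment = 0.0, 0.0
    for _ in range(n_events):
        a = v * x
        t = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * rng.exponential())
        # integral of (x + v s)^2 over s in [0, t], using v^2 = 1
        second_moment += x * x * t + v * x * t * t + t ** 3 / 3.0
        total_time += t
        x += v * t  # ballistic flight to the event
        v = -v      # flip the velocity
    return second_moment / total_time
```

    For general targets the integrated rate is not invertible in closed form, which is exactly where the splitting schemes and Metropolis adjustment of the talk come in.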

  • Andrew Stuart (Caltech), December 14th, afternoon, Room B211.

    Learning Solution Operators For PDEs: Algorithms, Analysis and Applications [Slides]

    Neural networks have shown great success at approximating functions between spaces X and Y, in the setting where X is a finite dimensional Euclidean space and where Y is either a finite dimensional Euclidean space (regression) or a set of finite cardinality (classification); the neural networks learn the approximator from N data pairs {x_n, y_n}.

    In many problems arising in PDEs it is desirable to learn solution operators: maps between spaces of functions X and Y; here X denotes a function space of inputs to the PDE (such as initial conditions, boundary data, coefficients) and Y denotes the function space of PDE solutions. Such a learned map can provide a cheap surrogate model to accelerate computations.

    The talk overviews the methodology being developed in this field of operator learning and describes analysis of the associated approximation theory. Applications are described to the learning of homogenized constitutive models in mechanics.
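
    A deliberately linear toy version of this setup (my example, not from the talk): for the discretised 1-d Poisson problem $-u'' = f$ with zero boundary conditions, the solution operator $f \mapsto u$ is linear, so it can be "learned" from input/output pairs of discretised functions by least squares; neural operators generalise this data format to nonlinear maps between function spaces.

```python
import numpy as np

# Solution operator of -u'' = f on (0, 1), u(0) = u(1) = 0, on an n-point grid.
n = 50
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
G = np.linalg.inv(A)  # true discretised solution operator, u = G f

# "Training data": pairs of discretised input and output functions.
rng = np.random.default_rng(0)
F = rng.standard_normal((200, n))  # 200 random forcings f_k
U = F @ G.T                        # corresponding solutions u_k = G f_k

# Learn the operator from the pairs by linear least squares.
G_hat = np.linalg.lstsq(F, U, rcond=None)[0].T
```

    With noiseless data and more samples than grid points, the least-squares fit recovers the discrete operator exactly; the learned surrogate then predicts solutions for unseen forcings.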

  • Luca Nenna (Université Paris-Saclay, délégation MATHERIALS)

    Introduction to optimal transport [Lecture notes]

    All the lectures of this short course will take place in the CERMICS seminar room (B211)

    • Wed 17 Jan from 9:30 to 11:30: Monge and Kantorovich problems
    • Thu 18 Jan from 9:00 to 11:00: Dual problem, optimality conditions, optimal transport maps [Slides]
    • Wed 24 Jan from 9:30 to 11:30: Entropic optimal transport and Sinkhorn algorithm [Slides]
    • Fri 2 Feb from 9:30 to 11:30: A glimpse of multi-marginal OT and applications [Slides]
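
    As a small companion to the third lecture, a sketch of the Sinkhorn iterations for entropic optimal transport between discrete measures (a standard scheme; the parameters here are illustrative, not taken from the lecture notes):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=2000):
    """Sinkhorn algorithm for entropic optimal transport.

    Computes the coupling P = diag(u) K diag(v), K = exp(-C / eps),
    whose marginals are mu (rows) and nu (columns), by alternately
    rescaling v and u to match each marginal in turn."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)  # match the column marginal
        u = mu / (K @ v)    # match the row marginal
    return u[:, None] * K * v[None, :]
```

    As eps decreases the entropic coupling approaches an optimal Kantorovich plan, while large eps blurs it towards the product measure.
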
  • Emma Horton (University of Warwick), February 1st at 10:00, Room B211. [cancelled]

    Monte Carlo methods for branching processes

    Branching processes naturally arise as pertinent models in a variety of situations such as cell division, population dynamics and nuclear fission. For a wide class of branching processes, it is common that their first moment exhibits a Perron-Frobenius-type decomposition. That is, the first order asymptotic behaviour is described by a triple $(\lambda, \varphi, \eta)$, where $\lambda$ is the leading eigenvalue of the system and $\varphi$ and $\eta$ are the corresponding right eigenfunction and left eigenmeasure, respectively. Thus, obtaining good estimates of these quantities is imperative for understanding the long-time behaviour of these processes. In this talk, we discuss various Monte Carlo methods for estimating this triple. This talk is based on joint work with Alex Cox (University of Bath) and Denis Villemonais (Université de Lorraine).
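
    As a toy illustration of this triple (my example, deterministic rather than Monte Carlo): when the state space is finite, the mean semigroup is a nonnegative matrix $M$, and $(\lambda, \varphi, \eta)$ can be computed by power iteration.

```python
import numpy as np

def perron_triple(M, n_iter=500):
    """Power iteration for the Perron-Frobenius triple of a primitive
    nonnegative matrix M: M phi = lam * phi, eta M = lam * eta, with
    the normalisation eta . phi = 1 common in ergodic decompositions."""
    phi = np.ones(M.shape[0])
    eta = np.ones(M.shape[0])
    for _ in range(n_iter):
        phi = M @ phi
        phi /= np.linalg.norm(phi)  # right eigenvector iterate
        eta = M.T @ eta
        eta /= np.linalg.norm(eta)  # left eigenvector iterate
    lam = phi @ M @ phi / (phi @ phi)  # Rayleigh quotient
    eta /= eta @ phi                   # normalise against phi
    return lam, phi, eta
```

    For genuinely infinite-dimensional branching models these quantities are no longer matrix eigendata, which is what motivates the Monte Carlo estimators of the talk.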

  • François Charton (Meta AI), March 1st at 2pm, Room B211.

    Problem solving as a translation task

    Neural architectures designed for machine translation can be used to solve problems in mathematics, by considering that solving amounts to translating the problem, a sentence in some mathematical language, into its solution, another sentence in mathematical language. Presenting examples from symbolic and numerical mathematics, and theoretical physics, I show how such techniques can be applied to develop AI for Science, and help understand the inner workings of language models.

  • Tobias Grafke (University of Warwick), March 22nd at 10:00, Room B211.

    Quantifying extreme events in complex systems via sharp large deviations estimates

    Rare and extreme events are notoriously hard to handle in any complex stochastic system: they are too rare to be reliably observable in experiments or numerics, yet often too impactful to be ignored. Large deviation theory provides a classical way of dealing with events of extremely small probability, but generally only yields the exponential tail scaling of rare event probabilities. In this talk, I will discuss theory, as well as corresponding algorithms, that improve on this limitation, yielding sharp quantitative estimates of rare event probabilities from a single computation and without fitting parameters. The applicability of this method to high-dimensional real-world systems, for example from fluid dynamics or molecular dynamics, is discussed.

  • Thomas Normand (Nantes Université), March 26th at 10:00, Room F103.

    Small eigenvalues and metastability for semiclassical Boltzmann operators

    We consider an inhomogeneous linear Boltzmann equation in a low temperature regime and in the presence of an external force deriving from a potential. We provide a sharp description of the spectrum near 0 of the associated operator. This enables us to obtain precise information on the long-time behavior of the solutions, in particular some quantitative results on the return to equilibrium and on metastability. This is done by using and adapting some constructions of 'Gaussian quasimodes' (approximate eigenfunctions) involving tools from semiclassical microlocal analysis, which provide the desired sharp localization of the small eigenvalues.


Archive of past seminars before 2023: here

Organizers: Amaury Hayat, Urbain Vaes.