Applied Mathematics Seminar

Upcoming seminars

  • Nathalie Ayi (LJLL Sorbonne Université), June 24th at 10:00, Room B211.

    Large-population limit of interacting particle systems on weighted graphs

    When studying interacting particle systems, two distinct categories emerge: indistinguishable systems, in which the identity of the particles does not influence the dynamics, and non-exchangeable systems, in which the identity of the particles plays an important role. One way to conceptualize the latter is to view them as particle systems posed on weighted graphs. In this talk, we focus on this second category. Recent developments in graph theory have sparked renewed interest in understanding the large-population limits of such systems. Two main approaches have emerged: graph limits and mean-field limits. While mean-field limits were traditionally introduced for indistinguishable particles, they have recently been extended to the case of non-exchangeable particles. In this presentation, we introduce several models, mainly from the field of opinion dynamics, for which rigorous convergence results as N tends to infinity have been obtained. We also clarify the link between the graph-limit approach and the mean-field-limit approach. The works discussed come from several papers, co-authored with, among others, Nastassia Pouradier Duteil and David Poyato.
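As a toy illustration of the kind of systems considered here (a generic linear consensus model, not one of the talk's models; all names and parameters are illustrative), the following sketch simulates opinion dynamics on a dense weighted graph, where each opinion is attracted to the others in proportion to the edge weights:

```python
import numpy as np

def simulate_opinions(W, x0, dt=0.05, n_steps=400):
    """Linear opinion dynamics on a weighted graph, Euler-discretized:
    dx_i/dt = (1/N) * sum_j W[i, j] * (x_j - x_i)."""
    x = np.array(x0, dtype=float)
    n = len(x)
    for _ in range(n_steps):
        x = x + dt * (W @ x - W.sum(axis=1) * x) / n
    return x

rng = np.random.default_rng(0)
n = 100
W = rng.uniform(0.5, 1.0, (n, n))   # dense positive weights: consensus expected
x0 = rng.uniform(-1.0, 1.0, n)      # initial opinions
x_final = simulate_opinions(W, x0)  # opinions contract toward a consensus value
```

With dense, strictly positive weights the graph is connected, so the opinions converge to consensus; sparse or structured weight matrices (the graphon regime discussed in the talk) can instead sustain clusters.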

  • Nicolai Gerber (LJLL Sorbonne Université), June 24th at 11:00, Room B211.

    Mean-field limits for Consensus-based Optimization and Sampling

    For algorithms based on interacting particle systems and admitting a mean-field description, convergence analysis is often more accessible at the mean-field level. To transpose convergence results obtained at the mean-field level to the finite ensemble size setting, it is desirable to show that the particle dynamics converge in an appropriate sense to the corresponding mean-field dynamics. This talk discusses a recent joint work with Franca Hoffmann and Urbain Vaes that proves quantitative mean-field limits for two related interacting particle systems: Consensus-Based Optimization and Consensus-Based Sampling. The approach generalizes Sznitman’s classical argument: to circumvent the lack of global Lipschitz continuity of the coefficients, we discard an event of small probability, the contribution of which is controlled using moment estimates for the particle systems. In addition, the paper presents new results on the well-posedness of the particle systems and their mean-field limit and provides novel stability estimates for the weighted mean and the weighted covariance.
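To make the particle system concrete, here is a minimal sketch (my own toy Euler-Maruyama discretization of anisotropic consensus-based optimization, not the scheme analyzed in the paper; all names and parameters are illustrative), in which particles drift toward a Gibbs-weighted consensus point:

```python
import numpy as np

def cbo_minimize(f, x0, alpha=30.0, lam=1.0, sigma=0.5, dt=0.02,
                 n_steps=400, rng=None):
    """Euler-Maruyama sketch of (anisotropic) consensus-based optimization.

    Each particle drifts toward a Gibbs-weighted average of the particle
    positions, which concentrates near the current best particle for large
    alpha, and is perturbed by noise scaled by its distance to that point.
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)                   # particle positions, (N, d)
    for _ in range(n_steps):
        fx = f(x)                                   # objective values, (N,)
        w = np.exp(-alpha * (fx - fx.min()))        # stabilized Gibbs weights
        m = (w[:, None] * x).sum(axis=0) / w.sum()  # weighted consensus point
        diff = x - m
        x = (x - lam * dt * diff
             + sigma * np.sqrt(dt) * diff * rng.standard_normal(x.shape))
    return m

# Toy usage: minimize a shifted quadratic with minimizer (1, -2).
rng = np.random.default_rng(0)
x0 = rng.uniform(-3.0, 3.0, size=(200, 2))
xmin = cbo_minimize(lambda x: ((x - np.array([1.0, -2.0]))**2).sum(axis=1), x0)
```

The mean-field limit discussed in the talk replaces the empirical measure of the N particles above by a law evolving under a McKean-Vlasov equation.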

Past seminars (2023-2024)

  • Cecilia Pagliantini (TU Eindhoven), October 10th at 2:00pm, Room F102.

    Structure-preserving adaptive model order reduction of parametric Hamiltonian systems

    Model order reduction of parametric differential equations aims at constructing low-complexity high-fidelity surrogate models that allow rapid and accurate solutions under parameter variation. The development of reduced order models for Hamiltonian systems is challenged by several factors: (i) failing to preserve the geometric structure encoding the physical properties of the dynamics might lead to instabilities and unphysical behaviors of the resulting approximate solutions; (ii) the slowly decaying Kolmogorov n-width of transport-dominated and non-dissipative phenomena demands large reduced spaces to achieve sufficiently accurate approximations; and (iii) nonlinear operators require hyper-reduction techniques that preserve the gradient structure of the flow velocity. We will discuss how to address these aspects via structure-preserving nonlinear model order reduction. The gist of the proposed method is to adapt in time an approximate low-dimensional phase space endowed with the geometric structure of the full model and to ensure that the hyper-reduced flow retains the physical properties of the original model.

  • Claire Boyer (LPSM, Sorbonne Université), October 12th, 10:00am, Room F202.

    Some statistical insights into PINNs

    Physics-informed neural networks (PINNs) combine the expressiveness of neural networks with the interpretability of physical modeling. Their good practical performance has been demonstrated both in the context of solving partial differential equations and in the context of hybrid modeling, which consists of combining an imperfect physical model with noisy observations. However, most of their theoretical properties remain to be established. We offer some statistical guidelines into the proper use of PINNs.

  • Harold Berjamin (University of Galway, Ireland), November 8th, 11:00am, Room B211.

    Recent developments on the propagation of mechanical waves in soft solids

    In this talk, I will give an overview of recent results obtained during my postdoctoral fellowships at the University of Galway (Ireland), covering several topics related to wave propagation in soft solids. The broader context of these works is the study of Traumatic Brain Injury, which is a major cause of death and disability worldwide. First, the nonlinear propagation of shear waves in viscoelastic solids will be addressed, including thermodynamic aspects and shock formation. Then, fluid-saturated porous media will be considered. Ongoing and future developments will also be presented.

  • Silvère Bonnabel (Mines Paristech), November 29th, 10:30am, Room B211.

    Wasserstein Gradient Flows for Variational Inference

    In this talk, we will introduce the article [1] and a few extensions. We propose a new method for approximating a posterior probability distribution in Bayesian inference. To achieve this, we offer an alternative to well-established MCMC methods, based on variational inference. Our goal is to approximate the target distribution with a Gaussian distribution, or a mixture of Gaussians, that comes with easy-to-compute summary statistics. This approximation is obtained as the asymptotic limit of a gradient flow in the sense of the 2-Wasserstein distance on the space of Gaussian measures (Bures-Wasserstein space). Akin to MCMC, this approach allows for strong convergence guarantees for log-concave target distributions. We will also briefly discuss low-rank implementations for tractability in higher dimensions.

    [1] Variational inference via Wasserstein gradient flows, Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, and Philippe Rigollet, NeurIPS 2022
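To give a concrete feel for the Gaussian case, here is a toy discretization (my own sketch, not the implementation of [1]) of the commonly stated Bures-Wasserstein gradient-flow ODEs $\dot m = -\mathbb{E}_q[\nabla V]$ and $\dot \Sigma = 2I - \mathbb{E}_q[\nabla^2 V]\,\Sigma - \Sigma\,\mathbb{E}_q[\nabla^2 V]$ for a Gaussian approximation $q = N(m, \Sigma)$ of a target $\pi \propto e^{-V}$; for a Gaussian target both expectations are available in closed form:

```python
import numpy as np

# Gaussian target pi = N(mu, A^{-1}), i.e. V(x) = (x - mu)^T A (x - mu) / 2,
# so E_q[grad V] = A (m - mu) and E_q[Hess V] = A exactly.
A = np.array([[2.0, 0.5], [0.5, 1.0]])   # target precision matrix
mu = np.array([1.0, -1.0])               # target mean

m = np.zeros(2)                          # initial variational mean
S = np.eye(2)                            # initial variational covariance
dt = 0.01
for _ in range(3000):
    m = m - dt * A @ (m - mu)                      # mean flow
    S = S + dt * (2 * np.eye(2) - A @ S - S @ A)   # covariance flow
```

The fixed point of the covariance flow solves $A\Sigma + \Sigma A = 2I$, whose solution is $\Sigma = A^{-1}$, so for a Gaussian target the flow recovers the target exactly; for non-Gaussian targets the expectations would be estimated by quadrature or sampling.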

  • Andrea Bertazzi (École Polytechnique), December 7th at 9:30am, Room B211.

    Sampling with (time transformations of) Piecewise deterministic Markov processes

    Piecewise deterministic Markov processes (PDMPs) received substantial interest in recent years as an alternative to classical Markov chain Monte Carlo (MCMC) algorithms. While theoretical properties of PDMPs have been studied extensively, their exact implementation is only possible when bounds on the gradient of the negative log-target can be derived. In the first part of the talk we discuss how to overcome this limitation by taking advantage of approximations of PDMPs obtained using splitting schemes. Focusing on the Zig-Zag sampler (ZZS), we show how to introduce a suitable Metropolis adjustment to eliminate the discretisation error incurred by the splitting scheme. In the second part of the talk we study time transformations as a resource to improve the performance of (PDMP-based) MCMC algorithms in the setting of multimodal distributions. For a suitable time transformation, we argue that the process can explore the state space more freely and jump between modes more frequently. Qualitative properties of the time-transformed Markov process are derived, with emphasis on uniform ergodicity of the time-transformed ZZS. We conclude the talk with a proposal on how to make use of this idea by taking advantage of our Metropolis-adjusted ZZS.
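For intuition, here is a self-contained sketch (illustrative only, not taken from the talk) of the Zig-Zag sampler in one dimension for a standard Gaussian target, a case where the switching rate can be integrated and inverted in closed form, so neither thinning nor a splitting scheme is needed:

```python
import numpy as np

def zigzag_gaussian_second_moment(n_events=20000, rng=None):
    """1D Zig-Zag sampler for N(0, 1), i.e. potential U(x) = x^2 / 2.

    The switching rate lambda(x, v) = max(0, v * x) has a cumulative
    integral along the deterministic flow x + v t that can be inverted
    exactly, so event times are simulated without thinning. Returns the
    trajectory time-average of x^2, which estimates E[X^2] = 1.
    """
    rng = np.random.default_rng(rng)
    x, v = 0.0, 1.0
    total_time, integral_x2 = 0.0, 0.0
    for _ in range(n_events):
        b = v * x                                      # rate at the current point
        e = rng.exponential()
        tau = -b + np.sqrt(max(b, 0.0)**2 + 2.0 * e)   # inverse cumulative rate
        x_new = x + v * tau                            # straight-line motion
        integral_x2 += (x_new**3 - x**3) / (3.0 * v)   # exact integral of x(t)^2
        total_time += tau
        x, v = x_new, -v                               # flip velocity at the event
    return integral_x2 / total_time

estimate = zigzag_gaussian_second_moment(rng=0)
```

For general targets such closed-form inversion is unavailable, which is exactly where the gradient bounds (or the splitting-scheme approximations of the talk) come in.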

  • Andrew Stuart (Caltech), December 14th, afternoon, Room B211.

    Learning Solution Operators For PDEs: Algorithms, Analysis and Applications [Slides]

    Neural networks have shown great success at approximating functions between spaces X and Y, in the setting where X is a finite dimensional Euclidean space and where Y is either a finite dimensional Euclidean space (regression) or a set of finite cardinality (classification); the neural networks learn the approximator from N data pairs {x_n, y_n}.

    In many problems arising in PDEs it is desirable to learn solution operators: maps between spaces of functions X and Y; here X denotes a function space of inputs to the PDE (such as initial conditions, boundary data, coefficients) and Y denotes the function space of PDE solutions. Such a learned map can provide a cheap surrogate model to accelerate computations.

    The talk overviews the methodology being developed in this field of operator learning and describes analysis of the associated approximation theory. Applications are described to the learning of homogenized constitutive models in mechanics.
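In the simplest linear setting, "learning a solution operator" from data pairs can be sketched as fitting a matrix that maps discretized inputs f to discretized solutions u of a 1D Poisson problem; this is a deliberately naive stand-in for the neural-operator architectures discussed in the talk, with all names illustrative:

```python
import numpy as np

n = 64                                   # grid resolution
h = 1.0 / (n + 1)
# Discrete 1D Laplacian with homogeneous Dirichlet boundary conditions.
A = (np.diag(2 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

rng = np.random.default_rng(0)
def random_field(n_samples):
    """Smooth random input functions: a few low-frequency sine modes."""
    x = np.arange(1, n + 1) * h
    coef = rng.standard_normal((n_samples, 5))
    modes = np.sin(np.pi * np.outer(np.arange(1, 6), x))   # (5, n)
    return coef @ modes

F = random_field(200)                     # training inputs f
U = np.linalg.solve(A, F.T).T             # training outputs u = A^{-1} f
G = np.linalg.lstsq(F, U, rcond=None)[0]  # learned linear surrogate operator

F_test = random_field(20)                 # fresh inputs from the same distribution
U_test = np.linalg.solve(A, F_test.T).T
err = np.linalg.norm(F_test @ G - U_test) / np.linalg.norm(U_test)
```

The interesting questions of the field start where this sketch stops: nonlinear operators, discretization-invariant architectures, and approximation theory in function space.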

  • Luca Nenna (Université Paris-Saclay, délégation MATHERIALS)

    Introduction to optimal transport [Lecture notes]

    All the lectures of this short course will take place in the CERMICS seminar room (B211)

    • Wed 17 Jan from 9:30 to 11:30: Monge and Kantorovich problems
    • Thu 18 Jan from 9:00 to 11:00: Dual problem, optimality conditions, optimal transport maps [Slides]
    • Wed 24 Jan from 9:30 to 11:30: Entropic optimal transport and Sinkhorn algorithm [Slides]
    • Fri 2 Feb from 9:30 to 11:30: A glimpse of multi-marginal OT and applications [Slides]
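As a companion to the lecture on entropic optimal transport, here is a minimal sketch of the Sinkhorn algorithm between two discrete measures (names and parameters are illustrative):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iter=1000):
    """Sinkhorn iterations for entropic optimal transport.

    Alternately rescales the rows and columns of the Gibbs kernel
    K = exp(-C / eps) so that the coupling P = diag(u) K diag(v)
    matches the prescribed marginals mu and nu.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)              # match the column marginal
        u = mu / (K @ v)                # match the row marginal
    return u[:, None] * K * v[None, :]  # the entropic coupling

# Two discrete Gaussian-like measures on a grid, quadratic cost.
x = np.linspace(0.0, 1.0, 50)
mu = np.exp(-((x - 0.3) / 0.1)**2); mu /= mu.sum()
nu = np.exp(-((x - 0.7) / 0.1)**2); nu /= nu.sum()
C = (x[:, None] - x[None, :])**2
P = sinkhorn(mu, nu, C)
```

Smaller values of eps approximate the unregularized transport plan more closely but slow the convergence of the iterations.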
  • Emma Horton (University of Warwick), February 1st at 10:00, Room B211. [cancelled]

    Monte Carlo methods for branching processes

    Branching processes naturally arise as pertinent models in a variety of situations such as cell division, population dynamics and nuclear fission. For a wide class of branching processes, it is common that their first moment exhibits a Perron-Frobenius-type decomposition. That is, the first order asymptotic behaviour is described by a triple $(\lambda, \varphi, \eta)$, where $\lambda$ is the leading eigenvalue of the system and $\varphi$ and $\eta$ are the corresponding right eigenfunction and left eigenmeasure, respectively. Thus, obtaining good estimates of these quantities is imperative for understanding the long-time behaviour of these processes. In this talk, we discuss various Monte Carlo methods for estimating this triple. This talk is based on joint work with Alex Cox (University of Bath) and Denis Villemonais (Université de Lorraine).
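As a toy illustration of Monte Carlo estimation of the leading eigenvalue (a deliberately naive scheme, far simpler than the methods of the talk), consider a Galton-Watson process with Binomial(2, p) offspring, for which E[Z_n] = (2p)^n, so averaging the generation-n population over many independent runs and taking an n-th root estimates the eigenvalue:

```python
import numpy as np

def estimate_leading_eigenvalue(p=0.75, n_gen=12, n_rep=400, rng=None):
    """Naive Monte Carlo estimate of the leading eigenvalue of a
    Galton-Watson branching process with Binomial(2, p) offspring.

    Since E[Z_n] = m^n with m = 2p, averaging Z_n over independent
    replicas and taking the n-th root estimates m, the scalar analogue
    of the Perron-Frobenius eigenvalue lambda.
    """
    rng = np.random.default_rng(rng)
    totals = np.empty(n_rep)
    for r in range(n_rep):
        z = 1
        for _ in range(n_gen):
            z = rng.binomial(2 * z, p)   # total offspring of z individuals
            if z == 0:
                break                    # this population went extinct
        totals[r] = z
    return totals.mean() ** (1.0 / n_gen)

lam_hat = estimate_leading_eigenvalue(rng=0)   # true value: 2 * 0.75 = 1.5
```

The relative variance of this plain estimator grows with the horizon n, which is one motivation for the more refined particle methods discussed in the talk.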

  • François Charton (Meta AI), March 1st at 2pm, Room B211.

    Problem solving as a translation task

    Neural architectures designed for machine translation can be used to solve problems of mathematics, by considering that solving amounts to translating the problem, a sentence in some mathematical language, into its solution, another sentence in mathematical language. Presenting examples from symbolic and numerical mathematics, and theoretical physics, I show how such techniques can be applied to develop AI for Science, and help understand the inner workings of language models.

  • Tobias Grafke (University of Warwick), March 22nd at 10:00, Room B211.

    Quantifying extreme events in complex systems via sharp large deviations estimates

    Rare and extreme events are notoriously hard to handle in any complex stochastic system: They are simultaneously too rare to be reliably observable in experiments or numerics, but at the same time often too impactful to be ignored. Large deviation theory provides a classical way of dealing with events of extremely small probability, but generally only yields the exponential tail scaling of rare event probabilities. In this talk, I will discuss theory, as well as corresponding algorithms, that improve on this limitation, yielding sharp quantitative estimates of rare event probabilities from a single computation and without fitting parameters. The applicability of this method to high-dimensional real-world systems, for example coming from fluid dynamics or molecular dynamics, is discussed.
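The gap between exponential scaling and a sharp estimate is already visible in the simplest example, the Gaussian tail P(X > a): the crude large-deviation estimate exp(-a^2/2) misses the algebraic prefactor 1/(a sqrt(2 pi)), while an exponentially tilted Monte Carlo recovers the tiny probability accurately. This is a toy illustration, not the method of the talk; all names are illustrative.

```python
import math
import numpy as np

def tail_probability_estimates(a, n_samples=100000, rng=None):
    """Compare estimates of P(X > a) for X ~ N(0, 1):
    the exact value, the bare exponential large-deviation scaling, the
    sharp (prefactor-corrected) Laplace estimate, and an importance-
    sampling Monte Carlo with sampling distribution tilted to N(a, 1)."""
    rng = np.random.default_rng(rng)
    exact = 0.5 * math.erfc(a / math.sqrt(2))
    ldp = math.exp(-a**2 / 2)                       # exponential scaling only
    sharp = ldp / (a * math.sqrt(2 * math.pi))      # with the Laplace prefactor
    x = rng.normal(a, 1.0, n_samples)               # tilted samples
    weights = np.exp(-a * x + a**2 / 2)             # likelihood ratio vs N(0, 1)
    mc = np.mean((x > a) * weights)
    return exact, ldp, sharp, mc

exact, ldp, sharp, mc = tail_probability_estimates(5.0)
```

Naive Monte Carlo would need on the order of 1/P samples to even see one hit of the event; tilting the sampling distribution toward the rare event makes every sample contribute.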

  • Thomas Normand (Nantes Université), March 26th at 10:00, Room F103.

    Small eigenvalues and metastability for semiclassical Boltzmann operators

    We consider an inhomogeneous linear Boltzmann equation in a low-temperature regime, in the presence of an external force deriving from a potential. We provide a sharp description of the spectrum near 0 of the associated operator. This enables us to obtain precise information on the long-time behavior of the solutions, in particular some quantitative results on return to equilibrium and metastability. This is done by using and adapting constructions of "Gaussian quasimodes" (approximate eigenfunctions) involving tools from semiclassical microlocal analysis, which provide the desired sharp localization of the small eigenvalues.

  • Francis Nier (Université de Paris 13, délégation MATHERIALS)

    The Grushin problem and other spectral techniques based on the Schur complement

    • Part I: March 28th from 9:30AM to 11:30AM, Room B211 [Slides]
    • Part II: May 24th from 9:30AM to 11:30AM, Room B211 [Slides]

    The so-called Grushin problem method, whose name and notation were fixed in the early works of J. Sjöstrand (around 1970), is by now a very common tool for understanding and finely studying questions of spectral asymptotics. As a method based on the Schur complement formula, it is related to the Feshbach method popular in mathematical physics and to the Lyapunov-Schmidt techniques of dynamical systems and nonlinear PDEs. The Grushin-problem approach is stable under perturbation. Combined with pseudodifferential and semiclassical calculus and (micro)local decompositions, it provides a general approach for reducing multiscale spectral problems to computable models. In this sense, the approach often makes it possible to give a precise mathematical formalization of physicists' intuition.
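In its standard form (recalled here for context, following the usual presentation), the method embeds the operator $P - z$ into a larger invertible system: one seeks bordering operators $R_-$ and $R_+$ such that

```latex
\begin{pmatrix} P - z & R_- \\ R_+ & 0 \end{pmatrix}^{-1}
= \begin{pmatrix} E(z) & E_+(z) \\ E_-(z) & E_{-+}(z) \end{pmatrix},
```

so that $z$ lies in the spectrum of $P$ precisely when the effective operator $E_{-+}(z)$, typically finite-dimensional, fails to be invertible; multiscale spectral questions for $P$ are thereby reduced to the study of $E_{-+}$.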

    I will begin with an elementary presentation, one quick application of which is Fredholm theory and holomorphic Fredholm theory. I will take this opportunity to recall results on a few symptomatic examples of spectra of non-self-adjoint operators.

    I will present the Feshbach variant on a simple case in order to highlight the differences.

    To illustrate the interplay between semiclassical calculus and the Grushin problem, I will treat the simple case of the chemists' LCAO (Linear Combination of Atomic Orbitals) method for a two-well problem. I will then touch on the problem of shape resonances for the well-in-an-island problem.

    Finally, in preparation for a future talk, I will recall the formal Schur complement computation that connects the Langevin process to the overdamped Langevin process in the large-friction asymptotics.

  • Thomas Borsoni (LJLL, Sorbonne Université), Wednesday May 29th at 10:00, Room B211.

    Equivalence between classical and fermionic Boltzmann entropy functional inequalities

    In the context of the Boltzmann equation, functional inequalities relating entropy dissipation and relative entropy to equilibrium are fundamental to obtaining explicit rates of relaxation to equilibrium.

    In this talk, I present a method of transfer of inequalities, which establishes an (almost) equivalence, regarding entropy inequalities, between the classical and the fermionic Boltzmann cases. We thus obtain a large class of such inequalities in the fermionic case, and therefore, quantitative relaxation rates towards equilibrium for solutions to the (homogeneous cut-off hard potentials) Boltzmann-Fermi-Dirac equation.

  • Xujia Zhu (CentraleSupélec), May 31st at 14:00, Room B211.

    A Comprehensive Overview and Recent Advances in Surrogate Modeling for Stochastic Simulators

    Over the past decades, surrogate models have been extensively developed to facilitate uncertainty quantification analysis of complex systems. Significant efforts have been focused on surrogate modeling of deterministic models, where each set of input values corresponds to a unique output. In contrast, stochastic simulators yield different model responses when evaluated twice with the same input. Due to this stochastic nature, conventional surrogate models developed for deterministic models cannot be applied directly to the emulation of stochastic simulators.

    This talk will provide a comprehensive overview of various approaches to constructing stochastic emulators, encompassing methods in statistics, machine learning, and multidisciplinary fields. In the second part, the presentation will cover two surrogate models, the generalized lambda model and stochastic polynomial chaos expansions, developed to emulate the entire response distribution of stochastic simulators.
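To fix ideas, here is a deliberately simple sketch (illustrative only, and much cruder than the generalized lambda or stochastic polynomial chaos emulators discussed in the talk): fit the conditional mean and conditional variance of the response as polynomial functions of the input, then emulate the response distribution at a new input as a Gaussian.

```python
import numpy as np

def fit_gaussian_emulator(x, y, degree=5):
    """Toy emulator of a scalar stochastic simulator y(x).

    Fits a polynomial to the conditional mean, then a second polynomial
    to the squared residuals (a crude conditional-variance model), and
    emulates the response distribution at any new input as a Gaussian.
    """
    mean_coef = np.polyfit(x, y, degree)
    resid2 = (y - np.polyval(mean_coef, x)) ** 2
    var_coef = np.polyfit(x, resid2, degree)

    def sample(x_new, n, rng=None):
        rng = np.random.default_rng(rng)
        m = np.polyval(mean_coef, x_new)
        s = np.sqrt(max(np.polyval(var_coef, x_new), 1e-12))
        return rng.normal(m, s, n)

    return sample

# Stochastic simulator: mean sin(2*pi*x), input-dependent noise level.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 2000)
y = np.sin(2 * np.pi * x) + (0.1 + 0.3 * x) * rng.standard_normal(x.size)
sample = fit_gaussian_emulator(x, y)
draws = sample(0.25, 10000, rng=1)   # emulated response distribution at x = 0.25
```

A Gaussian conditional distribution is of course a strong assumption; the methods covered in the talk aim precisely at emulating the entire, possibly non-Gaussian, response distribution.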

  • Michel de Lara (CERMICS), June 4th from 13:00 to 17:00, Room B211.

    Tutorial: Causality in Decentralized Control

    In decentralized control, decision-makers (DMs) make moves at stages that are possibly not fixed in advance (as would be the case in classical sequential control), but that may depend on Nature, on a common clock and on other DMs’ moves. How do you express what a decision-maker knows when making a decision? In the absence of a common clock, how do you mathematically represent nonanticipativity, that is, the property that what a DM knows cannot depend on the moves of “future” DMs?

    In the seventies, H. S. Witsenhausen used agents, a product set and a product sigma-field to define the so-called intrinsic model in multi-agent stochastic control problems. In such a model, the information of an agent (DM) is represented by a subfield of the product sigma-field. Within this framework, Witsenhausen proposed a definition of causality.

    In this three hour tutorial, I will present the Witsenhausen intrinsic model. I will provide many illustrations, and discuss classification of information structures. I will outline the potential of the model for game theory with incomplete information.

  • Hugo Touchette (Stellenbosch University)

    Course on large deviation theory

    • Part I: Tuesday June 4th from 9:30AM to 11:30AM, Room B211: Large deviation theory in general
    • Part II: Tuesday June 11th from 9:30AM to 11:30AM, Room B211: Large deviations for Markov processes
  • Hugo Touchette (Stellenbosch University), Thursday June 13th at 10:00, Room B211

    Seminar: Large deviations of the Lévy stochastic area


Archive of past seminars before 2023: here

Organizers: Amaury Hayat, Urbain Vaes.