Applied Mathematics Seminar

Upcoming seminars

November 20,  2:00pm   François Golse              B211   Colloquium
November 28, 10:00am   Maria Laura Delle Monache   B211   Seminar
December 17, 10:00am   Feliks Nüske                B211   Seminar
December 19, 10:00am   Richard Kraaij              B211   Seminar
January 9,   10:00am   Raphaël Barboni             TBD    Seminar
January 16,  10:30am   Guillaume Chennetier        B211   Seminar
March 6,     10:00am   Eloi Tanguy                 TBD    Seminar
  • Maria Laura Delle Monache (UC Berkeley), Thursday November 28th, 10:00am, Room B211.

    Control Strategies for Mixed Autonomy Traffic: theory, simulations and real-life experiments.

    The recent and rapid emergence of disruptive technologies is dramatically changing how traffic is monitored and managed in our cities. These technologies will contribute to generating new knowledge and capabilities for designing and implementing innovative transport policies. In this talk, we will show how they can be exploited to improve traffic management. We will focus on control strategies for traffic systems aided by small fleets of connected and automated vehicles immersed in human-driven traffic flow. We present a class of coupled PDE-ODE models describing the interaction of autonomous vehicles (AVs) with the surrounding traffic. The model consists of a scalar conservation law for the main traffic flow, coupled with ordinary differential equations describing the possibly interacting AV trajectories. We will demonstrate analytically and numerically how the proposed control theory can improve traffic performance. Finally, we will present the MegaVanderTest, a test involving 100 connected and automated vehicles (CAVs); to our knowledge, it is the field test that achieved the largest concentration of CAVs collaboratively controlling traffic on a single stretch of freeway.
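
    A schematic form of such a coupled PDE-ODE system (notation chosen here for illustration; the precise model in the talk may differ) is

    \[
    \partial_t \rho + \partial_x f(\rho) = 0, \qquad
    \dot y_i(t) = \min\bigl\{ u_i(t),\; v\bigl(\rho(t, y_i(t)+)\bigr) \bigr\},
    \]

    where \rho(t, x) is the traffic density, f the flux of the conservation law, y_i the position of the i-th AV, u_i(t) its controlled desired speed, and v the speed law of the surrounding flow; the AVs act back on the PDE through moving flux constraints at the points x = y_i(t).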

  • Feliks Nüske (Max Planck Institute), Tuesday December 17th, 10:00am, Room B211.

    TBD

    TBD

  • Richard Kraaij (TU Delft), Thursday December 19th, 10:00am, Room B211.

    TBD

    TBD

  • Raphaël Barboni (ENS Ulm), Thursday January 9th, 10:00am, Room TBD.

    Understanding the training of infinitely deep and wide ResNets with Conditional Optimal Transport

    We study the convergence of gradient flow for the training of deep neural networks. While Residual Neural Networks (ResNets) are a popular example of very deep architectures, their training constitutes a challenging optimization problem, notably due to the non-convexity and non-coercivity of the objective. Yet, in applications, these tasks are successfully solved by simple optimization algorithms such as gradient descent. To better understand this phenomenon, we focus here on a “mean-field” model of an infinitely deep and arbitrarily wide ResNet, parameterized by probability measures over the product set of layers and parameters, with a constant marginal on the set of layers. Indeed, in the case of shallow neural networks, mean-field models have been shown to benefit from simplified loss landscapes and good theoretical guarantees when trained with gradient flow for the Wasserstein metric on the set of probability measures. Motivated by this approach, we propose to train our model with gradient flow with respect to the conditional Optimal Transport distance: a restriction of the classical Wasserstein distance which enforces our marginal condition. We first show the well-posedness of the gradient flow equation and then its local convergence around well-chosen initializations. This is joint work with G. Peyré and F.-X. Vialard.
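
    In schematic terms (our notation, for illustration only), the forward pass of such an infinitely deep, arbitrarily wide ResNet reads

    \[
    \dot X_s = \int_{\Theta} f(X_s, \theta)\, \mathrm{d}\mu_s(\theta), \qquad s \in [0, 1],
    \]

    where s plays the role of depth, \mu is a probability measure on [0,1] \times \Theta whose first marginal is constrained to be uniform (the “constant marginal on the set of layers”), and (\mu_s) is its disintegration over layers; the conditional Optimal Transport distance then transports mass only in the parameter variable \theta, separately at each layer s.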

  • Guillaume Chennetier (CERMICS), Thursday January 16th, 10:30am, Room B211.

    TBD

    TBD

  • Eloi Tanguy (Université Paris-Cité), Thursday March 6th, 10:00am, Room TBD.

    TBD

    TBD


Past seminars (2024-2025)

  • Borjan Geshkovski (Inria MEGAVOLT), October 16th, 3:00pm, Room B211.

    Dynamic metastability in the self-attention model

    The pure self-attention model is a simplification of the celebrated Transformer architecture that neglects the multi-layer perceptron layers and includes only a single inverse-temperature parameter. The model exhibits a qualitative behavior across layers remarkably similar to that observed empirically in a pre-trained Transformer. Viewing layers as a time variable, the self-attention model can be interpreted as an interacting particle system on the unit sphere. We show that when the temperature is sufficiently high, all particles collapse into a single cluster exponentially fast. On the other hand, when the temperature falls below a certain threshold, we show that although the particles eventually collapse into a single cluster, the required time is at least exponentially long. This is a manifestation of dynamic metastability: particles remain trapped in a “slow manifold” consisting of several clusters for exponentially long periods of time. Our proofs make use of the fact that the self-attention model can be written as the gradient flow of a specific interaction energy functional previously found in combinatorics.
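
    As a rough sketch (our notation), the model evolves tokens x_1, ..., x_n on the unit sphere according to

    \[
    \dot x_i = P_{x_i^{\perp}}\!\left( \frac{1}{Z_i} \sum_{j=1}^{n} e^{\beta \langle x_i, x_j \rangle}\, x_j \right), \qquad
    Z_i = \sum_{k=1}^{n} e^{\beta \langle x_i, x_k \rangle},
    \]

    where \beta is the inverse temperature and P_{x^{\perp}} denotes projection onto the tangent space of the sphere at x; small \beta (high temperature) gives fast collapse, while large \beta produces the metastable clustered regime described above.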

  • Olivier Zahm (Inria AIRSEA), October 28th, 10:00am, Room B211.

    Preconditioning Langevin dynamics via optimal Riemannian Poincaré inequalities

    The Poincaré inequality is a key property in the convergence analysis of many practical algorithms, including MCMC samplers, dimension reduction methods, etc. In this talk, we introduce a Riemannian version of the Poincaré inequality in which a positive definite weighting matrix field (i.e. a Riemannian metric) is introduced to improve the Poincaré constant, and therefore the convergence speed of the resulting preconditioned Langevin dynamics. By leveraging the notion of *moment measure*, we prove the existence of an optimal metric which yields a Poincaré constant of 1. This optimal metric turns out to be a *Stein kernel*, offering a novel perspective on these complex but central mathematical objects, which are hard to obtain in practice. We also present an implementable optimization algorithm to numerically obtain the optimal metric. The method’s effectiveness is illustrated through simple but non-trivial examples which reveal rather complex solutions. Lastly, we show how to design efficient Langevin-based sampling schemes which enable rapid jumps across the various modes and tails of the measure to be sampled.
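
    Schematically (our notation, for illustration), the Riemannian Poincaré inequality for a target measure \pi weights the gradient by a metric field M(x) \succ 0,

    \[
    \operatorname{Var}_\pi(h) \;\le\; C \int \nabla h(x)^{\top} M(x)\, \nabla h(x)\, \mathrm{d}\pi(x),
    \]

    and the correspondingly preconditioned Langevin dynamics takes the standard form

    \[
    \mathrm{d}X_t = \bigl( M(X_t)\, \nabla \log \pi(X_t) + \nabla\!\cdot\! M(X_t) \bigr)\, \mathrm{d}t + \sqrt{2 M(X_t)}\, \mathrm{d}W_t,
    \]

    whose convergence to \pi is faster the smaller the constant C; the optimal metric mentioned above achieves C = 1.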

  • Theron Guo (MIT, visiting MATHERIALS in October and November), November 7th, 10:00am, Room B211.

    Model order reduction for computational homogenization in nonlinear solid mechanics

    Computational homogenization has become an indispensable method for establishing the effective properties of microstructures and efficiently solving multiscale problems in solid mechanics. However, the resulting two-scale problem remains computationally expensive for nonlinear problems and is typically infeasible in multi-query contexts such as optimization or uncertainty quantification. To alleviate the high computational costs, model order reduction techniques can be used. In this talk, I will introduce different variants of computational homogenization and illustrate the effectiveness of projection-based model order reduction for two of them.
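
    As a generic sketch of projection-based reduction (our notation; the talk’s specific variants may differ): the nonlinear microscale equations r(u; \bar\varepsilon) = 0 on the unit cell, driven by a macroscopic strain \bar\varepsilon, are Galerkin-projected onto a low-dimensional basis,

    \[
    u \approx V u_r, \qquad V^{\top} r(V u_r;\, \bar\varepsilon) = 0, \qquad V \in \mathbb{R}^{N \times n},\; n \ll N,
    \]

    with the columns of V typically obtained from snapshots of full solves (e.g. by proper orthogonal decomposition) and effective stresses recovered by volume averaging over the cell.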


Archive of past seminars before 2024: here

Organizers: Loucas Pillaud-Vivien, Urbain Vaes.