Decision, Algorithms and Geometry

Joint seminar of the Optimization team of CERMICS (Ecole des Ponts
ParisTech) and the Tropical team (INRIA Saclay & CMAP, Ecole
Polytechnique)


Forthcoming sessions


April 21st, Ecole Polytechnique

Lionel Pournin (LIPN, Paris XIII)

Algorithmic, combinatorial, and geometric aspects of linear
optimization

The simplex and interior point methods are currently the most
computationally successful algorithms for linear optimization. While
simplex methods follow an edge path, interior point methods follow the
central path. The complexity of these methods is closely related to the
combinatorial and geometric structures of the feasible region. Focusing
on the analysis of worst-case constructions that lead to computationally
challenging instances, the talk presents recently obtained lower bounds
on the largest possible diameter of lattice polytopes. These bounds are
established via lattice zonotopes, using tools from combinatorics and
number theory. This talk is based on joint work with Antoine Deza.

Past sessions

March 24th, 10:00, Ecole des Ponts, F102

Bernardo da Costa (UFRJ)

Risk Budgeting Portfolios from Simulations

Pension funds must pursue an investment strategy that is both
prudent and economically profitable. One strategy for constructing
such portfolios is given by Risk Budgeting Portfolios (RBP). Their
main characteristic is to diversify according to the contribution of
each asset to the total risk of the portfolio. They are therefore
ideal for pension funds subject to fairly rigid investment
constraints.

Most construction techniques for such portfolios start from the
(rather strong) assumption that asset returns are given by a
multivariate Gaussian random variable. After introducing the risk
measure concepts involved, we will present an optimization problem
whose solution yields the desired portfolio. We will then show some
stochastic optimization techniques that make it possible to construct
risk budgeting portfolios from an arbitrary distribution of returns.
For coherent risk measures, such as AVaR, the resulting model is
convex, which allows for a very efficient implementation, even for a
fairly large number of scenarios.
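
As an illustration of the convex model mentioned for coherent risk
measures, here is a minimal sketch of a standard scenario-based risk
budgeting formulation under AVaR, combining the Rockafellar-Uryasev
representation of AVaR with the usual logarithmic term enforcing the
risk budgets. This is a plausible reading of the setting, not the
speaker's implementation; all names and parameters are illustrative.

```python
import cvxpy as cp
import numpy as np

def risk_budgeting_avar(returns, budgets, alpha=0.95):
    """Sketch: risk budgeting portfolio from scenarios under AVaR.

    Solves  min_{y > 0, t}  AVaR_alpha(loss of y) - budgets . log(y)
    where AVaR is written via the Rockafellar-Uryasev formula, so the
    problem is convex; y is then normalized into portfolio weights.
    """
    m, n = returns.shape
    y = cp.Variable(n, pos=True)
    t = cp.Variable()
    losses = -returns @ y                                  # scenario losses
    avar = t + cp.sum(cp.pos(losses - t)) / ((1 - alpha) * m)
    cp.Problem(cp.Minimize(avar - budgets @ cp.log(y))).solve()
    return y.value / y.value.sum()

# usage on simulated Gaussian scenarios (any scenario set would do)
rng = np.random.default_rng(0)
w = risk_budgeting_avar(rng.normal(0.001, 0.02, size=(5000, 4)),
                        budgets=np.array([0.4, 0.3, 0.2, 0.1]))
```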

Antoine Oustry (LIX – CNRS)

ACOPF: Nonsmooth optimization to improve the computation of SDP bounds

Antoine Oustry, joint work with Claudia D’Ambrosio (LIX-CNRS), Leo Liberti (LIX-CNRS) and Manuel Ruiz (RTE).
The Alternating-Current Optimal Power Flow (ACOPF) problem models the optimization of power dispatch in an AC electrical network. In the quest for global optimality certificates, the semidefinite programming (SDP) relaxation of the ACOPF problem is a major asset, since it is known to produce tight lower bounds. To improve the scalability of the SDP relaxation, state-of-the-art approaches exploit the sparse structure of the power grid by using a clique decomposition technique. Despite this advance, the clique-based SDP relaxation remains difficult to solve for large-scale instances: numerical instabilities may arise when solving this convex optimization problem. These difficulties cause two issues: (i) inaccuracy and (ii) lack of certification.
We tackle both issues with an original approach. We reformulate the Lagrangian dual of the ACOPF, whose value equals the value of the SDP relaxation, as a concave maximization problem with the following properties: (i) it is unconstrained and (ii) its objective function is partially separable. Based on this new formulation, we show how to obtain a certified lower bound from any dual vector, whether or not it is feasible in the classical dual SDP problem. Our new formulation is solved with a tailored polyhedral bundle method exploiting the structure of the problem. We use this algorithm as a post-processing step, after solving the SDP relaxation with the state-of-the-art commercial interior point solver MOSEK. For many instances from the PGLib-OPF library, this post-processing significantly reduces the computed optimality gap.
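
For intuition about this post-processing step, here is a generic
proximal bundle iteration for maximizing a concave function from a
first-order oracle, written with cvxpy. This is a textbook sketch, not
the authors' tailored polyhedral bundle method; all names and
parameters are illustrative.

```python
import numpy as np
import cvxpy as cp

def proximal_bundle_max(oracle, x0, t=1.0, m=0.5, iters=50):
    """Maximize a concave f given an oracle x -> (f(x), supergradient).

    Maintains the cutting-plane model fhat(x) = min_i [f_i + g_i.(x - x_i)]
    and maximizes fhat(x) - ||x - center||^2 / (2 t) at each iteration.
    """
    center = np.asarray(x0, dtype=float)
    f_center, g = oracle(center)
    cuts = [(f_center, g, center.copy())]
    for _ in range(iters):
        x = cp.Variable(center.size)
        fhat = cp.Variable()
        model = [fhat <= fi + gi @ (x - xi) for fi, gi, xi in cuts]
        cp.Problem(cp.Maximize(fhat - cp.sum_squares(x - center) / (2 * t)),
                   model).solve()
        f_new, g_new = oracle(x.value)
        cuts.append((f_new, g_new, x.value))
        if f_new >= f_center + m * (fhat.value - f_center):  # serious step
            center, f_center = x.value, f_new
    return center, f_center

# usage: maximize the concave function -|x_1| - (x_2 - 1)^2
oracle = lambda x: (-abs(x[0]) - (x[1] - 1) ** 2,
                    np.array([-np.sign(x[0]), -2 * (x[1] - 1)]))
x_best, f_best = proximal_bundle_max(oracle, np.array([3.0, -2.0]))
```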

Huyen Pham (Paris-Diderot)

December 16th, 10:30, Ecole Polytechnique (Jean Lascoux room, ground floor)

Optimal bidding strategies for digital advertising

With the emergence of new online channels and information technology, digital advertising increasingly substitutes for traditional advertising, by offering companies the opportunity to
target the consumers/users who are genuinely interested in their products or services. We introduce a novel framework for the study of optimal bidding strategies associated with different types of advertising, namely commercial advertising, for triggering purchases or subscriptions, and social marketing, for alerting the population to unhealthy behaviours (anti-drug, vaccination, or road-safety campaigns). Our continuous-time models are based on a common framework encoding users’ online behaviours via their web browsing at random times, and on the targeted advertising auction mechanism widely used on the Internet, the objective being to efficiently diffuse advertising information through digital channels. Our main results provide semi-explicit formulas for the optimal value and bidding policy for each of these problems.
We show some sensitivity properties of the solution with respect to model parameters, and analyse how the different sources of digital information accessible to users, including social interactions, affect the optimal bid for advertising auctions. We also study how to efficiently combine targeted advertising and non-targeted advertising mechanisms. Finally, some classes of examples with fully explicit formulas are derived.
Joint work with Médéric Motte (Université de Paris).


Welington de Oliveira (ENSMP, Sophia-Antipolis) Slides

November 18th, 11:00, Ecole des Ponts (CERMICS Seminar room)

Sequential Difference-of-Convex Programming

A function is called DC if it is expressible as the difference of two convex functions. In the first part of this talk, we present a short tutorial on DC programming and highlight important facts about DC functions and optimality conditions.

Optimization methods for DC programs iteratively solve convex subproblems to define iterates. Although convex, depending on the problem’s structure, these subproblems are very often challenging and require specialized solvers. In the second part of this talk, we will focus on a recent class of algorithms that defines iterates as approximate critical points of significantly easier DC subproblems approximating the original one. Several algorithms can be designed from the presented approach, since there is considerable freedom in choosing these more accessible subproblems. In some cases, the resulting algorithm boils down to a straightforward process with iterates given in analytic form.
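
As background, the classical DCA scheme that such methods build upon
can be sketched on a toy problem where the convex subproblems have
closed-form solutions; the example below is illustrative and not taken
from the talk.

```python
import numpy as np

# Classical DCA on the toy DC function f(x) = x^4 - 2 x^2 + 1, split as
# f = g - h with g(x) = x^4 + 1 and h(x) = 2 x^2, both convex. Each step
# linearizes h at x_k and minimizes the convex model g(x) - h'(x_k) * x,
# which here has the analytic solution x_{k+1} = x_k ** (1/3).
def dca(x0, iters=30):
    x = x0
    for _ in range(iters):
        s = 4.0 * x           # s = h'(x_k)
        x = np.cbrt(s / 4.0)  # argmin_x g(x) - s * x, i.e. 4 x^3 = s
    return x                  # converges to a critical point (here +/- 1)

print(dca(0.5))  # ~ 1.0
```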

This talk is based on the following two papers:

W. de Oliveira. The ABC of DC Programming. Set-Valued and Variational Analysis, v. 28, pp. 679-706, 2020.

W. de Oliveira. Sequential Difference-of-Convex Programming. Journal of Optimization Theory and Applications, v.186 (3), pp. 936-959, 2020.


Adrien Lefranc (CERMICS, Efficacity)

November 18th, 14:00, Ecole des Ponts (CERMICS Seminar room)

Numerical methods in generalized convexity

We present potential applications of the so-called one-sided linear couplings,
a class that encompasses the Fenchel coupling of (standard) convex analysis.
We start by extending the mirror descent algorithm. Then, turning to the Capra (constant along primal rays) coupling as a particular case, we provide explicit formulations for the Capra subdifferential of the l0 pseudonorm.
Lastly, we discuss the difficulties that arise when trying to use Capra convexity to solve sparse optimization problems, and introduce some numerical perspectives.
All in all, we contribute an original viewpoint on sparse optimization, whose applications in statistics and signal processing have a huge impact across engineering fields.
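
For reference, the mirror descent algorithm that the talk extends, in
its classical form with the entropic mirror map on the probability
simplex, i.e. the standard Fenchel-coupling setting; this is a textbook
sketch, with illustrative names and step size.

```python
import numpy as np

def mirror_descent_simplex(grad, x0, eta=0.05, iters=200):
    """Mirror descent on the simplex with the entropy mirror map.

    The multiplicative update x <- x * exp(-eta * grad(x)), renormalized,
    is the mirror step associated with the Kullback-Leibler divergence.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * np.exp(-eta * grad(x))
        x /= x.sum()
    return x

# usage: minimize the quadratic x . Q x / 2 over the simplex
Q = np.diag([1.0, 2.0, 3.0])
x_min = mirror_descent_simplex(lambda x: Q @ x, np.ones(3) / 3)
```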

Yves Achdou (LJLL & INRIA, Paris)

October 14th, 10:30, Ecole des Ponts (CERMICS Seminar room)

Mean field games: an overview

  1. Mean field game theory was introduced in 2006 by J.-M. Lasry and P.-L. Lions to describe differential games (Nash equilibria) in the limit where the number of players tends to infinity. The theory has since grown considerably. It lies at the crossroads of several areas of applied mathematics: game theory, deterministic and stochastic optimal control, calculus of variations, optimal transport, analysis of PDEs, numerical methods. Applications are numerous: economics, the study of collective behaviours with rational anticipations, etc. We will attempt to give an overview of mean field game theory, emphasizing several aspects. The master equation is a nonlinear partial differential equation whose unknown is a function acting, in particular, on a probability measure; it is therefore a PDE posed in infinite dimension. On the one hand, it makes it possible to analyze the passage to the limit as the number of agents tends to infinity; on the other hand, it is unavoidable when the agents are subject to a common noise. When the state space is finite, the master equation is posed in finite, but possibly large, dimension, and numerical simulations are then sometimes possible.
  2. In the simplest cases, the characteristic trajectories of the master equation are obtained by solving systems of forward-backward PDEs posed in the state space. These systems couple a Hamilton-Jacobi equation, which yields the players’ strategies, with a Fokker-Planck equation, which characterizes the evolution of the distribution of states. The analysis of these systems of PDEs has been the subject of substantial research over the past fifteen years.
  3. Since explicit or semi-explicit solutions of the forward-backward systems are very rare, robust numerical methods are crucial if mean field games are to be used for predictive purposes. We will briefly present the finite difference schemes that are used, convergence results, as well as examples of simulations in models of traffic or pedestrian flows. Time permitting, we will discuss ongoing work that uses deep neural networks to numerically simulate master equations in high dimension.
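
For reference, the forward-backward system mentioned in item 2 takes,
in its standard second-order form, the following shape, with generic
notation not taken from the talk: the value function u solves a
backward Hamilton-Jacobi-Bellman equation while the distribution of
states m solves a forward Fokker-Planck equation,

```latex
\begin{aligned}
  -\partial_t u - \nu\,\Delta u + H(x, \nabla u) &= f(x, m(t)),\\
  \partial_t m - \nu\,\Delta m
    - \operatorname{div}\bigl(m\,\partial_p H(x, \nabla u)\bigr) &= 0,\\
  u(T, \cdot) = g(\cdot, m(T)), \qquad m(0) &= m_0.
\end{aligned}
```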

Guillaume Dalle (CERMICS)

October 14th, 14:30, Ecole des Ponts (CERMICS Seminar room)

Understanding railway delay propagation through latent variable models (slides)

Railway networks are very unstable systems: a single delay caused by an external incident can set off a chain reaction that ends up paralyzing an entire line. Unfortunately, we rarely have complete information about resource conflicts between trains, which makes delay propagation hard to predict. To overcome this issue, we introduce a new model based on a latent congestion variable, which lives on the network graph and evolves according to a Vector AutoRegressive process $X_t = \theta X_{t-1} + \varepsilon_t$.
Since we only get partial and noisy observations of $X$, the estimation of $\theta$ can be challenging: depending on the density of traffic and the locality of propagation phenomena, how much information can we recover? We provide two complementary answers to this question: an information-theoretic minimax lower bound, and the non-asymptotic analysis of a specific sparse estimator. Combining both of them yields an optimal convergence rate for this statistical problem, which is validated on simulated data and put to the test on real railway logs.
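
To make the setting concrete, here is a minimal simulation of such a
process together with the least-squares estimate of $\theta$ that full
observations would allow; the talk addresses the harder partially
observed, sparse setting, and the propagation matrix below is purely
illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 2000
# sparse propagation: a delay decays in place and spills onto a neighbour
theta = 0.3 * np.eye(n) + 0.1 * np.diag(np.ones(n - 1), k=1)
X = np.zeros((T, n))
for t in range(1, T):
    X[t] = theta @ X[t - 1] + 0.1 * rng.standard_normal(n)

# with full observations, theta is recovered by ordinary least squares
sol, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
theta_hat = sol.T  # estimate of the propagation matrix
```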

Gabriel Peyré (CNRS & Ecole Normale Supérieure, Paris)

July 2nd, 10:30, Ecole Polytechnique (CMAP Seminar room)
The seminar will also be accessible online

Scaling Optimal Transport for High dimensional Learning

Abstract:
Optimal transport (OT) has recently gained a lot of interest in machine learning. It is a natural tool to compare probability distributions in a geometrically faithful way. It finds applications in both supervised learning (using geometric loss functions) and unsupervised learning (to perform generative model fitting). OT is however plagued by the curse of dimensionality, since it might require a number of samples which grows exponentially with the dimension. In this talk, I will explain how to leverage entropic regularization methods to define computationally efficient loss functions, approximating OT with a better sample complexity. More information and references can be found on the website of our book “Computational Optimal Transport”.
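
The entropic approach is typically implemented with the Sinkhorn
algorithm; the following is a minimal dense sketch with illustrative
names (see the book cited above for the full treatment).

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=1000):
    """Entropic OT between histograms a and b for the cost matrix C.

    Alternately scales the Gibbs kernel K = exp(-C / eps) so that the
    resulting coupling has marginals a and b.
    """
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]  # entropic transport plan

# usage: two uniform histograms on a 1-D grid with quadratic cost
x = np.linspace(0.0, 1.0, 50)
C = (x[:, None] - x[None, :]) ** 2
P = sinkhorn(np.full(50, 1 / 50), np.full(50, 1 / 50), C)
```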

Eloise Berthier (INRIA & Ecole Normale Supérieure, Paris)

July 2nd, 14:00, Ecole Polytechnique (CMAP Seminar room)

Max-Plus Linear Approximations for Deterministic Continuous-State Markov Decision Processes

Abstract:
We consider deterministic continuous-state Markov decision processes
(MDPs). We apply a max-plus linear method to approximate the value function
with a specific dictionary of functions that leads to an adequate
state-discretization of the MDP. This is more efficient than a direct
discretization of the state space, typically intractable in high dimension. We
propose a simple strategy to adapt the discretization to a problem instance,
thus mitigating the curse of dimensionality. We provide numerical examples
showing that the method works well on simple MDPs.
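
To give the flavour of the method, here is a max-plus linear fit of a
value function over a small dictionary of concave quadratics; the
dictionary, the fitting rule and all names are illustrative, not those
of the talk.

```python
import numpy as np

# Max-plus linear approximation V(x) ~ max_i [ a_i + phi_i(x) ] over a
# dictionary of concave quadratics phi_i(x) = -c * (x - x_i)^2.
def maxplus_fit(V, xs, centers, c=10.0):
    # max-plus projection: a_i = min_x [ V(x) - phi_i(x) ], over samples xs
    phi = -c * (xs[:, None] - centers[None, :]) ** 2
    return np.min(V(xs)[:, None] - phi, axis=0)

def maxplus_eval(x, centers, a, c=10.0):
    return np.max(a - c * (x - centers) ** 2)

# usage: approximate the concave value function V(x) = -|x| on [-1, 1]
centers = np.linspace(-1.0, 1.0, 9)
a = maxplus_fit(lambda x: -np.abs(x), np.linspace(-1.0, 1.0, 200), centers)
```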

Maël Forcier (CERMICS,  Ecole des Ponts)

July 2nd, 14:45, Ecole Polytechnique (CMAP Seminar room)

Multistage Stochastic Linear Programming and Polyhedral Geometry

Abstract:
We show that the multistage stochastic linear problem (MSLP) with an arbitrary cost distribution is equivalent to an MSLP on a finite scenario tree.
Indeed, by studying the polyhedral structure of MSLP, we show that the cost distribution can be replaced by a discrete cost distribution, obtained by taking the expected value of the cost on each cone of a normal fan.
In particular, we show that the expected cost-to-go functions are polyhedral and affine on the cells of a chamber complex, which is independent
of the cost distribution. This leads to new complexity results, showing that MSLP is fixed-parameter tractable.
A dual approach in terms of fiber polyhedra is finally presented to derive algorithms for two-time-scale MSLP.


Kazuo Murota (The Institute of Statistical Mathematics, Tokyo)

May 20th, 10:00, Online

Multiple exchange property of gross substitutes valuations
with a proof by discrete convex analysis (Slides)

Abstract:
Discrete convex analysis (DCA) offers a general framework of discrete optimization,
combining the ideas from matroid theory and convex analysis.  It has found applications
in many different areas including operations research, mathematical economics, and game
theory. The interaction between DCA and mathematical economics was initiated by
Danilov, Koshevoy, and Murota (2001), and accelerated by the crucial observation of Fujishige
and Yang (2003) that M-natural-concavity is equivalent to the gross substitutes (GS) property
of Kelso and Crawford (1982).

In this talk we show how an old question in economics was settled with
the DCA machinery. More concretely, we explain how the equivalence of the gross substitutes
condition to the strong no complementarities (SNC) condition of Gul and Stacchetti
(1999) can be proved with the use of the Fenchel-type duality theorem and the conjugacy
theorem in DCA.
The SNC condition expresses the multiple exchange property of a set function
f: for any two subsets X and Y and any subset I of X-Y, there exists a subset J
of Y-X such that f((X-I)+J) + f((Y-J)+I) is not smaller than f(X) + f(Y).
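
This condition can be checked by brute force on small ground sets,
directly from the definition above; a plain illustrative sketch,
exponential in the size of the ground set.

```python
from itertools import chain, combinations

def subsets(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def has_snc(f, ground):
    """Brute-force test of the strong no complementarities condition.

    For all X, Y and every subset I of X - Y, some subset J of Y - X
    must satisfy f((X - I) | J) + f((Y - J) | I) >= f(X) + f(Y).
    """
    for X in map(frozenset, subsets(ground)):
        for Y in map(frozenset, subsets(ground)):
            for I in map(frozenset, subsets(X - Y)):
                if not any(
                    f((X - I) | J) + f((Y - J) | I) >= f(X) + f(Y)
                    for J in map(frozenset, subsets(Y - X))
                ):
                    return False
    return True

# usage: an additive valuation is gross substitutes, hence satisfies SNC
print(has_snc(len, {1, 2, 3}))  # True
```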

This talk is intended to offer a glimpse at a technical interaction
between DCA and economics.
No preliminary knowledge of DCA is required.

Guillaume Vigeral (Ceremade, Paris Dauphine)

April 8th, 10:30, Online

Structure of the sets of Nash equilibria of finite games; applications
to the complexity of some decision problems in game theory (Slides)

Abstract:
The set of Nash equilibrium payoffs of a finite game is always nonempty,
compact and semialgebraic. We show that, for 3 players or more,
the converse also holds: given a nonempty, compact and semialgebraic
subset E of R^N, one can construct a finite N-player game such
that E is its set of equilibrium payoffs. Related results also hold
when one considers only games with integral payoffs, or when the focus is
on equilibria instead of equilibrium payoffs.
We apply this result to understand the complexity class of some natural
decision problems on finite games.