Schedule.
Monday
9:00a – 10:30a: Sébastien Bubeck
11:00a – 12:30p: Sébastien Bubeck
3:00p – 4:30p: Yinyu Ye
Tuesday
9:00a – 10:30a: Sébastien Bubeck
11:00a – 12:30p: Yinyu Ye
2:30p – 4:00p: Yinyu Ye
Wednesday
9:00a – 10:30a: Sébastien Bubeck
11:00a – 12:30p: Yinyu Ye
2:30p – 10:15p: Social Event
Thursday
9:00a – 10:30a: Andrea Lodi
11:00a – 12:30p: Yinyu Ye
5:00p – 6:40p: Talks by the students
8:00p: Conference dinner
Friday
9:00a – 10:30a: Andrea Lodi
11:00a – 12:30p: Andrea Lodi
Program.
Sébastien Bubeck (Microsoft Research): Introduction to statistical learning theory (4 lectures of 1:30)
In this mini-course we will cover the basics of statistical learning theory, from the 80s to today. A rough outline is as follows:
- Control of generalization error via (i) Vapnik-Chervonenkis dimension, (ii) Rademacher complexity, (iii) stability arguments, and (iv) regularization.
- When optimization comes into the picture: stochastic gradient descent and convex learning.
- Margin theory and Boosting. Kernel machines and some recent attempts to make them computationally efficient.
- Neural networks: what we know and what we don’t know about them.
(Introduction slides)
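One item in the outline above, stochastic gradient descent for convex learning, can be illustrated with a minimal sketch. This is not material from the lectures; it is a hypothetical toy example (function name, data, and step-size constant are all illustrative) showing SGD with decreasing step sizes on a one-dimensional least-squares objective:

```python
import random

# Toy stochastic gradient descent for a 1-D least-squares problem:
# minimize over w the average of (w*x_i - y_i)^2 on a small synthetic sample.
# With decreasing step sizes eta_t = c / sqrt(t), SGD converges for convex
# objectives; here the data are noiseless, so w approaches the exact minimizer.

def sgd_least_squares(data, steps=2000, c=0.1, seed=0):
    rng = random.Random(seed)
    w = 0.0
    for t in range(1, steps + 1):
        x, y = rng.choice(data)        # sample one example: stochastic gradient
        grad = 2.0 * (w * x - y) * x   # gradient of (w*x - y)^2 at the current w
        w -= (c / t ** 0.5) * grad     # decreasing step size c / sqrt(t)
    return w

# Synthetic data generated from y = 3*x, so the minimizer is w = 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
w_hat = sgd_least_squares(data)
```

In classical convex-learning analyses one usually reports the averaged iterate rather than the last one; the last iterate is kept here only to keep the sketch short.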
Andrea Lodi (Polytechnique Montréal) (3 lectures of 1:30)
In this series of lectures, I review a few Big Data applications that I personally like, and I explain my point of view on the subject as a Mathematical Optimizer, especially one concerned with discrete (integer) decisions. I advocate a tight integration of Machine Learning and Mathematical Optimization (among others) to deal with the challenges of decision-making in Data Science. For such an integration, I try to answer three questions:
- What can optimization do for machine learning?
- What can machine learning do for optimization?
- Which new applications can be solved by the combination of machine learning and optimization?
Yinyu Ye (Stanford University) (5 lectures of 1:30)
We present a few decision/optimization problems driven by online, uncertain and massive data. In this short course, we show how analytical decision models and numerical algorithms can be used to achieve solution efficiency and optimality. Topics include:
- Dynamic Pricing and Online Combinatorial Auction using online linear programming technologies (project)
- Distributionally robust models and algorithms for stochastic optimization and learning (project)
- Complexity issues and sampling strategies in various convex and nonconvex optimization algorithms (project)
- Graph Realization, Sensor Network Localization and Dimension Reduction using Semidefinite Programming technologies (project)
- Service location/partition based on geographic data, where we provide a fast algorithm to partition a convex region into multiple sub-regions such that each piece is convex and has two density measurements equalized.
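The first topic above, dynamic pricing via online linear programming, can be sketched with a heavily simplified toy. This is not the course's actual algorithm; it is a hypothetical one-resource illustration of the "learn a dual price from an initial batch, then use it as an accept/reject threshold" idea (function name, threshold rule, and parameters are all illustrative):

```python
# Toy one-resource online allocation in the spirit of online linear programming.
# Requests arrive one by one, each with a reward and unit resource use. We
# observe the first fraction of requests without accepting any, set a dual
# price p from that prefix, then accept a later request iff its reward is at
# least p and capacity remains. (Hypothetical simplification for illustration.)

def online_allocate(rewards, capacity, learn_frac=0.2):
    n = len(rewards)
    k = max(1, int(learn_frac * n))   # size of the observation-only prefix
    # Dual price: the reward threshold that would spend a proportional share
    # of the capacity on the observed prefix.
    budget_prefix = max(1, int(capacity * k / n))
    p = sorted(rewards[:k], reverse=True)[min(budget_prefix, k) - 1]
    accepted, used = [], 0
    for t, r in enumerate(rewards):
        if t >= k and used < capacity and r >= p:
            accepted.append(r)        # accept: reward clears the dual price
            used += 1                 # consume one unit of the resource
    return accepted

# Ten requests, capacity for three: the prefix [5, 1] sets the price p = 5,
# and the first three later requests with reward >= 5 are accepted.
chosen = online_allocate([5, 1, 9, 3, 7, 2, 8, 6, 4, 10], capacity=3)
```

In the actual online LP literature the dual price comes from solving a scaled LP on the prefix (and can be re-estimated as more data arrive); the sort-based threshold here is just the one-resource special case.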
Talks by the students.
- 5:00p – 5:20p, Mathieu Tanneau, “A Reinforcement-Learning Framework for Symmetric Reordering in Sparse Cholesky Factorization”
- 5:20p – 5:40p, Thi Thuy Nga Nguyen, “Scheduling for moving vehicles in a cellular network”
- 5:40p – 6:00p, Cristina Molero-Río, “Optimal Randomized Classification Trees”
- 6:00p – 6:20p, Oliver Hinder, “A polynomial time interior point method for problems with nonconvex constraints”
- 6:20p – 6:40p, Antoine Prouvost, “On the methodology of machine learning for combinatorial optimization”