The OATMIL project

Bringing Optimal Transport and Machine Learning Together

Objectives and scientific standpoint

The OATMIL project will propose novel concepts, methodologies, and tools for exploiting large data collections. This will result from a cross-fertilization of fundamental tools and ideas from optimal transport (OT) and machine learning (ML).

Optimal transport

Introduced by Gaspard Monge at the end of the 18th century, optimal transport (OT) reappeared in the work of Kantorovich and has recently found surprising new developments in a wide range of fields, including computational fluid mechanics, color transfer and image warping in image processing, interpolation schemes and distance computation on meshes in computer graphics, and economics, via matching and equilibrium problems. Fundamentally, OT provides a means of defining distances between probability measures defined on potentially high-dimensional metric spaces. These distances, which have received several names in the literature (Wasserstein, Monge-Kantorovich, or Earth Mover's distances), have strong and important properties: i) they can be evaluated when only empirical samples of the distributions are observed, and do not require estimating parametric or semi-parametric distributions as a preprocessing step; ii) they exploit the geometry of the underlying metric space and provide meaningful distances even when the supports of the distributions do not overlap.

Figure: Optimal transport between two probability measures.
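To make the properties above concrete, here is a minimal sketch of the OT distance between two empirical measures, using the POT toolbox developed within this project (see "Codes and demos" below); the sample sizes, the dimension, and the shift between the two point clouds are arbitrary illustration choices.

```python
# Minimal sketch: exact OT distance between two empirical measures,
# assuming the POT toolbox is installed. Data choices are illustrative.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.RandomState(0)
xs = rng.randn(50, 2)             # 50 samples from a source distribution
xt = rng.randn(60, 2) + [4, 4]    # 60 samples from a shifted target

a = np.ones(50) / 50              # uniform weights: empirical measures
b = np.ones(60) / 60
M = ot.dist(xs, xt)               # pairwise squared Euclidean ground cost

# Exact OT: the squared 2-Wasserstein distance between the two measures
print(ot.emd2(a, b, M))
```

Note that the two supports are disjoint here: a density-based divergence would be uninformative, while the OT distance still reflects the geometric shift between the two clouds (property ii).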

ML exploits the empirical distribution of some training data to adapt the parameters of certain algorithms (classifiers, regressors, etc.) and perform prescribed tasks. While computing distances between (empirical) probability distributions is very appealing for ML, the use of OT distances is still in its infancy, mainly due to the high computational cost of solving for the optimal transportation plan. Recently, new computing strategies have emerged (such as entropy-regularized or Sinkhorn transport) that turn OT distances into more tractable tools. ML applications are emerging for complex problems such as domain adaptation, semi-supervised learning, factorial analysis, or as a data-fitting term for multiclass classifiers.
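As an illustration of this tractability, the entropy-regularized problem can be solved in a few lines with POT's Sinkhorn solver; the regularization strength `reg` below is an arbitrary example value that trades approximation accuracy against speed.

```python
# Sketch of entropy-regularized (Sinkhorn) OT with POT; the value of
# `reg` is an arbitrary illustrative choice.
import numpy as np
import ot

rng = np.random.RandomState(0)
xs = rng.randn(500, 2)            # source samples
xt = rng.randn(500, 2) + [4, 4]   # target samples
a = np.ones(500) / 500            # uniform weights
b = np.ones(500) / 500
M = ot.dist(xs, xt)
M /= M.max()                      # normalize the cost for numerical stability

reg = 1e-1                        # entropic regularization strength
G = ot.sinkhorn(a, b, M, reg)     # regularized transport plan
loss = ot.sinkhorn2(a, b, M, reg) # regularized OT cost
```

The regularization makes the problem strongly convex and solvable by simple matrix-scaling iterations, at the price of a bias that grows with `reg`.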

Figure: Example of domain adaptation with optimal transport (JDOT [Courty et al. (2017)]).

Main objectives and research hypothesis

The main objective of OATMIL is to develop new techniques for large-scale machine learning, encompassing adaptability, scalability, and robustness, through a cross-fertilization of ideas coming from OT and ML. This cross-fertilization raises two complementary scientific challenges: bringing OT to ML and bringing ML to OT, which we discuss in more detail below.

  1. Bringing OT to ML: our vision is that new, innovative machine learning methodologies can be developed by leveraging fundamental advances in OT. We will primarily focus on three subjects of interest that relate to fundamental problems in machine learning:
    • domain adaptation and transfer learning, where OT will be used to measure the discrepancy between (possibly multiple) domains or tasks. New learning methods can then be devised that efficiently exploit this measure to perform adaptation, with a specific focus on heterogeneous domain adaptation (see the first code sketch after this list),
    • distances over structured data, where OT will be used to measure similarities between graphs, trees, or time series, enabling new distances between structured data. The key problem here is to find efficient methods and algorithms that incorporate the structure into the geometry of the transport, and
    • factorization and coding, where OT is used as a data-fitting loss function in estimation problems whose data can be interpreted as distributions (e.g., power spectral densities or other spectral data).
  2. Bringing ML to OT: we strongly believe that advances in large-scale ML optimization can help address well-known numerical bottlenecks of OT. We will design efficient numerical tools to compute OT plans, possibly associated with new regularization strategies that fit machine learning needs. Our approach will rest on three complementary pillars:
    • dimension reduction with sketching strategies, compressing the empirical distribution of large-scale collections into a few nonlinear moments (see the second code sketch after this list). Recent work has established PAC (probably approximately correct) bounds for certain learning tasks, and a particular challenge will be to extend such bounds to the OT problems considered here.
    • dedicated optimization schemes, devised and adapted to the specificities of the problem: we plan to investigate fast stochastic/incremental optimization methods that efficiently handle the geometric structure of bi-stochastic constraints, as well as cutting-plane-style algorithms that relax constraints and gradually tighten them.
    • stochastic and online optimization strategies, investigated for large OT problems, which will notably allow OT to serve as a new building block for deep learning architectures. More specifically, we plan to learn explicit transport maps instead of relying on probabilistic couplings.
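To give a flavor of the first challenge, here is a minimal domain-adaptation sketch built on POT's `ot.da` module; the synthetic data, toy labels, and regularization value are illustrative assumptions, not the project's methods or results. The second estimator also hints at the explicit transport maps mentioned in the last pillar above.

```python
# Hedged sketch of OT-based domain adaptation with POT's ot.da module;
# all data and parameter values below are illustrative only.
import numpy as np
import ot

rng = np.random.RandomState(42)
Xs = rng.randn(100, 2)            # labeled source samples
ys = (Xs[:, 0] > 0).astype(int)   # toy binary source labels
Xt = rng.randn(100, 2) + [3, 1]   # unlabeled, shifted target samples

# Coupling-based adaptation: estimate an entropic OT plan between the
# domains, then move the source samples by barycentric projection.
da = ot.da.SinkhornTransport(reg_e=1e-1)
da.fit(Xs=Xs, ys=ys, Xt=Xt)
Xs_mapped = da.transform(Xs=Xs)

# Explicit-map alternative: jointly estimate the coupling and a
# parametric map that generalizes to samples unseen during fitting.
da_map = ot.da.MappingTransport(kernel="linear")
da_map.fit(Xs=Xs, Xt=Xt)
Xs_mapped2 = da_map.transform(Xs=Xs)

# A classifier trained on (Xs_mapped, ys) can then be applied to Xt.
```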
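For the sketching pillar, here is a toy illustration of compressing an empirical distribution into a few nonlinear moments (random Fourier moments, in the spirit of compressive learning); the sketch size `m` and the frequency distribution are hypothetical illustration parameters.

```python
# Toy "sketch" of an empirical distribution: m nonlinear (random
# Fourier) moments; m and the frequency scale are arbitrary choices.
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(100000, 2)   # a large empirical collection
m = 64                     # number of nonlinear moments kept
W = rng.randn(2, m)        # random frequency vectors

# The sketch averages complex exponential moments over all samples;
# its size m is independent of the collection size.
z = np.exp(1j * (X @ W)).mean(axis=0)
print(z.shape)             # (64,)
```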

Codes and demos

The Python Optimal Transport (POT) toolbox is being developed as part of this project; see the GitHub repository.


References

  • [Courty et al. (2017)] N. Courty, R. Flamary, A. Habrard, and A. Rakotomamonjy, 'Joint Distribution Optimal Transportation for Domain Adaptation', arXiv:1705.08848, accepted for publication at NIPS 2017.

Project Consortium

List of the partners of the OATMIL project:
  • Laboratoire Lagrange
  • Technicolor
  • IRISA (Obelix/Panama)
  • Université de Bretagne Sud
  • INRIA (Panama)
  • Université Côte d'Azur
  • Observatoire de la Côte d'Azur

The OATMIL project is funded by the Agence Nationale de la Recherche.




Please send all comments and questions about this project to Nicolas Courty.