Optimal Transport (OT) has emerged as a foundational tool in modern machine learning, primarily due to its capacity to provide meaningful comparisons between probability distributions. One of the key advantages of OT lies in its dual nature: it not only provides a mathematically rigorous framework defining Wasserstein distances, but also constructs an optimal coupling (or transport plan) between distributions. This coupling reveals explicit correspondences between samples, enabling a wide range of applications. Despite the many successes of optimal transport in machine learning, and despite the many tools available to approximate the Wasserstein distance, computing OT plans remains a computationally challenging problem.
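As an illustration of what a transport plan is (not material from the talk itself), here is a minimal sketch for the special case of two equal-size point clouds with uniform weights, where the optimal coupling reduces to a one-to-one assignment and can be computed with SciPy's `linear_sum_assignment`:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Two small point clouds with equal, uniform weights; in this case the
# optimal transport plan is a permutation, solvable as an assignment problem.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
Y = np.array([[0.1, 0.1], [0.0, 1.2], [1.1, -0.1]])

# Cost matrix of squared Euclidean distances between all pairs
C = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)

# row[i] -> col[i] gives the explicit sample correspondences of the plan
row, col = linear_sum_assignment(C)
cost = C[row, col].sum()  # total transport cost under the optimal plan
```

The pairs `(row[i], col[i])` are exactly the sample-level correspondences the abstract refers to; for general (non-uniform, unequal-size) distributions the plan is a dense coupling matrix rather than a permutation.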
This talk will introduce the concept of optimal transport plans, explain their importance for tasks such as data alignment and domain adaptation, and review recent advances in their approximation, with a particular focus on slicing-based strategies.
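To give a flavor of the slicing idea mentioned above: both distributions are projected onto random one-dimensional directions, where OT has a closed form obtained by sorting, and the resulting 1D costs are averaged. A minimal NumPy sketch, assuming equal sample sizes with uniform weights (the function name and parameters are illustrative, not from the talk):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced 2-Wasserstein distance
    between two equal-size samples X and Y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Random directions, normalized to the unit sphere
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project onto each direction; sorting solves the 1D OT problem exactly
    px = np.sort(X @ thetas.T, axis=0)
    py = np.sort(Y @ thetas.T, axis=0)
    # Average the squared 1D costs over projections, then take the root
    return np.sqrt(np.mean((px - py) ** 2))
```

Note that each 1D projection costs only a sort, O(n log n), which is what makes slicing-based strategies attractive compared to solving the full OT problem.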