Laplace Reading Group of the ENS

Welcome to the “Laplace” reading group, a series of seminars and informal discussions organized by the CFM-ENS Chair “Modèles et Sciences des Données”.

In these meetings, researchers can give presentations about their current research interests or discuss interesting papers. As with the Data Science Colloquium, the goal is to initiate discussions between researchers from different fields who share a common interest in large-scale or high-dimensional data. You can find the list of upcoming seminars below, as well as the list of past reading groups.

All are welcome to attend. Feel free to propose topics that you would like to see discussed, or work that you would like to present (contact details at the bottom).

Next seminars

January 29th, 2020, 11h00-12h00, room L382/L384, 24 rue Lhomond.
Stefano Sarao Mannelli (IPhT, CEA Saclay)
Title: Tutorial on Gradient Descent Dynamics
Abstract: Descent algorithms, such as (noisy) gradient descent (GD), are ubiquitous in optimization. While the behaviour of GD on a convex cost function is well understood, the same cannot be said of the non-convex setting. In fact, GD often defies common intuition and finds deep minima in complex landscapes. So far, theoretical understanding has trudged along behind the empirical successes of descent algorithms. In the context of machine learning, progress was made in the seminal works of Saad and Solla in the 90s. These authors focused on the online setting, where the learning dynamics can count on new examples at every update, avoiding issues of correlation between updates. Although progress in this direction has continued over the years, we are still far from a complete understanding. In particular, the problem of full-batch (noisy) GD, where the dataset is fixed and correlations appear, remains open for most problems. In this tutorial I will focus on full-batch (noisy) GD. I will present techniques from physics that allow one to write a set of dynamical equations characterizing the evolution of Langevin dynamics and GD. Finally, I will show how these techniques have been applied to neuroscience and machine learning, and describe their limitations.
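For readers unfamiliar with the setup, the following is a minimal sketch of the full-batch (noisy) GD the abstract refers to, implemented as discretized Langevin dynamics on a fixed dataset. The least-squares loss, step size, temperature, and all names below are illustrative assumptions, not material from the talk; setting the temperature to zero recovers plain full-batch GD.

import numpy as np

def noisy_gradient_descent(grad, x0, lr=0.1, temperature=0.01, n_steps=500, rng=None):
    """Discretized Langevin dynamics: x_{t+1} = x_t - lr * grad(x_t) + sqrt(2*lr*T) * xi_t."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    trajectory = [x.copy()]
    for _ in range(n_steps):
        x -= lr * grad(x)                                # deterministic full-batch gradient step
        x += np.sqrt(2 * lr * temperature) * rng.standard_normal(x.shape)  # Langevin noise
        trajectory.append(x.copy())
    return np.array(trajectory)

# Illustrative example: least-squares loss L(w) = ||Xw - y||^2 / (2n) on a fixed dataset,
# so the same samples enter every update and correlations between updates appear.
rng = np.random.default_rng(0)
n, d = 200, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true

grad = lambda w: X.T @ (X @ w - y) / n                   # full-batch gradient
traj = noisy_gradient_descent(grad, x0=np.zeros(d), rng=rng)
print("final loss:", 0.5 * np.mean((X @ traj[-1] - y) ** 2))

The tutorial's dynamical-equations approach characterizes the evolution of such a trajectory analytically in the high-dimensional limit, rather than by simulation as above.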

Contact

If you want to subscribe to (or unsubscribe from) the mailing list, please send an email to Luca Saglietti.