Data Science Colloquium of the ENS

Welcome to the Data Science Colloquium of the ENS.

This colloquium is organized around data science in a broad sense, with the goal of bringing together researchers with diverse backgrounds (including, for instance, mathematics, computer science, physics, chemistry, and neuroscience) but a common interest in dealing with large-scale or high-dimensional data.

The seminar takes place, unless otherwise noted, on the first Tuesday of each month at 12h00 at the Physics Department of ENS, 24 rue Lhomond, in room CONF IV (2nd floor).

The colloquium is followed by an open buffet, where participants can meet and discuss potential collaborations.

These seminars are made possible by the support of the CFM-ENS Chair “Modèles et Sciences des Données”.

You can check the list of the next seminars below and the list of past seminars.

Videos of some of the past seminars are available online.

Organizers

The colloquium is organized by:

Next seminars

Sept. 17th, 2019, 12h00-13h00, room CONF IV (Physics Dept, rue Lhomond).
Nathan Srebro (Toyota Technological Institute at Chicago)
Title: Optimization's Hidden Gift to Learning: Implicit Regularization
Abstract: It is becoming increasingly clear that the implicit regularization afforded by optimization algorithms plays a central role in machine learning, especially when using large, deep neural networks. We have a good understanding of the implicit regularization afforded by stochastic approximation algorithms, such as SGD, for convex problems; we understand and can characterize the implicit bias of different algorithms, and can design algorithms with specific biases. In this talk, however, I will focus on the implicit biases of local search algorithms for non-convex underdetermined problems, such as deep networks. In an effort to uncover the implicit biases of gradient-based optimization of neural networks, which hold the key to their empirical success, I will discuss recent work on implicit regularization for matrix factorization, linear convolutional networks, and two-layer ReLU networks, as well as a general bottom-up understanding of implicit regularization in terms of optimization geometry.
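
As a simple illustration of the phenomenon the talk is about (not taken from the talk itself), the following sketch shows the textbook example of implicit regularization: plain gradient descent on an underdetermined least-squares problem, initialized at zero, converges to the minimum-norm interpolating solution, even though nothing in the objective asks for a small norm. The problem sizes and step size here are arbitrary choices made for the demonstration.

    import numpy as np

    # Underdetermined least squares: more unknowns (d) than equations (n),
    # so infinitely many solutions fit the data exactly.
    rng = np.random.default_rng(0)
    n, d = 20, 100
    A = rng.standard_normal((n, d))
    b = rng.standard_normal(n)

    # Plain gradient descent on f(w) = 0.5 * ||A w - b||^2, started at zero.
    # Step size 1/L with L = ||A||_2^2 guarantees convergence.
    w = np.zeros(d)
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(5000):
        w -= lr * A.T @ (A @ w - b)

    # The minimum L2-norm interpolating solution, via the pseudoinverse.
    w_min_norm = np.linalg.pinv(A) @ b

    print("residual ||Aw - b||:", np.linalg.norm(A @ w - b))      # ~ 0
    print("||w - w_min_norm||:", np.linalg.norm(w - w_min_norm))  # ~ 0

The reason is that gradient descent started at zero never leaves the row space of A, and the unique interpolating solution in that subspace is exactly the minimum-norm one: an implicit bias of the algorithm, not of the objective.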