Welcome to the Data Science Colloquium of the ENS.
This colloquium is organized around the data sciences in a broad sense, with the goal of bringing together researchers from diverse backgrounds (including, for instance, mathematics, computer science, physics, chemistry, and neuroscience) who share a common interest in dealing with large-scale or high-dimensional data.
The seminar takes place, unless otherwise noted, on the first Tuesday of each month at 12h00 at the Physics Department of ENS, 24 rue Lhomond, in room CONF IV (2nd floor).
The colloquium is followed by an open buffet, during which participants can meet and discuss potential collaborations.
These seminars are made possible by the support of the CFM-ENS Chair “Modèles et Sciences des Données”.
You can find the list of upcoming seminars below, as well as the list of past seminars.
Videos of some of the past seminars are available online.
The colloquium is organized by:
June 27th, 2017, 11h, room Amphi Burg (basement level), Institut Curie, 12 rue Lhomond.
Guillermo Sapiro (Duke University)
Title: Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems
Abstract: Security, privacy, and fairness have become critical in the era of data science and machine learning. Increasingly, we see that achieving universally secure, private, and fair systems is practically impossible. We have seen, for example, how generative adversarial networks can be used to learn about private training data; how the exploitation of additional data can reveal private information in the original data; and how seemingly unrelated features can teach us about each other. Confronted with this challenge, in this work we open a new line of research in which security, privacy, and fairness are addressed within a closed environment. The goal is to ensure that a given entity, trusted to infer certain information from our data, is blocked from inferring protected information from it. For example, a hospital might be allowed to produce a diagnosis for a patient (the positive task) without being able to infer the patient's irrelevant gender (the negative task). Similarly, a company can guarantee that, internally, it is not using the provided data for any undesired task, an important goal that does not contradict the virtually impossible challenge of blocking everybody else from the undesired task. We design a system that learns to perform the positive task while simultaneously being trained to fail at the negative one, and we illustrate this with challenging cases where the positive task is actually harder than the negative one. This framework and these examples open the door to security, privacy, and fairness in important closed scenarios. Joint work with Jure Sokolic and Miguel Rodrigues.
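The "learn to succeed while training to fail" idea in the abstract can be illustrated with a gradient-reversal scheme on a shared representation, in the spirit of domain-adversarial training. The sketch below is purely illustrative and is not the speaker's actual method: the synthetic data, the linear encoder, the trade-off weight `lam`, and all variable names are assumptions for the example. A shared encoder feeds two linear heads; the positive head and the encoder descend on the positive-task loss, the adversary head descends on the protected-task loss, and the encoder ascends on the protected-task loss so that the protected attribute becomes hard to recover.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the positive label depends on feature 0,
# the protected ("negative task") label depends on feature 1.
n, d, k = 2000, 5, 4
X = rng.normal(size=(n, d))
y_pos = (X[:, 0] > 0).astype(float)
y_neg = (X[:, 1] > 0).astype(float)

We = rng.normal(scale=0.1, size=(d, k))   # shared encoder weights
wp = rng.normal(scale=0.1, size=(k, 1))   # positive-task head
wn = rng.normal(scale=0.1, size=(k, 1))   # adversary (protected-task) head
lam, lr = 1.0, 1.0                        # trade-off weight, learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(1000):
    Z = X @ We                              # shared representation
    p = sigmoid(Z @ wp)[:, 0]               # positive-task prediction
    q = sigmoid(Z @ wn)[:, 0]               # protected-task prediction
    gp = ((p - y_pos) / n)[:, None]         # d(BCE_pos)/d(logit_pos)
    gn = ((q - y_neg) / n)[:, None]         # d(BCE_neg)/d(logit_neg)
    # Adversary head descends on its own loss (tries to predict y_neg).
    wn -= lr * (Z.T @ gn)
    # Positive head descends on the positive-task loss.
    wp -= lr * (Z.T @ gp)
    # Encoder descends on the positive loss but ASCENDS on the
    # protected loss (gradient reversal), hiding the protected signal.
    We -= lr * (X.T @ (gp @ wp.T - lam * (gn @ wn.T)))

Z = X @ We
acc_pos = np.mean((sigmoid(Z @ wp)[:, 0] > 0.5) == (y_pos > 0.5))
acc_neg = np.mean((sigmoid(Z @ wn)[:, 0] > 0.5) == (y_neg > 0.5))
print(f"positive-task accuracy: {acc_pos:.2f}")
print(f"protected-task accuracy: {acc_neg:.2f}")
```

Under this setup the representation retains the information needed for the positive task while the adversary's accuracy on the protected attribute is pushed toward chance, which is the closed-environment guarantee the abstract describes.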