Unsupervised domain adaptation by inferring untestable conditional independences through causal inference

Foundations of Data Science Seminar

November 24, 2020

3:30 PM - 4:30 PM

Address

Chicago, IL 60612

Sara Magliacane
Assistant Professor
University of Amsterdam

Abstract:

An important goal common to domain adaptation and causal inference is to make accurate predictions when the distributions of the source (or training) domain(s) and the target (or test) domain(s) differ. In many cases, these different distributions can be modeled as different contexts of a single underlying system, in which each distribution corresponds to a different perturbation of the system, or, in causal terms, an intervention. We focus on a class of such causal domain adaptation problems, where features and labels for one or more source domains are given and the task is to predict the labels in a target domain with a possibly very different distribution. In particular, we consider the case in which there are no labels in the target domain (unsupervised domain adaptation) and the underlying causal graph, the intervention types, and the intervention targets are all unknown.

In this setting, a stable predictor would use a subset of features for which the conditional distribution of the label is invariant across the source and target domains, a property that can be expressed as a conditional independence. However, since there are no labels in the target domain, this conditional independence cannot be tested from the data. We propose an approach based on a theorem prover that infers certain untestable conditional independences from other, testable ones using ideas from causal inference, but without recovering the causal graph. Under mild assumptions, this allows us to find subsets of features that are provably stable under arbitrarily large distribution shifts. We demonstrate the approach by evaluating a possible implementation on simulated and real-world data.
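
To make the invariance idea above concrete, the sketch below is a minimal, hypothetical Python illustration, not the theorem-prover method presented in the talk: it pools the labeled source domains, introduces a domain-indicator variable C, and keeps feature subsets S for which the label Y appears conditionally independent of C given S, approximated here by a nested-model F-test under an assumed linear-Gaussian setting. All function names and the synthetic data are invented for illustration.

# Illustrative sketch only (not the theorem prover from the talk): under an
# assumed linear-Gaussian model, a feature subset S is treated as "invariant"
# across the labeled source domains if the label Y looks conditionally
# independent of the domain indicator C given S, approximated by a nested
# F-test (does adding domain dummies to a regression of Y on S help?).
from itertools import combinations

import numpy as np
from scipy import stats


def invariance_pvalue(X_s, y, domain):
    """Approximate p-value for Y _||_ C | S via a nested-model F-test."""
    n = len(y)
    Z0 = np.column_stack([np.ones(n), X_s])                  # Y ~ S
    dummies = (domain[:, None] == np.unique(domain)[1:]).astype(float)
    Z1 = np.column_stack([Z0, dummies])                      # Y ~ S + C
    rss0 = np.sum((y - Z0 @ np.linalg.lstsq(Z0, y, rcond=None)[0]) ** 2)
    rss1 = np.sum((y - Z1 @ np.linalg.lstsq(Z1, y, rcond=None)[0]) ** 2)
    df1, df2 = dummies.shape[1], n - Z1.shape[1]
    f_stat = ((rss0 - rss1) / df1) / (rss1 / df2)
    return 1.0 - stats.f.cdf(f_stat, df1, df2)


def invariant_subsets(X, y, domain, alpha=0.05, max_size=2):
    """Return feature subsets whose conditional of Y looks stable across sources."""
    keep = []
    for k in range(1, max_size + 1):
        for S in combinations(range(X.shape[1]), k):
            if invariance_pvalue(X[:, list(S)], y, domain) > alpha:
                keep.append(S)
    return keep


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 2000
    domain = rng.integers(0, 2, n)                   # two labeled source domains
    x1 = rng.normal(size=n) + domain                 # cause of Y; only its marginal shifts
    y = 2.0 * x1 + rng.normal(size=n)                # mechanism of Y given x1 is invariant
    x2 = y + 3.0 * domain + rng.normal(size=n)       # effect of Y; its mechanism shifts
    X = np.column_stack([x1, x2])
    print(invariant_subsets(X, y, domain, max_size=1))  # typically [(0,)]

In this toy example the first feature is a cause of Y whose mechanism for Y does not change across domains, while the second is an effect of Y generated differently in each domain, so the selection typically keeps only the first feature.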

Paper: https://arxiv.org/abs/1707.06422

Bio: Sara Magliacane is an assistant professor at the University of Amsterdam and a researcher at the MIT-IBM Watson AI Lab. She received her PhD from the VU Amsterdam for work on logics for causal inference under uncertainty, and then joined IBM Research in Yorktown Heights as a postdoc. Her current research focuses on several aspects of causal inference and symbolic approaches, from structure learning across multiple datasets to active learning of causal graphs and applications of causal inference ideas to transfer learning.

Zoom: https://uic.zoom.us/j/89851329511?pwd=dEs2elUxbkc5SWwyZ0ZQV1JUQUdxUT09
(Meeting ID: 898 5132 9511, Passcode: 1v$%C!Er)

Contact

Elena Zheleva

Date posted

Nov 20, 2020

Date updated

Nov 20, 2020