[NeurIPS 2019 Highlight] Rahul Singh @ MIT: Kernel Instrumental Variable Regression

Updated: Dec 22, 2019



Rahul Singh is a third-year Ph.D. candidate in Economics & Statistics at MIT. His research interests are causal inference and statistical learning theory. His paper "Kernel Instrumental Variable Regression", co-authored with Maneesh Sahani and Arthur Gretton from University College London, was accepted as an oral presentation at NeurIPS 2019.


This paper bridges econometrics and machine learning by proposing kernel instrumental variable regression (KIV) for nonlinear relations between variables. The algorithm, which can be implemented in only three lines of code, can be used to learn causal relationships from confounded data for a wide range of applications, such as market demand estimation and imperfect compliance in A/B testing.


This episode is a live recording of Rahul Singh discussing highlights from his paper with Robin.ly at the NeurIPS 2019 conference.


Interview with Robin.ly:


Robin.ly is a content platform dedicated to helping engineers and researchers develop leadership, entrepreneurship, and AI insights to scale their impact in the new tech era.




Paper At A Glance


Instrumental variable (IV) regression is a strategy for learning causal relationships in observational data. If measurements of input X and output Y are confounded, the causal relationship can nonetheless be identified if an instrumental variable Z is available that influences X directly, but is conditionally independent of Y given X and the unmeasured confounder. The classic two-stage least squares algorithm (2SLS) simplifies the estimation problem by modeling all relationships as linear functions. We propose kernel instrumental variable regression (KIV), a nonparametric generalization of 2SLS, modeling relations among X, Y, and Z as nonlinear functions in reproducing kernel Hilbert spaces (RKHSs). We prove the consistency of KIV under mild assumptions and derive conditions under which convergence occurs at the minimax optimal rate for unconfounded, single-stage RKHS regression. In doing so, we obtain an efficient ratio between training sample sizes used in the algorithm's first and second stages. In experiments, KIV outperforms state of the art alternatives for nonparametric IV regression. [presentation slides]
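For readers newer to IV regression, the model behind both 2SLS and KIV can be summarized as follows (the notation here is illustrative, not quoted verbatim from the paper):

    Y = f(X) + e,   with E[e | Z] = 0 but E[e | X] != 0,

so an ordinary regression of Y on X is biased by the confounder, yet taking expectations given the instrument yields E[Y | Z] = E[f(X) | Z]. 2SLS solves this equation with a linear f and a linear first-stage regression of X on Z; KIV replaces both stages with kernel ridge regressions in RKHSs.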


Full Transcripts

Host: Wenli Zhou

We're at NeurIPS 2019 here with Rahul Singh, a Ph.D. candidate in Economics and Statistics at MIT. And we are here to discuss his new paper, Kernel Instrumental Variable Regression. Thank you so much for coming here and taking the time to share your ideas with us. So, can you tell us what the paper is about and what you think its biggest contribution is?


Rahul Singh

Sure. So it's joint work with Maneesh Sahani and Arthur Gretton at UCL Gatsby. And our goal is to learn a nonlinear causal relationship from confounded data. In the presence of unobserved confounders, prediction and counterfactual prediction are different learning problems, and we still want to learn that counterfactual relationship.


Host: Wenli Zhou

Why did you choose this topic? What are the industrial applications for this?


Rahul Singh

Yeah, I think there are two main industrial applications.


The first is demand estimation. And that was actually the original motivation for the instrumental variable idea in the first place. When we observe prices and quantities, these reflect lots of market forces, not just demand. And so if you just do the regression of quantities on prices, that'll be very biased away from the actual demand curve. Instrumental variables are a way to still estimate demand, even though the data we observe are always confounded, always the result of market equilibrium.


Another important application is when there's imperfect compliance in A/B testing. Say we're interested in measuring the effect of some drug on some disease: we're able to randomize the drug, but there's imperfect compliance, because patients don't always do what they're told. And in particular, there could be an unobserved confounder that affects that story. It could be the case that a patient is assigned the drug but doesn't take it because it's hard to keep up with the treatment regimen, or a patient is not assigned the drug but still takes it because they leverage the social capital that comes from being wealthy. And so if we just do the regression of health outcome on treatment, it will be very biased; it would mix up the effect of the drug and the effect of income. If you think about it as an instrumental variable problem, though, you can still disentangle the effect of the drug.


And actually, this imperfect compliance story is important for companies and digital platforms. Because anytime there's a randomized action that can be ignored -- anytime it's a recommendation or an advertisement -- anytime there's imperfect compliance, you're automatically in an instrumental variable framework. And so this approach fits naturally there.


Host: Wenli Zhou

So what is the industry doing right now? Besides this approach, what are their solutions?


Rahul Singh

So when companies don't know how to handle imperfect compliance in A/B testing, they'll do something called intention-to-treat: they look at the effect on the health outcome based on what you were randomized to do. But actually, we can measure something a bit more nuanced, which is the counterfactual relationship of interest, if you recognize it's an instrumental variable setting. When it comes to demand estimation, I think that's a setting in which a lot of the time companies don't know that they should be using an instrument, some supply cost shifter like gasoline prices, when they're estimating the demand for airline tickets, for example. So I think there is scope for people to use a method like this one.


Host: Wenli Zhou

Yeah. So the method you did research on, what are the real-life applications? Is this going to be an app? How do you market it?


Rahul Singh

Yeah, the nice thing about the algorithm is that it's really simple. It's just two kernel ridge regressions, and for this reason it has a closed-form solution and can be implemented in just three lines of code. We posted that code on my website and on GitHub, where you can download it. It's open source, and also, because it's just three lines of code, anyone could write it themselves just based on the paper. I mean, we write out those three lines, and you can type them up if you like.
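For readers who want a concrete picture of "two kernel ridge regressions in three lines", here is a minimal sketch in Python of a KIV-style two-stage estimator. The kernel choice, hyperparameter values, and function names are illustrative assumptions, not the authors' released code, which remains the definitive reference.

    import numpy as np

    def rbf_kernel(A, B, sigma=1.0):
        # Gaussian (RBF) kernel matrix between the rows of A and the rows of B
        sq_dists = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
        return np.exp(-sq_dists / (2.0 * sigma**2))

    def kiv_fit_predict(X1, Z1, Z2, Y2, x_test, lam=0.1, xi=0.1, sigma=1.0):
        # Illustrative two-stage kernel ridge regression (not the authors' implementation).
        # Stage 1 uses the first sample (X1, Z1); stage 2 uses the second sample (Z2, Y2).
        n, m = X1.shape[0], Z2.shape[0]
        K_XX = rbf_kernel(X1, X1, sigma)
        K_ZZ = rbf_kernel(Z1, Z1, sigma)
        K_Zz = rbf_kernel(Z1, Z2, sigma)
        # "Line 1": stage-1 kernel ridge regression, mapping stage-2 instruments
        # to weights over the stage-1 treatments
        W = K_XX @ np.linalg.solve(K_ZZ + n * lam * np.eye(n), K_Zz)
        # "Line 2": stage-2 closed-form ridge solution for the structural function
        alpha = np.linalg.solve(W @ W.T + m * xi * K_XX, W @ Y2)
        # "Line 3": evaluate the estimated counterfactual response at new treatment values
        return rbf_kernel(x_test, X1, sigma) @ alpha

Here (X1, Z1) and (Z2, Y2) are two disjoint samples (the stage-1/stage-2 split whose size ratio the paper's theory addresses), lam and xi are the two ridge penalties, and x_test is where you want the counterfactual prediction.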


Host: Wenli Zhou

Okay, that's very nice of you and the team. Well, thank you so much for coming here and sharing your ideas with us.
