[NeurIPS 2019 Highlight] Martin Schrimpf @ MIT: Brain-Like Object Recognition with Recurrent ANNs

Updated: Jan 16



This episode is an interview with Martin Schrimpf, a Ph.D. student in the Department of Brain and Cognitive Sciences (BCS) at MIT. He shared highlights from his paper, Brain-Like Object Recognition with High-Performing Shallow Recurrent ANNs, which was accepted for oral presentation at the NeurIPS 2019 conference.


This paper proposes a quantitative collaboration between neuroscience and machine learning: Brain-Score, a benchmark that allows you to compare models with the brain, and CORnet-S, a brain-like model that transforms deep networks into much shallower networks with recurrence. This research could help build architectures and networks that are more like the brain and improve the energy efficiency of computing.


Martin's main interest is in bridging Machine Learning and Neuroscience with a focus on building deep neural network models of the brain’s ventral stream that are more human-like in their behavior as well as their internals. His previous work includes research in computer vision at Harvard, and natural language processing and reinforcement learning at Salesforce.


Interview with Robin.ly:


Robin.ly is a content platform dedicated to helping engineers and researchers develop leadership, entrepreneurship, and AI insights to scale their impacts in the new tech era.


Subscribe to our newsletter to stay updated on more conference interviews and inspiring talks:


Paper At A Glance


Deep convolutional artificial neural networks (ANNs) are the leading class of candidate models of the mechanisms of visual processing in the primate ventral stream. While initially inspired by brain anatomy, over the past years, these ANNs have evolved from a simple eight-layer architecture in AlexNet to extremely deep and branching architectures, demonstrating increasingly better object categorization performance, yet bringing into question how brain-like they still are. In particular, typical deep models from the machine learning community are often hard to map onto the brain's anatomy due to their vast number of layers and missing biologically-important connections, such as recurrence. Here we demonstrate that better anatomical alignment to the brain and high performance on machine learning as well as neuroscience measures do not have to be in contradiction. We developed CORnet-S, a shallow ANN with four anatomically mapped areas and recurrent connectivity, guided by Brain-Score, a new large-scale composite of neural and behavioral benchmarks for quantifying the functional fidelity of models of the primate ventral visual stream. Despite being significantly shallower than most models, CORnet-S is the top model on Brain-Score and outperforms similarly compact models on ImageNet. Moreover, our extensive analyses of CORnet-S circuitry variants reveal that recurrence is the main predictive factor of both Brain-Score and ImageNet top-1 performance. Finally, we report that the temporal evolution of the CORnet-S "IT" neural population resembles the actual monkey IT population dynamics. Taken together, these results establish CORnet-S, a compact, recurrent ANN, as the current best model of the primate ventral visual stream. [presentation slides]
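Brain-Score's neural benchmarks fit a linear mapping from model activations to recorded neural responses and score how well held-out responses are predicted. As a much-simplified, hypothetical stand-in for that pipeline, the sketch below scores a single model unit against a single recorded neuron with a plain Pearson correlation; the real benchmark uses cross-validated regression over many recording sites, and the response values here are invented examples.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length response vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-image responses: one model unit vs. one IT neuron.
model_unit = [0.1, 0.9, 0.4, 0.7, 0.2]
neuron = [0.2, 1.0, 0.5, 0.6, 0.1]

# A score near 1.0 would mean the model unit tracks the neuron closely.
score = pearson(model_unit, neuron)
```

A full benchmark would aggregate such predictivity scores across neurons, images, and cross-validation splits, alongside behavioral comparisons.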





Full Transcripts


Robin.ly Host - Wenli Zhou

We are at NeurIPS 2019 here with Martin Schrimpf, a PhD candidate from MIT whose newly submitted paper got accepted this year, and we're here to discuss it. Thank you so much, Martin, for being here.


Martin Schrimpf

Thanks for having me.


Robin.ly Host - Wenli Zhou

Congratulations on your paper getting accepted.


Martin Schrimpf

Thank you.


Robin.ly Host - Wenli Zhou

Would you like to share with our community the paper on recurrent networks that got accepted?


Martin Schrimpf

Sure. The overarching idea is that there's an increased opportunity for machine learning and neuroscience to communicate, and we present two advances in that direction. First, we present Brain-Score, which is a platform that allows you to compare models with the brain, and specifically models of vision with the ventral stream. Given a model, the platform will tell you how closely matched it is to the brain under various measurements: do the model's internals, its units, match up with the brain's internals, the neurons? And do the model's externals, its behavior, match up with the brain's behavior? Second, we present CORnet-S, a model that we recently developed and which we think is more brain-like. Specifically, this model transforms deep networks into much shallower networks, and it still maintains high performance by virtue of having recurrence.


Robin.ly Host - Wenli Zhou

So what is the biggest contribution this paper brings?


Martin Schrimpf

One is just the idea of a quantitative collaboration between neuroscience and machine learning. In the past there have been very low-bit updates from neuroscience, in the sense of "use neurons" or "use convolutions," but we think that by actually giving you a score, you might be able to sort of climb the gradient and specifically build architectures and networks that are more like the brain. And second, the idea of having this recurrent model instead of the usual deep networks is something we think could help with energy efficiency, and maybe save some parameters and FLOPs and so forth.


Robin.ly Host - Wenli Zhou

Yeah, speaking of energy efficiency, we would like to know the closest real-life application. Do you have any business models that can use this?


Martin Schrimpf

Right. So from a science perspective, you can use the model as an in-silico version of the brain's ventral stream. Instead of running costly experiments on monkeys or humans, you can pre-run experiments on this model and keep only the ones you actually need to run. The model might also be adapted for machine learning applications in the sense that it is shallower, with fewer layers. So perhaps, and we haven't explicitly tested this, in a hardware implementation you could save energy, because you wouldn't need to load each layer's parameters every time; instead you can load them once and then reuse them through the recurrence.
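The parameter-reuse point can be made concrete with a back-of-the-envelope count (the channel count and unroll depth below are hypothetical examples, not the actual CORnet-S configuration): a stack of T distinct 3x3 convolutions stores T copies of the weights, while one recurrent convolution unrolled for T timesteps stores them once.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases for one k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def feedforward_stack(channels, depth):
    """depth distinct layers: parameter count grows linearly with depth."""
    return depth * conv_params(channels, channels)

def recurrent_block(channels, depth):
    """One layer unrolled for depth timesteps: parameters stored once."""
    return conv_params(channels, channels)

C, T = 128, 4  # hypothetical channel count and unroll depth
ff = feedforward_stack(C, T)   # four separate layers' worth of weights
rec = recurrent_block(C, T)    # one layer's weights, reused each step
```

Compute cost (FLOPs) is similar in both cases since the recurrent layer still runs T times, but the stored and repeatedly loaded weights shrink by a factor of T, which is where the hardware energy argument comes from.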


Robin.ly Host - Wenli Zhou

Yes, yes. Are you talking to any investors or industry partners yet? What's the limitation right now?


Martin Schrimpf

Right, so one major limitation of the model is that we don't perfectly match the brain, right? We don't have an in-silico brain at the moment, and we're arguably far from that. So the model, while being more brain-like, is, we think, still far from being there. There's a lot of modeling work to be done in that regime.


Robin.ly Host - Wenli Zhou

So what's your next step?


Martin Schrimpf

Well, in our talk, we're going to explicitly invite machine learning modelers to submit their models to us. Perhaps we can thereby find a model that is more like the brain just by evaluating a range of models. And with more signal from these brain-like measurements, we can develop such brain models together and find models that are overall more closely aligned with the thing in our heads.


Robin.ly Host - Wenli Zhou

Okay. Well, thank you so much for coming here to share with us your ideas, and we're looking forward to hearing more about your research and this topic. Thank you.


Martin Schrimpf

Thank you.
