Episode 84: A Conversation with David Cox

In this episode, Byron and David Cox of IBM discuss classifications of AI, and how the research has been evolving and growing.


Guest

David Cox is the IBM Director of the MIT-IBM Watson AI Lab, a first-of-its-kind industry-academic collaboration between IBM and MIT focused on fundamental research in artificial intelligence. The Lab was founded with a $240 million, 10-year commitment from IBM and brings together researchers at IBM with faculty at MIT to tackle hard problems at the vanguard of AI.

Prior to joining IBM, David was the John L. Loeb Associate Professor of the Natural Sciences and of Engineering and Applied Sciences at Harvard University, where he held appointments in Computer Science, the Department of Molecular and Cellular Biology and the Center for Brain Science. David's ongoing research is primarily focused on bringing insights from neuroscience into machine learning and computer vision research. His work has spanned a variety of disciplines, from imaging and electrophysiology experiments in living brains, to the development of machine learning and computer vision methods, to applied machine learning and high performance computing methods.

David is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard Law School and is an Agenda Contributor at the World Economic Forum. He has received a variety of honors, including the Richard and Susan Smith Foundation Award for Excellence in Biomedical Research, the Google Faculty Research Award in Computer Science, and the Roslyn Abramson Award for Excellence in Undergraduate Teaching. He led the development of "The Fundamentals of Neuroscience" (http://fundamentalsofneuroscience.org), one of Harvard's first massive open online courses, which has drawn over 750,000 students from around the world. His academic lab has spawned several startups across a range of industries, from AI for healthcare to autonomous vehicles.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm and I’m Byron Reese. I'm so excited about today's show. Today we have David Cox. He is the Director of the MIT IBM Watson AI Lab, which is part of IBM Research. Before that he spent 11 years teaching at Harvard, interestingly in the Life Sciences. He holds an AB degree from Harvard in Biology and Psychology, and he holds a PhD in Neuroscience from MIT. Welcome to the show David!

David Cox: Thanks. It’s a great pleasure to be here.

I always like to start with my Rorschach question which is, “What is intelligence and why is Artificial Intelligence artificial?” And you're a neuroscientist and a psychologist and a biologist, so how do you think of intelligence?

That's a great question. I think we don't necessarily need to have just one definition. I think people get hung up on the words, but at the end of the day, what makes us intelligent, what makes other organisms on this planet intelligent is the ability to absorb information about the environment, to build models of what's going to happen next, to predict and then to make actions that help achieve whatever goal you're trying to achieve. And when you look at it that way that's a pretty broad definition.

Some people are purists and they want to say this is AI, but this other thing is just statistics or regression or if-then-else loops. At the end of the day, what we're about is trying to make machines that can make decisions the way we do. Sometimes our decisions are very complicated, sometimes less so, but it really is about how we model the world and how we take actions that really drive us forward.

It's funny, the ‘AI’ word too. I'm a recovering academic, as you said. I was at Harvard for many years and I think as a field, we were really uncomfortable with the term ‘AI,’ so we desperately wanted to call it anything else. In 2017 and before we wanted to call it ‘machine learning,’ or we wanted to call it ‘deep learning’ [to] be more specific. But in 2018, for whatever reason, we all just gave up and we just embraced this term ‘AI.’ In some ways I think it's healthy. But when I joined IBM I was actually really pleasantly surprised by some framing that the company had done.

IBM does this thing called the Global Technology Outlook or GTO which happens every year and the company tries to collectively figure out—research plays a very big part of this—we try to figure out ‘What does the future look like?’ And they came up with this framing that I really like for AI. They did something extremely simple. They just put some adjectives in front of AI and I think it clarifies the debate a lot.

So basically, what we have today, like deep learning and machine learning, are tremendously powerful technologies that are going to disrupt a lot of things. We call those Narrow AI, and I think that narrow framing really calls attention to the ways in which, even if it's powerful, it's fundamentally limited. And then on the other end of the spectrum we have General AI. This is a term that's been around for a long time, this idea of systems that can decide what they want to do for themselves, that are broadly autonomous, and that's fine. Those are really interesting discussions to have, but we're not there as a field yet.

In the middle, and I think this is really where the interesting stroke is, there's this notion of Broad AI, and I think that's really where the stakes are today. How do we have systems that are able to go beyond what we have that's narrow, without necessarily getting hung up on all these notions of what ‘General Intelligence’ might be? So things like having systems that are interpretable, having systems that can work with different kinds of data, that can integrate knowledge from other sources, that's sort of the domain of Broad AI. Broad Intelligence is really what the lab I lead is all about.

There's a lot in there and I agree with you. I’m not really that interested in that low end and what’s the lowest bar in AI. What makes the question interesting to me is really the mechanism by which we are intelligent, whatever that is, and does that intelligence require a mechanistic reductionist view of the world? In other words, is that something that you believe we're going to be able to duplicate either… in terms of its function, or are we going to be able to build machines that are as versatile as a human in intelligence, and creative and would have emotions and all of the rest, or is that an open question?

I have no doubt that we're going to eventually, as a human race, be able to figure out how to build intelligent systems that are just as intelligent as we are. I think in some of these things, we tend to think about how we're different from other kinds of intelligences on Earth. We do things like… there was a period of time where we wanted to distinguish ourselves from the animals, and we thought that reason, the ability to reason and do things like mathematics and abstract logic, was what was uniquely human about us.

And then, computers came along and all of a sudden, computers can actually do some of those things better than we can, even in arithmetic and solving complex logic problems or math problems. Then we moved towards thinking that maybe it's emotion, maybe emotion is what makes us uniquely human rather than reason. It was a kind of narcissism, I think, in our own view, which is understandable and justifiable. How are we special in this world?

But I think in many ways we're going to end up having systems that do have something like emotion. Even if you look at reinforcement learning, those systems have a notion of reward. I don't think it's such a far reach to think that maybe we'll even, in a sci-fi world, have machines that have senses of pleasure and hopes and ambitions and things like that.

At the end of the day, our brains are computers. I think that's sometimes a controversial statement, but it's one that I think is well-grounded. It's a very sophisticated computer. It happens to be made out of biological materials. But at the end of the day, it's a tremendously efficient, tremendously powerful, tremendously parallel nanoscale biological computer. These are like biological nanotechnology. And to the extent that it is a computer, and to the extent that we can agree on that, Computer Science gives us equivalencies. We can build a computer with different hardware. We don't have to emulate the hardware. We don't have to slavishly copy the brain, but it is sort of a given that we will eventually be able to do everything the brain does in a computer. Now of course all that's farther off, I think. Those are not the stakes—those aren't the battlefronts that we're working on today. But I think the sky's the limit in terms of where AI can go.

You mentioned Narrow and General AI, and this classification you're putting in between them is Broad. I have an opinion and I'm curious what you think. At least with regards to Narrow and General, they are not on a continuum. They're actually unrelated technologies. Would you agree with that or not?

Would you say that a Narrow AI gets a little better, then a little better, then a little better, and then, ta-da! One day it can compose a Hamilton? Or do you think that they may be completely unrelated? That this model of, ‘Hey, let's take a lot of data about the past and let's study it very carefully to learn to do one thing’ is very different than whatever General Intelligence is going to be.

There's this idea that if you want to go to the moon, one way to go to the moon—to get closer to the moon—is to climb the mountain.

Right. Exactly.

And you'll get closer, but you're not on the right path. And maybe you'd be better off building a little rocket, and maybe it will only go as high as the tree or as high as the mountain at first, but it'll get you where you need to go. I do think there is a strong flavor of that with today's AI.

And today's AI, if we're plain about things, is deep learning. This model… what's really been successful in deep learning is supervised learning. We train a model to do every part of seeing based on classifying objects, and you classify many images, you have lots of training data and you build a statistical model. And that's everything the model has ever seen. It has to learn from those images and from that task.

And we're starting to see that actually the solutions you get—again, they are tremendously useful, but they do have a little bit of that quality of climbing a tree or climbing a mountain. There's a bunch of recent work suggesting that these models are basically looking at texture, so a lot of the supervised solution amounts to looking at rough texture.

There are also some wonderful examples where you take a captioning system—a system that can take an image and produce a caption. It can produce wonderful captions in cases where the images look like the ones it was trained on, but you show it anything just a little bit weird, like an airplane that's about to crash or a family fleeing their home on a flooding beach, and it'll produce things like ‘an airplane is on the tarmac at an airport’ or ‘a family is standing on a beach.’ It's like it kind of missed the point. It was able to do something because it learned correlations between the inputs it was given and the outputs that we asked it for, but it didn't have a deep understanding. And I think that's the crux of what you're getting at, and I agree at least in part.
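[Editor's note: for readers who want to see the recipe David is describing in concrete form, here is a minimal sketch of the supervised-learning pipeline in PyTorch. The tiny architecture is invented for illustration, and the tensors are random stand-ins for a real labeled dataset, so the point is the shape of the pipeline rather than any result.]

```python
import torch
import torch.nn as nn

# A tiny convolutional classifier: the kind of statistical model David
# describes, whose entire "knowledge" of seeing comes from labeled images.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),                       # 10 hypothetical object classes
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):                      # the "lots of labeled examples" loop
    images = torch.randn(32, 3, 32, 32)      # placeholder for real photographs
    labels = torch.randint(0, 10, (32,))     # placeholder for human annotations
    loss = loss_fn(model(images), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Everything the model "sees" of the world is these (image, label) pairs,
# which is exactly the narrowness being discussed above.
```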

So with Broad, the way you’re thinking of it, it sounds to me just from the few words you said, it's an incremental improvement over Narrow. It's not a junior version of General AI. Would you agree with that? You're basically taking techniques we have and just doing them bigger and more expansively and smarter and better, or is that not the case?

No. When we think about Broad AI, we really are thinking about a little bit ‘press the reset button, don't throw away things that work.’ Deep learning is a set of tools which is tremendously powerful, and we'd be kind of foolish to throw them away. But when we think about Broad AI, what we're really getting at is how do we start to make contact with that deep structure in the world… like commonsense.

We have all kinds of common sense. When I look at a scene I look at the desk in front of me, I didn't learn to do tasks that have to do with the desk in front of me by lots and lots of labeled examples or even many, many trials in a reinforcement learning kind of setup. I know things about the world - simple things. And things we take for granted like I know that my desk is probably made of wood and I know that wood is a solid, and solids can't pass through other solids. And I know that it's probably flat, and if I put my hand out I would be able to orient it in a position that would be appropriate to hover above it…

There are all these affordances and all this super simple commonsense stuff that you don't get when you just do brute force statistical learning. When we think about Broad AI, what we're really thinking about is ‘How do we infuse that knowledge, that understanding and that commonsense?’ And one area that we're excited about and that we're working on here at the MIT-IBM Lab is this idea of neuro-symbolic hybrids.

So again, this is in the spirit of ‘don't throw away neural networks.’ They're wonderful at extracting certain kinds of statistical structure from the world: a convolutional neural network does a wonderful job of extracting information from an image, and LSTMs and recurrent neural networks do a wonderful job of extracting structure from natural language. But we're building in symbolic systems as first-class citizens in a hybrid system that combines those all together.

Some of the work we're doing now is building systems where we use neural networks to extract structure from these noisy, messy inputs of vision and different modalities, but then actually having symbolic AI systems on top. Symbolic AI systems have been around basically contemporaneously with neural networks. They've been ‘in the wings’ all this time. Deep learning, as everyone knows, is in many ways a rebrand of the neural networks from the 1980s that are suddenly powerful again. They're powerful for the first time because we have enough data and we have enough compute.

I think in many ways a lot of the symbolic ideas, things like logical operations and planning, are also very powerful techniques, but they haven't really been able to shine yet, partly because they've been waiting for something, just the way that neural networks were waiting for compute and data to come along. I think in many ways some of these symbolic techniques have been waiting for neural networks to come along, because neural networks can kind of bridge that [gap] from the messiness of the signals coming in to this sort of symbolic regime where we can start to actually work. One of the things we're really excited about is building these systems that can bridge across that gap.
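[Editor's note: a toy, pure-Python illustration of the neuro-symbolic split David describes: a (stubbed) neural stage turns messy pixels into a symbolic scene description, and a symbolic stage reasons over it. The function names and scene format are invented for this sketch; the lab's actual neuro-symbolic systems are far richer.]

```python
def neural_perception(image):
    """Stand-in for a neural network: in a real system a CNN would map pixels
    to objects with attributes. Here we simply return a hand-written scene."""
    return [
        {"id": 0, "shape": "cube",   "color": "red",  "material": "rubber"},
        {"id": 1, "shape": "sphere", "color": "blue", "material": "metal"},
    ]

def run_program(scene, program):
    """Tiny symbolic executor: each step either filters objects or counts them."""
    objects = scene
    for op, arg in program:
        if op == "filter":
            key, value = arg
            objects = [o for o in objects if o[key] == value]
        elif op == "count":
            return len(objects)
    return objects

scene = neural_perception(image=None)    # no real pixels in this sketch
question = [("filter", ("material", "metal")), ("count", None)]
print(run_program(scene, question))      # "How many metal objects?" -> 1
```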

I'm still playing catch-up to all the things you said earlier, and I want to set something up for you and get your reaction to it. It's all going to be based on this statement that ‘brains are computers.’ But let me set up a different way to phrase the question. You’re right that people have all along said there's something different about humans. And Stephen Wolfram was on the show and he said “We always set a bar and then the computers go past it.” First the animal kingdom goes past it. We’re the only ones that use tools and Jane Goodall said chimps use tools and then you know we're the only ones that… all these things.

And then computers would never be able to play chess, and then eventually they do. And people who tell that story say ‘ergo there's nothing special about us.’ And yet I think that kind of misses what the question is trying to ask. I think people have an intuitive sense that there's something different about us and they're trying to grasp what that is. They're trying to wrap their head around it.

You say brains are computers, but that seems to me to be a conclusion drawn only from an absence, as it were, of alternatives. I would say it this way: “We have brains that we don't understand.” There's a nematode worm, a dude who has got 302 neurons in his brain, and people have spent 20 years trying to model that to make a virtual nematode worm in the OpenWorm project, and they don't even know if that's possible. So we don't even know… a neuron may be so complex that it has operations that occur down at the Planck scale for all we know, and you would know that better than anybody. I'm sure you know all about the nematode worm and all that.

So we have these brains. We don't know how thoughts are encoded, how they're retrieved, any of that. But then wait, there's more. We have minds, and minds are all of these things that the brain does that seem to be beyond its reach. Your liver doesn't have a sense of humor, your stomach isn't emotional, and yet somehow your brain does these things. You have a mind. And then finally, the big one is we're conscious. We experience the world. We don't just measure temperature. We feel warmth. And that's something.

It's the last great scientific question. We don't even know how to ask it scientifically or what the answer would look like scientifically. I take all of that to say we have brains we don't understand that give rise to minds we don't understand that somehow can experience the universe as opposed to just sensing it. Ergo, we're computers. And I find that ‘ergo’ to be such a logical disconnect from everything below it. And it seems to me people who say brains are computers say, “Well, what else could they be?” Given all of that mystery about how we're able to do what we're able to do, whatever that is, how can you say with such confidence, as a statement, “Brains are computers and that's all they are”? How do you justify that?

Good. I agree, actually. I agree with everything you said, and when we say that brains are computers, I think the disconnect there is that that's actually a really weak statement; it doesn't constrain anything; it doesn't tell us what to do at all. At some level, many times when we say that, what we're really saying is that this is a statement of materialism. We don't believe there's anything outside of that. There doesn't need to be any magic dust beyond…

To interrupt there, everyone I have this conversation with always uses the word ‘magic’ as if that's the only other thing. You can either be a scientific rationalist with a reductionist view of the universe or you believe in magic. It's like those are your two choices. And I don't know that that's true.

I don't know that you can't say intelligence as we have it is a physical property and yeah it's governed by the laws of physics. But we may not be able to reproduce its effects in any form in a fab… I’m now going back to the Dark Ages and looking at chicken entrails all of a sudden.

No. I agree. And to be fair too, there are smart people who have made the case that there are other kinds of scientific magic, like people who claim that the brain is quantum.

I mean Penrose and his…

Exactly. And I happen to think that's outlandish and almost certainly not true. But it doesn't shift the ground. I think you're right. There's a piece of it which I think is at some level reductionist, but the claim is so limited that I think we could probably agree on it, which is to simply say that the mind is a product of the brain. There's no magic. And I think we can agree, or at least some of us agree, that there is enough complexity in what the brain does and how the brain is built.

There's plenty of headroom to have these emergent properties like intelligence, like subjective experience, like consciousness. And then in some ways, if we agree on a scientific materialism, there could be things we don't know about. But from where we're sitting right now it looks to me, at least, like there's enough complexity there to account for it. Now, I think what you're saying—which I also agree with—is there's no guarantee that we could ever understand that, right? It may just be so complex that we never do. The claim is simply saying, “Hey, it has a physical basis and there's no supernatural component to it.”

My early question to you was, “Are we going to build machines as versatile as humans?” And you said there's no question.

Yes. For me, this is great. My guess is that we will. That's me putting my thumb up in the air and testing the winds and making a judgment about whether we'll get there or not. I don't think there's any hard guarantee—even if we accept the brain as a computer, even if we accept that everything is material, even if we accept that it doesn't require any physics we don't basically know about, even if we accepted that quantum was important.

I think we're on our way to getting to harness quantum and understand it as well. And that's certainly something that IBM plays a fair bit in as well, and we're making progress. But I think there are two questions, which end up being much more like making value judgments or making guesses. Will we be able to understand it? And my guess is yes, I think we're going to be able to get there. I don't think there's any barrier that's going to prevent us from getting there. Then the second question is ‘How long will it take?’ And that's the place where I think you can ask scientists ‘if’ questions and they're not always going to be right, but they'll be more right than if you ask them ‘when’ questions, because I think it's very hard to predict when we can get a handle on some of these things.

And I think we're at an interesting inflection point right now in AI where we've made tremendous progress very quickly. In like five or six years, we went from things that, when I was in grad school, I wasn't sure we were ever going to be able to address, and all of a sudden, we have these technologies. There's an open question whether the party keeps going or whether we flatten off again and progress becomes much, much slower, and then we have to wait until the next advance. I think those things are anybody's guess, but my guess is that we're going to solve these problems. We're going to build systems that are intelligent and potentially even as intelligent as [we are], and that might even happen in our lifetimes.

But it would probably be better if we didn't, right? Because if we build things that intelligent, they presumably might have subjective experience and feel pain, and we would actually never know, right? Philosophy doesn't even acknowledge that I exist to you. Then all of a sudden, if they become entities that can feel pain, then you can no longer have them plunge your toilet. You can no longer throw them on the heap when they're done. So, do they?

And it's taken us a long time to get… In the ‘90s, veterinarians in this country were still trained that animals can't feel pain, that there was just a reaction, just like if you poke an amoeba, it recoils, but you don't think the amoeba feels the pain. And so one wonders, if it is as straightforward as I seem to think you're saying, whether we have simple systems now that could feel some very simple form of pain and suffering.

Yes. I think that's why this distinction between Broad AI and General AI is actually really helpful.

That's right. That's what I was asking you. Are you just making a better clockwork? You know it's still, you wind the clock and it unwinds or are you building something that could potentially be the first steps of something that could experience something?

Yes. To be clear, nobody here at IBM, nobody at M.I.T., I would say basically nobody today is credibly working on anything like a system that could feel pain or pleasure or anything. That's just sort of not what people are [doing]. It's not even what the battle lines are.

It could be that it comes about accidentally. Like, does a tree feel pain? How would we know? And we share half our DNA with them. So how would we know if a system we built got enough complexity that it had this emergent property, that it could experience? How would you know?

And then philosophically we can't know for sure that the people around us are conscious.

Right.

Lack of experience is the leader. I think that the key thing though is just that these are great philosophical discussions, but we're just so far from that. The systems we're building are very simple. I think what we're aiming for and what our goal really is, it's not to build a marginally better learning system, but it's also not to build something that has subjective experience or anything like that.

We're really trying to say, “How can we get more of the real structure in data?” because many of the problems we want to solve are hard problems. They're important problems for humanity to solve. How do we build systems that can start to get a little bit of that logical reasoning, a little bit of that commonsense knowledge that I mentioned earlier? That's really important for us to get so we can solve these problems. And I think questions like ‘Will we have systems that can have subjective experience?’ are secondary to that.

But in all fairness, you did say it could happen in our lifetime.

I think it could.

So it isn’t premature to even think about these things now.

I don't think it is. One thing I would say is, philosophers absolutely can, are and should be thinking about these kinds of issues. I think though in terms of the broader public debate on these issues, I think there's a lot of ethical issues we should address before that.

Absolutely. Privacy and explainability, and we can certainly launch into all of those if you're curious. You're just a unique dude: you're a neuroscientist. You have a Ph.D. in Neuroscience from M.I.T. If anybody's at the forefront of thinking about this, I would expect it would be somebody like you who has a foot in two worlds - this world of biology and this world of ‘whatever we're going to call it.’ So, I'm almost done with all of this and I'm so excited to talk about what you're doing because it'll be the real stuff.

I only have two more questions to ask you along these lines. The first is this huge disconnect: when I have people on the show (and I've had 100) and I ask, “Are people machines?” (and really, I'm just asking that mechanistic question in a more direct way), all of them, with the exception of four, say, “Yes, of course we're machines. Of course.” When I put that question on my website to the general public, only 15% of people say yes. 85% of people say no.

And I think if you were to drill down they would say simply, “Well, computers don't have souls,” or they would appeal to something that is beyond what science can measure and deal with. And so I personally see this huge disconnect between the group of people who work on these technologies, who have this view that people are machines, and then this whole world that doesn't believe that. But they don't really know that that's the underlying assumption.

They would all sleep better at night. Like, if Elon [Musk] is worried about AI, General AI, he's worried about it because he thinks people are machines and therefore someday we'll build a mechanical person. And then 18 months later, it'll be twice as smart. A few months later, twice as smart again, ad infinitum. If people knew that the underlying assumption of that was that people are solely machines, suddenly they would feel better. Do you think this intellectual disconnect that I am talking about is a real thing, and do you think it matters? Or do you have no opinion about it?

I think absolutely that disconnect matters. There's a big disconnect. I think there are disconnects across the spectrum of science and society. There's a lot of things that scientists believe because they're in contact with… they're doing that day to day. Whereas the public, if you aren't exposed to that, you have intuitions and you know there's lots of good psychology and social science that says we operate a lot by intuition.

Having been a neuroscientist in my past and putting electrodes in brains and measuring activity of neurons, you're in the business of studying the machine. You can make the machine do different things. You can see how it's working. There's a very engineering mindset that neuroscience takes to understanding the brain. The one thing I think the public also doesn't understand is that science isn't out to try and take everything, right? It's not like all the territory has to be science—it’s not like all the territory can be science.

You asked the question about consciousness. I don't know that we have the ability to ask scientific questions about subjective experience. If you can't frame things in such a way that they're falsifiable—this is sort of the Karl Popper view of science—then they're not science, and that's not to say they're wrong. It's not to say they aren't important. It's just to say that science isn't the tool to get at those.

I think there's a sense in which people, when they say and when they hear the words “we're just machines,” they think that we're making a pejorative statement about the quality of the human experience and that's not true. We can be wonderfully complex beautiful machines and we can have all of the depth of human emotion and all of the literature that can be beautiful. There isn't anything that's incompatible between having us being machines and then having it be a machine that we could in principle understand.

We don't understand it, we're very far from understanding it, but in principle we can chip away at what we don't understand. That's not incompatible with the idea that we can experience beauty, that we can see a sunset and have that pleasure, that experience of the aesthetics of that sunset. Those things aren't incompatible, and I think that sometimes that's what's tied up in this public rejection of the idea that we could possibly be a machine… because I think that the feeling is ‘how could we possibly be machines when I can hear the voice of my daughter and feel happiness?’ Those things—to a scientist—aren't incompatible, and there's an extra layer of beauty, I would say, as we understand how intelligence works. There's just tremendous complexity both in the brain and also already in the artificial intelligence systems we're building, and understanding that has an aesthetic beauty to it; mathematics has an aesthetic beauty to it.

I think that's part of where the disconnect comes from. I don't know how to resolve that disconnect other than just to constantly be in dialogue with the public, with government, with policymakers, so that they understand what the technology is. They know what to be worried about and what not to be worried about. And I think your show [and] other shows like these [are] clearly a sign that the public is interested. I think it's incumbent upon us to meet that interest as much as we can, and we try. And certainly the academic community and businesses as well do our best to communicate to the public, but I think it's something we can always do more of.

The last question along these lines is, when I sit down and try to list all the people who are actually working on general intelligence, because 99.9% of the money that flows into AI just wants to solve a problem, just wants your coffee to taste good, or whatever it is that it's trying to do… When I list people that I think are working on general intelligence, I come up with like half a dozen. Do you think that the amount of resources allocated to that problem is as tiny as I suspect it is, or would you be privy to more information? Do you think there are groups of people that are sitting around thinking about that?

I think the question is, “Which part of general intelligence?” Are there people who are working on understanding natural general intelligence? Absolutely. Those are the fields of psychology and, to a lesser extent, neuroscience. I think neuroscience has a certain kind of humility to it as a field. There are people who accuse neuroscientists of being reductionist, but they're staying in their swim lane and asking questions they know how to ask, which I think is good. In terms of artificial general intelligence: again, I think the reason there isn't as much research about that [is] partly because it's maybe in many ways too early. I think a lot of people are looking at it and saying, “Look, we don't even understand how the narrow things we build work, like neural networks.” It's safe to say we don't really fully understand how the things we've built work. And that leads to all kinds of confusion, like adversarial examples. We can hack neural networks, and that's a vulnerability that comes from the fact that we fundamentally don't fully understand what we've built. That's one side of it. It's just too early. We don't even understand the simple things we built. So, there's a lot of work to go.

And then the other side of it, I think, and one of the reasons why there aren't as many people working on general intelligence (you've alluded to this already), is that I'm not sure that's necessarily a desirable thing in the grandest version of those goals. Like, I don't know that we want to go and ask questions about [whether we] should build a system that can feel sad, or feel pain, or feel suffering. It might be an interesting intellectual exercise to think about whether such a thing could be built. But I certainly don't want to be building that, and I think most of the people who I work with and who I interact with are much more pragmatic about building technologies that help humanity, right?

And then again this is where I think this ‘Broad versus General’ divide is helpful, because there's plenty of room for us to improve artificial intelligence in this broad range where we're not trying to build systems that are conscious. We're not trying to build something that has these emergent properties. We're just trying to endow machines with those parts that we're good at, so that we can free up humans from them, and so we can go beyond what we can do and again solve hard problems that help humanity.
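[Editor's note: a minimal sketch of the adversarial-example phenomenon David mentioned a moment ago, using the fast gradient sign method. The model and image here are random placeholders, so the code only shows the mechanics: a small, gradient-directed nudge to the pixels that can change what a network predicts, precisely because we don't fully understand what it has learned.]

```python
import torch
import torch.nn as nn

# A throwaway linear "classifier" standing in for a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

image = torch.randn(1, 3, 32, 32, requires_grad=True)   # placeholder input
true_label = torch.tensor([3])

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()                                          # gradient of the loss w.r.t. the pixels

epsilon = 0.03                                           # small perturbation budget
adversarial = image + epsilon * image.grad.sign()        # nudge each pixel uphill on the loss

print("clean prediction:      ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```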

So, thank you for all of that and I learned a lot and I appreciate your being so patient with my questions. Now, we'd love to shift the conversation to you—you're the Director of the MIT IBM Watson AI Lab at IBM Research. Tell me about that partnership. What is different about the MIT IBM Watson Lab that is new or innovative?

Industry and academia have always had connections to each other. Industry has always funded academic research, for instance, and when I was an academic, we had lots of funding from industrial sources and it was good. And I think that's healthy. The thing is, in today's landscape with AI, that interaction between industry and academia is a little bit fraught.

One thing that's happened that I think is unique in the history of academia is the extent to which industry is basically poaching people out of academia, like whole departments. Uber, for instance, funded a bunch of pilot projects with Carnegie Mellon, and they liked those projects so much that they sort of scooped up the entire department en masse, moved them a few blocks down the street, dropped them down and said, “Okay, now you're in Pittsburgh.” And Carnegie Mellon was looking around saying, “What just happened? Our department got hollowed out.”

There's this flux of people out of academia, and I would argue that that's destroying something that's distinctive about academia, a little bit of ‘eating the seed corn’ or ‘killing the golden goose,’ whichever metaphor you want to use. We were looking, at IBM, for ways we can embrace academia, support academia and really get something out of it that would drive real impact.

The other mode of interaction that academia and industry often have is simply that industry will fund academic research. For instance, when I was at Harvard, Google or other companies would give us some money. And typically it was not a large amount of money, and we'd sort of say thank you, and go off and do whatever we were going to do. Really, what they were doing in many ways was sort of buying an option on hiring our students out from under us, and we all understood that. It was fine. It was good, but they weren't really extracting real value out of that collaboration, and as a result, it also always tended to be kind of small.

When IBM looked at how we were going to arrange our academic strategy, there was a new idea, a different idea, which was: could we form a lab that really lets academia be academia, where we can support them, but that is also deeply collaborative?

The way our model works is that the lab was founded, it was announced a little bit over a year ago: almost a quarter-billion-dollar investment over 10 years to form a joint lab. The way we work is that all of our projects are actually collaborations. Every project is jointly conceived and jointly executed by IBM researchers and MIT faculty. I should say IBM Research—we're a 5,000-person (give or take) research organization. We're the size of a medium-sized university. We're meeting together, we're asking what's the next thing that will move the needle in AI. We want to do that jointly together with our MIT colleagues, and we're structured in such a way that we make all of our decisions together.

I'm the Director on the IBM side of the lab, but Antonio Torralba, out of MIT CSAIL, is the Director on the MIT side, and we have a joint governance structure where we decide together what we're going to work on, and then those projects are executed together. I think that's a really powerful model that hasn't been tried at this scale before.

And where are you in the life cycle of that? How long have you been doing it? And do you have any kind of early results to share or what are your projects?

We have a portfolio of almost 50 projects that are running and again all of these are joint between IBM researchers and MIT faculty and the longest running of those projects have been running for about a year. It's still very early days, but we're already publishing in the top conferences.

On that neuro-symbolic theme that I mentioned earlier, where we're building hybrid systems that combine the strengths of neural networks with the strengths of symbolic AI systems, some of our researchers here at IBM, together with Josh Tenenbaum's lab at MIT, have built some of the first very exciting examples of neuro-symbolic hybrid systems. We've had recent papers in NeurIPS, which is pretty much the top conference today in AI, and also spotlight papers in ICLR, which is the next one, where we're sort of starting to explore building those little rockets rather than climbing the tree or climbing the mountain—building those simple systems that are starting to give us little glimpses of what the future might hold.

That's been very exciting. We have work in causal inference; we think that causality is very important, that something missing from today's AI is understanding cause and effect and not just correlation. We have a number of people working on projects in that area. Our portfolio is relatively broad. We also have projects in what we call the Physics of AI, asking how new kinds of computing hardware might influence the progress of AI, things like analog computing. How do we build non-digital systems that can help accelerate AI and potentially dramatically decrease the power it takes to run today's AI algorithms? And also quantum.

We have projects where we're working with Peter Shor at MIT, for instance, of Shor's algorithm fame, who is helping us think about how quantum and AI might interact with each other. I think nobody really knows the answer to that question, but quantum computing is a fundamentally new, fundamentally different way of doing computing, and it's anybody's guess how these two fields will interact with each other. That gives you a little bit of a sense of the breadth and the long-range thinking that we're doing at this lab.

How do you decide what to do? It’s kind of like a great frontier. You're in the early 19th century heading west across the United States and you can go any direction as far as you want. How do you decide what to do?

That's a fantastic question. And that is the hard part of this job. It's also the heart of it, and it's also the fun part of this job, right? M.I.T. has an incredible breadth of activity going on. We at IBM Research have an incredible breadth of activity in many ways. I think it's as much a question of what shouldn't we do? What should we decide not to do?

And one thing we decided not to do very early on was applications research, even though you might think that IBM would be very focused on applications. We are a company, we serve enterprises. Surely, we must be interested in applications, and we are interested in applications. We're not a philanthropy. We're trying to serve our customers, help them do their work better. But for this tool that we have in our chest, the MIT-IBM Lab, we wanted to really focus and say we're not going to do applications research. We're going to do fundamental research. We really need to drive towards things that will change how we do AI, not simply apply it to existing problems. And in our portfolio, we have about 50 projects right now.

The other key piece of that from a structural level is we don't decide one course and then just stay on it slavishly. We didn't just say, “Okay 10-year, it's a quarter billion dollars over 10 years, Go.” It's really a rolling portfolio. We are constantly curating. We bring on new exploratory projects every year. We've designed our programs so that we can always bring in new projects. And then if those projects are doing well and we feel like we're actually making a difference, we can continue those projects [for an] arbitrarily long [time]. But we think that having that dynamic lifeblood where we're constantly working with different parts of MIT—whoever has the best ideas at any given moment, that's where we're going to go.

You know, I'm really intrigued by transfer learning, because it seems to me you can take a toddler, show them four pictures of a cat, and they seem to recognize cats. But if they see a Manx cat, one of those without a tail, they'll say, “Look, there's a cat without a tail,” even though they'd never been told there was such a thing as a cat without a tail, like that wasn't even a real category.

Likewise, when I come across bots that purport to be kind of Turing test contenders, I ask them, “What's bigger, a nickel or the sun?” And I've never had a single one answer that question correctly. Both of those seem to be part of what you were talking about earlier—that commonsense model of the world.

I'm trying to grasp the level of problem you're looking at, because at a high level it's ‘how do humans learn?’ At a low level it’s ‘how do you tell the difference between a dog and a cat?’ And then there's every level in between. Is that problem of how does transfer learning work or how do we generalize the world or how do children learn with so little simple [information]? Is that the kind of granularity of the problems you're thinking about or are those too broad or too narrow?

No. That's exactly where I think we need to go. So transfer learning is absolutely the right framing, but sometimes it implies a narrowness, like how we transfer from handwritten digits to digits on street signs or something like that. And that's fine, and there's some work going on that's very good and very helpful in that area. We do some of that work. We had a paper in NeurIPS specifically on that issue of transfer learning, but I think where we're going, or where we need to go, is much more about structure and common sense. Like, how do we build a system that, when it sees the cat, doesn't just see a collection of texture and fur and say, “OK, statistically it's probably a cat”?
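[Editor's note: a minimal sketch of the narrow style of transfer learning David contrasts with common sense: freeze a trained backbone and retrain only a new final layer on a different task. It assumes a recent torchvision API; weights=None keeps the sketch self-contained (in practice you would load pretrained weights), and the random tensors stand in for the new task's data.]

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)      # stand-in for a pretrained network
for param in backbone.parameters():
    param.requires_grad = False               # keep the learned features fixed

# Replace the final layer with a fresh head for a hypothetical 10-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=0.01)
images = torch.randn(8, 3, 224, 224)          # placeholder for the new task's images
labels = torch.randint(0, 10, (8,))

loss = nn.CrossEntropyLoss()(backbone(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The transfer reaches only as far as the statistics the backbone already
# captured, which is why this framing stays "narrow."
```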

For instance, one thing I sometimes show in talks I give: there's a piece of art at MoMA called Le Déjeuner en fourrure, which is “luncheon in fur.” It's a fur-covered teacup, saucer and spoon. It's kind of a little bit of an unsettling image. If you show that to many of today's state-of-the-art, supposedly superhuman image recognition systems, they'll tell you it's a teddy bear or something similar. The image is clearly not a teddy bear. I'd say it's the least teddy bear image you could possibly imagine. What's happening is the neural network is sort of seeing the textures, and the closest thing it's seen in its data set before was a teddy bear. That's what it settles on.

But as you say, what we do is look at the structure of things. OK, that's an object [and] it's shaped like this. It's weird. It's not like anything we've ever seen before. You know, [with] a cat or a dog, you can see a statue of a cat or dog—it could be metallic, and it could be pink and polka-dotted. You don't have to have seen something like that before to be able to reason about the shape of the object, the material properties of the object, the configuration of its parts. That flexibility, that abstraction of real structure, and then bringing to bear all kinds of knowledge—I think that's a big part of the magic, to use the word ‘magic’ again.

That's a big part of the magic of how we think that's different from how today's Narrow AI Systems, today's deep learning systems work. We're very much interested in how you abstract that structure and then critically be able to reason about that structure. And I think this notion—we like to separate in our own internal discussions between learning and reasoning. Learning is abstracting that statistical structure but then being able to flexibly reason over it, to be able to draw connections or work in settings you haven't seen before. I think that's a big part of the nut that we're trying to crack within this lab.

I haven't heard you mention anything about embodying it. Do you think that intelligence needs to be embodied in some form that kind of gives it this feedback loop where it can create a model of the world, or can it all be done in memory?

That's a great question, and I think it's anybody's guess. We're not working heavily in the area of actual physical robotics right now, but we do have projects that work heavily in simulation, and I do think having the ability to act on the world… We do also interact quite a bit with psychologists who study how children learn and how they interact with the world. And I think that's a really interesting and valuable perspective, because there is quite a bit of active learning that goes on—active perception, where you move the object and you can infer properties about it. So in simulation at least, that's absolutely something that we are very interested in working on.

I wrote a book about whether computers can become conscious, so I'm deeply interested in the question, but I do think that consciousness gives us a few powers that it's hard to see their analog in computers, and one is we can change focus, right? You can look at part of the image and you kind of think ‘oh maybe up there…’

I'm sure you're familiar, being a neuroscientist, with the whole range of experiments where you show a little kid… you take an object and you put it in a drawer while their dad is in the room. They see it go in the drawer, and then, where's the object? It's in the drawer. But there's an age at which the object can go in the drawer when the dad didn't see it, and you can ask the kid, “Where does your dad think the object is?” And the child is able to put themselves in the dad's place and say, well, he didn't see it get put in the drawer, he probably thinks it's still wherever it was.

That ability to kind of shift focus also seems to be something, and so does being able to imagine what somebody else is thinking because I have a concept of them as a self, as an entity. I've always thought, in the back of my head at least, that both of those can kind of be faked in code, but…

I guess all this is building up to: do you think there are going to be some tricks we learn, some ‘Aha!’ moments where we make these quantum advances, or do you think it's a long slog of tiny increments where we slightly get better, slightly get better? Is there going to be some ‘apple falls on somebody's head’ moment where suddenly, aha, we get it and we can shoot way ahead?

The history of science has been one of punctuated equilibrium. We make rapid advances and then we slow down, and sometimes we go in the wrong direction. Historians of science have written about this again and again. People studied phlogiston, and PhD theses were written about phlogiston, the material that supposedly comes out when you burn something, and it turns out that was all wrong. It turns out that this was oxidation and burning, and finally, when we understood that, progress moved very quickly.

You know, we're going to have that in AI too. We're maybe going to make incremental progress in the wrong direction now, because we've made that punctuated leap, and now we're stirring around until we find our next punctuated leap.

I think the things you're talking about… the idea is called theory of mind, where you understand, or you have a model of, what other agents think. The part that feels to me like what we're driving towards is that it's really about modeling the world. And you can model the world in a very near-term sense: how will the object look a moment from now, or if I push something, how will it move?

And I think if we take that idea of building models in the world that let us predict what's going to happen next, that naturally takes us to a place where we start to build models of people too, build models of other agents to predict how they're going to react in different situations; how they're going to behave given what they know, based on what we've observed of them knowing.

I think that modeling framework is really going to be the guiding light for a lot of research, and as that progress moves, we're going to have insights that drive us and we're going to find new techniques that drive us, and it's going to be very hard to predict. I think progress is going to be choppy, but I feel like that framing, where we're really just asking how we build models of the world that are useful, that let us plan what to do next to achieve goals—that's going to be, for at least the next 10 years, I think, a huge principle in the progress of AI.

Well that's a great place to leave it. Thank you so much. If people want to keep up with what you are doing, what's the best way to do that?

You can go to our website – mitibm.mit.edu and of course, we’re publishing a lot and we'll be posting all our work there and we're very excited to share and interact with the community at large.

And what about you personally? Do you give talks? Do you have a Twitter account? How can people keep up with you?

I do give talks and I occasionally blog, and I am on Twitter. My Twitter handle is @neurobongo and you're welcome to follow me there.

All right. It’s been fascinating. Thank you, David.

Thank you.