Episode 79: A Conversation with Naveen Rao

In this episode, Byron and Naveen discuss intelligence, the mind, consciousness, AI, and what the day-to-day looks like at Intel. Byron and Naveen also delve into the implications of an AI future.


Guest

Naveen Rao is an electrical engineer and neuroscientist who currently works at Intel as Corporate VP and General Manager of the Artificial Intelligence Products Group.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. Today I'm excited that our guest is Naveen Rao. He is the Corporate VP and General Manager of Artificial Intelligence Products Group at Intel. He holds a Bachelor of Science in Electrical Engineering from Duke and a Ph.D. in Neuroscience from Brown University. Welcome to the show, Naveen.

Naveen Rao: Thank you. Glad to be here.

You're going to give me a great answer to my standard opening question, which is: What is intelligence?

That is a great question. It really doesn't have an agreed-upon answer. My version of this is about potential and capability. What I see as an intelligent system is a system that is capable of decomposing structure within data. By my definition, I would call a newborn human baby intelligent, because the potential is there, but the system is not yet trained with real experience. I think that's different than other definitions, where we talk about the phenomenology of intelligence, where you can categorize things, and all of this. I think those are the outcroppings of having actually learned the inherent structure of the world.

So, in what sense by that definition is artificial intelligence actually artificial? Is it artificial because we built it, or is it artificial because it's not real intelligence? It's like artificial turf; it just looks like intelligence.

No. I think it's artificial because we built it. That's all. There's nothing artificial about it. Intelligence doesn't have to run on biological mush; it can be implemented on any kind of substrate. In fact, there's even research on how slime mold, actually...

Right. It can work mazes...

…can solve computational problems. Yeah.

How does it do that, by the way? That's really a pretty staggering thing.

There's a concept that we call gradients. Gradients are just how information gets more crystallized. If I feel like I'm going to learn something by going in one direction, that direction is the gradient. It's sort of a pointer in the direction I should go. That can exist in the chemical world as well, and things like slime mold actually use chemical gradients that translate into information processing and actually learn the dynamics of a system. Our neurons do that. Deep neural networks do that in a computer system. They're all based on something similar at one level.
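
[To make the "pointer in the direction I should go" idea concrete, here is a minimal Python sketch of following a gradient downhill on a made-up quadratic loss; the loss function, step size, and starting point are illustrative assumptions, not anything described in the conversation:]

    def loss(x):
        return (x - 3.0) ** 2          # a toy "how wrong am I" measure, lowest at x = 3

    def gradient(x):
        return 2.0 * (x - 3.0)         # slope of the loss at x; points "uphill"

    x = 0.0                            # start somewhere arbitrary
    for step in range(100):
        x -= 0.1 * gradient(x)         # step against the gradient, i.e. downhill

    print(round(x, 4))                 # converges toward 3.0, the minimum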

So, let's talk about the nematode worm for a minute.

Okay.

You've got this worm, the most successful creature on the planet. Seventy percent of all animals are nematode worms. It's got 302 neurons and exhibits certain kinds of complex behavior. A bunch of people in the OpenWorm Project have spent 20 years trying to model those 302 neurons in a computer, just to get it to duplicate what the nematode does. Even they say: “We're not even sure if this is possible.” So, why are we having such a hard time with something as simple as a nematode worm?

Well, I think this is a bit of a fallacy of reductive thinking here, that, “Hey, if I can understand the 302 neurons, then I can understand the 86 billion neurons in the human brain.” I think that fallacy falls apart because there are different emergent properties that happen when we go from one size system to another. It's like running a company of 50 people is not the same as running a company of 50,000. It's very different.

But, to jump in there… my question wasn't, “Why doesn't the nematode worm tell us something about human intelligence?” My question was simply, “Why don't we understand how a nematode worm works?”

Right. I was going to get to that. I think there are a few reasons for that. One is, the interaction of any complex system – hundreds of elements – is extremely complicated. There's a concept in physics called the three-body problem: if I have two pool balls on a pool table, I can 100 percent predict where the balls will end up if I know the initial state and how much energy I'm injecting when I hit one of the balls in one direction with a certain force. If you make that three, I cannot do that in closed form. I have to simulate steps along the way. That is called the three-body problem, and it's computationally intractable to compute exactly. So, you can imagine when it gets to 302, it gets even more difficult.
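
[A toy illustration of the "simulate steps along the way" point: three gravitating bodies in two dimensions, advanced in small time steps because no general closed-form answer exists. The masses, starting positions, velocities, and step size below are made up for illustration:]

    import numpy as np

    G = 1.0                                        # gravitational constant in toy units
    mass = np.array([1.0, 1.0, 1.0])
    pos = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # 2-D positions
    vel = np.array([[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]])  # 2-D velocities
    dt = 0.001                                     # size of each simulated step

    def accelerations(pos):
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * mass[j] * r / (np.linalg.norm(r) ** 3 + 1e-9)
        return acc

    for _ in range(10_000):                        # no formula for the end state; just step forward
        vel = vel + accelerations(pos) * dt        # semi-implicit Euler update
        pos = pos + vel * dt

    print(pos.round(3))                            # where the three bodies ended up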

And what we see in big systems like mammalian brains, where we have billions of neurons rather than 300, is that you actually have pockets of closely interacting pieces that then interact at a higher level. That's what I was getting at when I talked about these emergent properties. So, you still have that 302-body problem, if you will, in a big brain just as you do in a small brain. That complexity hasn't gone away, even though it seemingly is a much simpler system. The interaction between 302 different things, even when you know precisely how each one of them is connected, is just a very complex matter. If you try to model all the interactions and you're off by just a little bit on any one of those things, the entire system may not work. That's why we don't understand it: because you can't characterize every piece of this, like every synapse… you can't mathematically characterize it. And if you don't get it perfect, you won't get a system that functions properly.

So, are you suggesting by extension that the Human Brain Project in Europe, which really is… You're laughing and nodding. What's your take on that?

I am not a fan of the Human Brain Project for this exact reason. The complexity of the system is just incredibly high, and if you're off by one tiny parameter, by a tiny little amount, it's sort of like the butterfly effect. It can have huge consequences on the operation of the system, and you really haven't learned anything. All you've learned how to do is model some microdynamics of a system. You haven't really gotten any true understanding of how the system really works.

You know, I had a guest on the show, Nova Spivack, who said that a single neuron may turn out to be as complicated as a supercomputer, and it may even operate down at the Planck level. It's an incredibly complex thing.

Yeah.

Is that possible?

It is a physical system – a physical device. One could argue the same thing about a single transistor as well. We engineer these things to act within certain bounds… and I believe the brain actually takes advantage of that as well. So, a neuron… to completely, accurately describe everything a neuron is doing, you're absolutely right. It could take a supercomputer to do so, but we don't necessarily need to extract a supercomputer's worth of value from each neuron. I think that's a fallacy.

There are lots of nonlinear effects and all this kind of crazy stuff that are happening that really aren't useful to the overall function of the brain. Just like an individual neuron can do very complicated things, when we put a whole bunch of [transistors] together to build a processor, we're exploiting one piece of the way that transistor behaves to make that processor work. We're not exploiting everything in the realm of possibility that the transistor can do.

We're going to get to artificial intelligence in a minute. It's always great to have a neuroscientist on the show. So, we have these brains, and you said they exhibit emergent properties. Emergence is of course the phenomenon where the whole of something takes on characteristics that none of the components have. And it's often thought of in two variants. One is weak emergence, where once you see the emergent behavior, with enough study you can kind of reverse engineer… “Ah, I see why that happened.” And one is a much more controversial idea of strong emergence that may not be discernible. The emergent property may not be derivable from the components. Do you think human intelligence is a weak emergent property, or do you believe in strong emergence?

I do in some ways believe in strong emergence. Let me give you the subtlety of that. I don't necessarily think it can be analytically solved because the system is so complex. What I do believe is that you can characterize the system within certain bounds. It's much like how a human may solve a problem like playing chess. We don't actually pre-compute every possibility. We don't do that sort of brute force kind of thing. But we do come up with heuristics that are accurate most of the time. And I think the same thing is true with the bounds of a very complex system like the brain. We can come up with bounds on these emergent properties that are accurate 95 percent of the time, but we won't be accurate 100 percent of the time. It's not going to be as beautiful as some of the physics we have that can describe the world. In fact, even physics might fall into this category as well. So, I guess the short answer to your question is: I do believe in strong emergence that will never actually 100 percent describe…

But, do you think fundamentally intelligence could, given an infinitely large computer, be understood in a reductionist format? Or is there some break in cause and effect along the way, where it would be literally impossible? Are you saying it's practically impossible or literally impossible?

...To understand the whole system top to bottom, from the emerging...?

Well, to start with, this is a neuron.

Yeah.

And it does this, and you put 86 billion together and voilà, you have Naveen Rao.

I think it's literally impossible.

Okay, I'll go with that. That's interesting. Why is it literally impossible?

Because the complexity is just too high, and the amount of energy and effort required to get to that level of understanding is many orders of magnitude greater than the complexity of what you're trying to understand.

So now, let's talk about the mind for a minute. We talked about the brain, which is physics. To use a definition that most people I think wouldn't have trouble with, I'm going to call the mind all the capabilities of the brain that seem a little beyond what three pounds of goo should be able to do… like creativity and a sense of humor. Your liver presumably doesn't have a sense of humor, but your brain does. So where do you think the mind comes from? Or are you going to just say it's an emergent property?

I do kind of say it's an emergent property, but it's not just an emergent property. It's an emergent property that is actually the coordination of the physics of our brain – the way the brain itself works – and the environment. I don't believe that a mind exists without the world. You know, a newborn baby, I called intelligent because it has the potential to decompose the world and find meaningful structure within it in which it can act. But if it doesn't actually do that, it doesn't have a mind. You can see that… if you've had kids yourself. I actually had a newborn while I was studying neuroscience, and it was quite interesting to see. I don't think a newborn baby is really quite sentient yet. That sort of emerges over time as the system interacts with the real world. So, I think the mind is an emergent property of brain plus environment interacting.

So, in that view of the world, a computer could have a mind or not?

Absolutely, yes.

Interestingly, just to put a pin in what you just said, you said the baby wasn’t sentient?

Yeah.

That technically means able to sense or able to feel pain. People actually used to think that, by the way… up until the ‘90s we did open-heart surgery on babies without anesthesia for that very reason. But I am assuming you...

What I mean by sentient is the signals of nociception go to the brain and drive what we call pattern behavior. That is absolutely true in a baby. What I don't believe is necessarily true is that there is a self-awareness… there is not a mind that's formed yet that actually perceives in the same way that an adult does.

OK. Let's talk about consciousness. I like definitions, so I'm going to call consciousness the experience of being you. It's the difference between a computer's ability to measure temperature and your ability to feel warmth. Does that come from the mind? The brain or the mind – we won't worry about which. Do you think that's a mechanistic property?

Well, it's a function of the mind, not just the brain. The brain is part of the mind, as I said. The coordination between how the brain itself is physically modified and the real world is the mind… in my verbiage. So, the brain is part of it. The nociceptive signals, or the temperature signals, arrive at the brain and go through some sort of processing hierarchy, which is interpreted relative to the experience of that system – that brain – throughout its life… that's what actually made it have a mind. All these things hang together in a framework. It's not that one thing negates the existence of the other. It's that you need the necessary ingredients to get to a mind, and a mind is what actually feels these sensations. So, in neuroscience we discriminate between what we call the signal and the percept. I think that's what you're getting at. The percept is a function of the mind, and we don't really talk about that in neuroscience because it's not really testable, or not easily testable. The sensation or the signal is what we call a physical thing that can be tested.

So, you're saying consciousness is the percept?

Correct. Yes.

So, just that idea that matter can have a first-person experience of something, that matter can experience the world… It's hard, because I don't think my iPhone perceives the world. I don't think it perceives or experiences the world one one-millionth of the way I do. It doesn't even begin to register. So, what do you think is going on in us that is different?

I will answer that, but I'm going to walk you through a thought experiment first. OK, you said your iPhone doesn't experience the world. I probably agree with that. Does a bee experience the world? I don't know. Maybe a little bit. Does a lizard experience the world? I don't know. Maybe a little bit more. Does an alligator? I can walk up the hierarchy of animals and ask: “Do you actually believe this animal experiences the world and this one doesn't?” If I said dog, you'd probably say yes. If I said chimpanzee, you would say definitely. So, there is definitely a gradient here.

Okay, but to pause there… I'll grant that. I'll say all life has some degree of experiencing the world. So, why isn't my iPhone alive?

Well, that's what I was going to get to. Each one of those systems that I've described, all the way from the bee to the chimpanzee, they adapt to the world… to a different degree. Your iPhone doesn't really adapt to the world with any kind of a quick loop.

But my Nest thermostat does.

Yeah, to a certain degree, it does. On a few parameters, it does. So, maybe you could argue that it does have some sort of experience of the world.

But you chuckle when you say that because it's preposterous to think, isn't it? You didn't chuckle when you said bee. You said maybe. But, hah, maybe the Nest, sure… because intuitively we feel, “No, of course not. It's a bunch of screws and whatever.”

That's right. But I think that's the piece that we have to start thinking differently about… that machines can start adapting to the world, and actually start developing this kind of mind if you will, where you have the hardware, and then you have the experience that modifies the physical hardware in some way. Or it could be software, which is still modification of physical hardware, and that actually leads to a mind that experiences. It's not going to be the same degree of experience that a human being has, because our brain is actually very, very adaptive. But it does have some kind of experiential view of the world.

What about systems like an ant colony or a beehive, where the system itself as a whole behaves… where the system as a whole adapts, and it responds... I grew up as a beekeeper. I raised bees. I used to be really interested in bee lore. There's an old belief – kind of “nod nod, wink wink” – that if a beekeeper ever dies, you must go tell the beehive. You have to go out there and say, “You know, your keeper's dead. I'm the new keeper.”  No bee understands, but somehow, collectively, the swarm does. Is that possible? Not specifically the bee [example], but that these sorts of disconnected living systems could have their own form of consciousness in the way you see the world?

It's not only possible, it is. I don't think it's fair to call them disconnected. There are chemical signals. Bees interact in a coordinated fashion. Each bee doesn't really know what's going on necessarily, but the whole system does. It can react to threats. It can actually do global optimization. Our brain is not so dissimilar. Our neurons are kind of… call them dumb or whatever you want… individually they're not going to know what the whole system is doing, but they interact in a way in a system that makes them adapt and give rise to this emergent mind. There's a whole theory by Christof Koch.

Of course, IIT [Integrated Information Theory].

Yes, exactly.

He was on the show. So, would you say there is any linear association? We have 86 billion [neurons]; there are 50,000 bees in a hive. One bee doesn’t equal one neuron, but is that any indicator to you of the relative experience of those two systems?

I think it's difficult to compare those two, because each individual element is so different. But I think there is some loose linearity between them. The way to look at it, as I look at it, is in informational capacity or representational capacity. Each element can have so much noise associated with it and has a distribution of how it can behave. That can represent something about the world. That's true with a neuron; it's true with a bee. And when you put so many of them together and they have so much interaction bandwidth between them, they can represent so much information as a system.

This is how I look at the brain or look at any of these complex systems. How much informational capacity does it have? The big breakthrough we're seeing today with deep neural networks is really about the ability to represent information in this stacked hierarchical sense, which is how the world is really built, and we can actually represent much more complicated scenarios about the real world, rather than trying to represent everything flatly.
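
[A minimal numpy sketch of the "stacked hierarchical" point: the same input pushed through one flat linear map versus a stack of layers, where each layer re-describes the output of the one below. The layer sizes and random, untrained weights are illustrative assumptions only:]

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(1, 64))                   # a raw input, e.g. pixel values

    # Flat representation: one linear map straight from input to output.
    W_flat = rng.normal(size=(64, 10))
    flat_out = x @ W_flat

    # Stacked representation: layers compose, building features of features.
    W1 = rng.normal(size=(64, 32))
    W2 = rng.normal(size=(32, 16))
    W3 = rng.normal(size=(16, 10))
    h1 = np.maximum(0.0, x @ W1)                   # low-level features
    h2 = np.maximum(0.0, h1 @ W2)                  # combinations of those features
    deep_out = h2 @ W3                             # high-level readout

    print(flat_out.shape, deep_out.shape)          # same output shape, built very differently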

I promise I'll get to AI in a minute, but it's so fascinating to understand how you view the world, because I'm going to say, “So, take all that together, how do we teach computers to think?” I'm almost there. I have just a couple more… You know, when Christof was on the show, I said, “Well, sperm whales have bigger brains than we do.” And he said  they may actually be smarter than us.

They might be. But do you know what the difference is?

Language.

Language, exactly – the I/O between each one of these local systems.

So, let me give you three more. How about the Gaia hypothesis – that all living things, all the trillions and trillions of living creatures on this planet, together form a single consciousness? Just like our neurons are unaware of you – your neurons don't know you exist – we can't really perceive it, but we can posit that it's there and has a will and a direction and all of that of its own… Is that possible?

Again, I think it's not only possible, I think it's true. If you have any complex system where each individual element is not siloed but does have some interactions with other elements, you have this property. If you free your thinking and talk about what a mind is in terms of experience on top of a complex system, it becomes a little more obvious. The word mind has religious implications and all kinds of rationality associated with it. So, that makes people feel uneasy about these things. But if you put that aside and say, “You know what, it's just a complex system interacting with the physical world,” then it actually says, “Well, OK, the mind's not so complicated.” And that's true for any complex system. I absolutely believe in that.

Right. But that doesn't get you to consciousness. We could imagine a complex system that interacts with the world that doesn't perceive and experience the world. I mean, the minute I think my car is conscious, I can no longer floor it… I can't use it to pull an overweight load. I have to empathize with it if it's experiencing the world… when it runs out of gas, like that's a tragedy. I have to think, “Oh, I let it down.” Right now, we don't think that way about non-living things. 

Sure. But anything is an adaptive system. Cars are adaptive systems too by the way.

Right. Two more: do you think plants have minds?

By this definition, yes.

OK

Again, their experience of the world is much simpler, because the levels of adaptation they have are just much lower than what an animal brain can do.

I’ll give you the last one… the sun.

The sun? Is it a living system?

Could it be?

I'm going to say no.

Okay. Because Stephen Wolfram was on the show, and he said he thinks the weather does have a mind of its own… that a hurricane, in a sense – because computation is occurring – does. I'm not saying I believe the sun has a mind. I do think it's interesting that, culturally independent of each other, all children, when they draw pictures of the outdoors… what do they put on the sun? A smiling face.

Oh, I see.

At some level there's some deep association of it with a mind. So, all of that to say, do you think about this kind of stuff on your day-to-day, when you're trying to get computers to solve problems? Whatever problem you're trying to get solved… does all of this guide you? Or is your constant refrain: “Our brains are too complex. We can't do all this. We've got to think of intelligence in a completely different way”? Is that your guiding light?

Yeah, I think it's a guiding light. I wouldn't say it's an everyday thing. There's a lot of minutiae that goes along with trying to build something that becomes a product you can build these very complex systems out of. So, I think there is a guiding light of saying, “Well, how can we build a complex system that adapts to the world in a way that is similar – or, in your verbiage, that you would have an empathy for – and solves real problems today that we can get economic value out of?” So, there are a lot of pieces to that, right. But I do think that is where we're trying to go.

It's sort of the geek in me from the science fiction lore I read as a child over and over again. That's what kind of guides me and motivates me to move forward here – that we can build a better world, where we have not just biological intelligences, but also purpose-built intelligences that maybe we have some empathy for, but they have their lot in life, which they're happy to do, and which maybe we don't want to do. And we can build a more complex but better system for ourselves.

Is that itself though morally problematic… to build a race of slave machines and program them to do your will, and to whistle while they clean your toilet… while they experience the world?

So, the way that was posited was very ethnocentric, right. Is it a “slave machine” because we don't want to do those things? Would you say that a bird is a slave because it has to fly and go get food? Maybe it wants to fly because that's what it can do. So, I think we sort of anthropomorphize these things to a degree where we then apply a moral framework that applies to other humans because of our shared wants and desires.

You're right. But I guess what I'm trying to say is that if you built a machine that did experience the world, that had a mind, that had wants and dreams and is whatever percentage of a human… are there any moral obligations you have to that device?

I think that's a very hard question to answer. Probably yes. I don't think it's as simple as that… I can't just reflect it back on myself. I think that's the key, that we must think about it like, “Well, what was the purpose of this?” I would argue that humans really only have a few purposes. We have the purpose of survival and the purpose of procreation. Right? That's pretty much it. We have imbued it into our laws that those two things are what we as a human race are free to pursue. We would never put rules around those things. There have been attempts in the past, of course, but those we see as a basic human right. And so, we must come up with different rights for these engineered systems. They're not necessarily going to be the same as ours. And if you operate them outside of that, maybe it does cause some distress. So, it's a different kind of empathy that we must have for the other kinds of intelligences we build. It's not so simple as applying it back onto our human empathy.

But do you see a day where a machine… It sounded, as we were building up from the nematode worm to human consciousness, that every time I asked, “Can a machine do that?” You [answered],  “Sure.” … almost like it's not a question. So, not to put words in your mouth, but presumably a machine can be conscious.

Absolutely. Yes.

Do you believe that we are machines?

Yes.

So, we're conscious machines.

Yes.

Because we're conscious and experience the world and all of that, we have these human rights. We have things like: You can't torture a person. You can't abuse animals that we believe can feel pain. So presumably those same principles would guide how we treated conscious machines as well?

That's right, absolutely. And, interestingly enough, I think it should even guide how we treat other animals. I have human mammals at home. I have feline mammals at home, and felines are in a lot of ways similar to people. We can anthropomorphize and empathize with them and say if you step on their tails, it probably hurts and you shouldn't do that.

I also have a parrot, and birds are quite different than mammals. I think what we associate as OK is not necessarily OK with them. And some things that we think aren't OK are probably OK with them. The way they interact and the way they want to interact with other beings is actually very different than mammals. I think we even have examples of this on our planet today, but we just really like to live in our own head and empathize what we feel is right from a human perspective. That's not even true for other animals on the planet.

Let's say we wanted to avoid having conscious machines because you don't want to have to worry about things like… that SWAT team robot that's got to go in and dismantle that bomb...

Yeah.

Would there be in your mind any limit to the complexity of a machine we could build and manage to keep it from being conscious?

Well, that's an interesting question, and sci-fi has gone after this numerous times. In Blade Runner, they were retired after a certain amount of time because they start to become conscious. I think complexity and consciousness don't necessarily go together. It's complexity plus adaptability that will equal consciousness. So, if you can build a machine that does a rote task, even a very complex rote task, and it doesn't have a whole lot of adaptability in it, I can argue that it's not very conscious. And that's kind of what we're building today. We build inference machines that can plow through bunches of data and run very complicated perceptual models of computer vision. They're not really conscious because they're not adapting. They're not changing continuously with the environment that's presented to them. They're static. When you build a very complex system that is adaptable to its world, I do believe it's unavoidable that it'll become conscious.

You mentioned science fiction a number of times. What inspired you?

Yeah, you know… space was the big thing when I was a kid. I was born in 1975, so in the early ‘80s people were talking about space. But it was very interesting that if you look back at science fiction, the themes of space and AI actually went together. We had Arthur C. Clarke… the whole 2001 series… It was almost like we had to have some kind of AI system to do these bigger things like go into space, because it's so complicated. You see that very commonly throughout many of these different sci-fi books.

And so, I just found it extremely fascinating that we have this example of the machine that's an incredible computer — our human brain, or maybe a whale brain, depending on which one's smarter. We really don't understand it. We can't even take concepts from it yet, like we can by watching a bird and building an airplane. We couldn't do that with a brain yet. We're getting there, and I think that pursuit is what has motivated me.

That's what my career was about. Even as an undergrad, I was EE&CS [Electrical Engineering and Computer Sciences], but I did a lot of computational biology stuff, like looking at neuromorphic computing and things like that, really trying to find that inspiration. And it's amazing to me to be alive at this time, because those fields are actually coming together. That's why I found it very fascinating that CS and AI actually went together for a long time. We took our inspiration from what we understood about the phenomenology of the brain and tried to build computers around it.

And now we've gotten to a deeper understanding of the brain. We may have hit on some computational building blocks that allow us to build these brains. Just like we figured out how to build a wing that could actually sustain flight like a bird, I think we've kind of hit upon those basic building blocks.

So, we have this new trick in our AI quiver – machine learning – fueled by faster computers, better data collection and, frankly, better tool kits. What are the limits, do you think, of that model? I find it hard to believe that model is going to build something that can pass the Turing test. But it's really good… you know, its central assumption is that the future is like the past, because we're training it on data… so a cat tomorrow looks the same as a cat today. What do you think the limits are to what we can do with machine learning? Could we build a general intelligence with machine learning? Or is it just kind of the next little trick, and there are a bunch more tricks we're going to need to discover along the way?

Well, if in machine learning you're including deep neural networks and neural networks in general, I do think you can get there. Now, I think the fundamental limitation is not the posing of the problem as a neural network. It's the physical architecture of the computing elements we've built. So, I do believe you could simulate a consciousness within a standard Turing machine computer. However, it would be so inefficient that you would need two suns to power it, or something like that. So, I think that's really where we're at right now. It's not about the capabilities or a lack of understanding. I do believe the path we're on will get us there. There are a lot of bumps along the way. It's not going to happen tomorrow. It's at least a 50-year endeavor, in my mind. But it's not that what we have today is fundamentally broken; it's that what we have today is fundamentally way too inefficient.

So, whenever I come across some chatbot that doesn't purport to pass the Turing Test but purports to be able to answer questions, I ask the same question. And so far, zero. I've never gotten it. The question is: What's bigger, a nickel or the sun? You know instantly why that's a hard question for that chatbot. What's our practical path out, because I heard you say, “Look, give me two suns and I'll make you something that can answer that question.” What's our practical path out of that?

It's economic feasibility. This is what it comes down to. And this is true in nature too. You can't build a system that's so inefficient that it would need to eat all the other animals on the planet to work. It's not going to happen. So, nature evolved a very efficient system like our brain to underlie these computations. And I think we're getting there. We're at least five orders of magnitude away from the efficiency of the brain.

So, I think the path to getting that chatbot to be more efficient is to make something that is economically viable in a stepwise fashion. We can go from today to tomorrow to the next day and actually build something that makes sense for a market, that people will pay for and that doesn't take two suns to power. We're never going to build something that takes two suns to power. That will never actually occur. So, it must be along this path of economic viability, and that's why I love what I do, because I get to be part of that. We are building new architectures that are tremendously more efficient for these kinds of computations. That's what keeps me going every day and gets me up in the morning… being on that path to building an intelligence that could actually be realized in the real world.

So, the brain uses 20 watts of power, and your five orders of magnitude (10,000 times) means I get to, what is that, 200,000 watts.  But even with 200,000 watts, you couldn't do it, right?

No, it's actually 100,000. I said five orders, right? So, that would be 100,000 times 20 watts – so, yeah, two million watts… you're probably right. You probably can't. That's why I said at least five orders of magnitude. So, it's probably more. We're very far away still from being able to accomplish that in a system that's viable. No one's going to build a single system – well, I wouldn't say no one, because we're potentially doing some of these things already – but it's unlikely that people are going to build multiple systems out there that are megawatt-order systems to solve one task.
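
[Working the quoted numbers through as a quick check – the 20 watts and the five orders of magnitude are just the figures used in the exchange above, not measurements:]

    brain_watts = 20                       # rough power draw of a human brain, as quoted
    orders_of_magnitude = 5                # "at least five orders of magnitude" less efficient
    machine_watts = brain_watts * 10 ** orders_of_magnitude
    print(f"{machine_watts:,} W")          # 2,000,000 W, i.e. about two megawatts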

 

So, we do transfer learning really well. I could say to you, “Naveen, I want you to imagine a trout swimming in a river. I want you to imagine a trout in a laboratory. Are they the same temperature?” And you would probably say, “Probably not.” Are they the same color? “Maybe.” Do they smell the same? “Oh, definitely not.” You don't have first-person experience with that situation, but you can instantly answer all those. Do they weigh the same? “Of course.” What do you think we're doing there, and how do we duplicate that in computers?

Yes. I think the next generation of the path we're on is building a simulation of the world that we build up from experience. Then we interact with a learning system to learn more about that system. So, for instance, we actually built a physics model of the world. I was a motor control neuroscientist, and we have very detailed models of our arms and limbs and everything in our brain. This was built up over time. It's argued that when we sleep, we actually run through different scenarios against those simulations. And we have a reinforcement learning system that says, “Oh, this is a good outcome,” or “That was a bad outcome.” But I can run many, many more cycles in my brain than I could run in the real world, because energetically it's cheaper, and I can learn against those things.

And we're seeing that happen already today. OpenAI published a thing where they're learning the dynamics of Dota. The DeepMind guys published a reinforcement learning system that learns from nothing. It's not told anything about a game, and it can actually learn Go, it can learn chess, it can learn whatever… and actually beat a human by just playing against itself in a simulated world. It learns the dynamics of the world and is able to simulate different scenarios and learn from those scenarios. I think that's what our brain does very, very well, and that's the next generation of it.
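
[Not the Dota or Go systems themselves – a toy sketch of the self-play idea on the game of Nim, where a tabular learner improves purely by playing against itself and crediting each move with the eventual win or loss. The game, the value table, and the parameters are assumptions for illustration:]

    import random
    from collections import defaultdict

    Q = defaultdict(float)                 # value of (stones_left, stones_taken) for the mover
    N = defaultdict(int)

    def legal(stones):
        return range(1, min(3, stones) + 1)

    def choose(stones, eps=0.1):           # epsilon-greedy move selection
        if random.random() < eps:
            return random.choice(list(legal(stones)))
        return max(legal(stones), key=lambda a: Q[(stones, a)])

    for episode in range(50_000):          # both "players" share and improve the same table
        stones, player, history = 15, 0, []
        while stones > 0:
            a = choose(stones)
            history.append((player, stones, a))
            stones -= a
            player = 1 - player
        winner = history[-1][0]            # whoever took the last stone wins
        for p, s, a in history:            # Monte Carlo credit assignment from the outcome
            r = 1.0 if p == winner else -1.0
            N[(s, a)] += 1
            Q[(s, a)] += (r - Q[(s, a)]) / N[(s, a)]

    # Greedy policy learned purely from self-play; for most pile sizes it takes
    # enough stones to leave the opponent a multiple of 4.
    print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, 16)})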

That sounds like the logical step from your earlier comment that you thought minds had to be embodied…

Correct.

…that to some degree we need to embody these things. So, tell me about your day-to-day at Intel. What do you do? How many projects do you have going on? How do you decide what to work on? Do you do any pure science? What is it like there?

Well, the line between pure science and applied is pretty thin these days. I think we're also looking at that North Star of getting to an intelligent system, and we're figuring out ways that we can chip away at the problem. So, my group does do research. We interact also with Intel Labs, which does more far-flung research… five-to-ten-year-out research. My group does three-to-five-year-out research, and also straight development.

Then we have to translate that into the emerging needs we see within the next year or two, because the Intel products for the next year or two are the ones we're planning today. We also have to help define the tools that let more people use these things. Today, data scientists and AI researchers number a few thousand people around the world. If we can enable tools that allow a few million people to use these things, we'll make even faster progress. So a big part of what we want to do is build the best tools to get more people working on these problems. And then there's building the fundamental substrate that accelerates all of this – going in and chipping away at this problem.

I'll give you an idea of how far we've come just in the last couple of years. When I came to Intel two years ago, we were running on NVIDIA GPUs, which were six teraflops of computing power at about 250 watts. Next year we're going to have a chip, as a product, that is 50 or 60 times that in the same power envelope. We took an order and a half of magnitude out of it in just two years. It's kind of crazy, right? If we're chipping away at that five or six orders of magnitude that we talked about earlier, we're getting there. If we can take out an order of magnitude every few years, we'll actually be able to get there in 30 to 50 years.
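
[A rough back-of-the-envelope check of the figures quoted above – six teraflops at roughly 250 watts, and a “50 or 60 times” jump in the same power envelope; these are conversational figures, not official benchmarks:]

    import math

    old_tflops, watts = 6.0, 250.0                 # the GPU figures quoted above
    speedup = 55.0                                 # midpoint of "50 or 60 times", same power envelope
    old_eff = old_tflops / watts                   # ~0.024 TFLOPS per watt
    new_eff = old_eff * speedup                    # same watts, ~55x the throughput
    print(round(math.log10(new_eff / old_eff), 2)) # ~1.74, roughly "an order and a half" of magnitude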

And so, that's where my head really is, and we're looking at… “Training” is a big one, because that's the adaptive system. Then “Inference” is the front end of that, where we see massive scale. And so, I’ve got to look at it from the perspective of markets and economic viability as well.

It sounds like you've got a left-brain, right-brain job. You've got to get inspired in one half of your brain, and then listen if the cash register rings in the other half?

That's a good way to describe it really. We're very motivated by the creativity aspect of it. I think great engineers and scientists are actually artists. The best breakthroughs I've ever come up with, I've sort of thought about in the background of my mind, and you sort of look at a problem in a new way. It's a very creative process, and we have a lot of that going on right now. But then, we also have the “go get shit done” and “plan it out” project management… all these things must happen. It's all to get products out the door. But they work together. We get products out the door that will inform how we build the next one, to which we can apply that creative process. So, it's a continual loop of innovation.

You know, there have been times in the past of great innovation, and the people who were at the forefront of it had a keen sense of the moment. I'm thinking specifically about the Manhattan Project. They wrote about it, talked about it, and discussed it… You seem to have that same sort of sense about yourself, that you're cognizant of the place and time you’re at. Would you say that's true of your whole team and all the people there? Is it like, “Man, we're lucky to be alive. We're lucky to have been born the moment we were, to be right here at Intel, to be in the middle of this great new revolution…” Or is it more like, “Where are we going to get lunch today?”

I think there is definitely a guiding light there. That's why they do it. A lot of people like me get up in the morning because of that bigger goal. It's not necessarily what you're worrying about every day, but it is why you're doing it. I think that's the culture I wanted to instill in this group — We're changing the world. We're at the precipice of a huge, massive change in humanity. And I tell the team this all the time. “You're part of it. You're building the next generation of what humanity will be. This is not a small thing.”

So, I'm an optimist about the future. I grew up on science fiction as well. I believe the credo of Star Trek. You know, Gene Roddenberry said in the future there'll be no hunger and there'll be no greed and all the children will know how to read. I believe all of that. I think that we use these technologies to multiply what people are able to do. We use them to solve historically intractable problems. You can probably sense the BUT coming up.

Yes.

BUT… there are two areas I worry about, and I wonder what your answers to these are. The first is that the whole notion of privacy we've had in the past, we've kind of had it because of just so many people. No government could follow everybody and listen to every conversation. But now, of course, you can do text-to-speech on every phone conversation. Cameras can read lips now as well as a human.  So, every video camera is essentially a microphone. My devices track where I go. I mean, you can data model me and every word I say using the same tools that are created for very good purposes like finding cures for diseases. So, do you believe that the era of privacy is over, that it's just incompatible with the world where you put so much of yourself out there – either deliberately or not deliberately – in a form that can be collected, data mined and studied?

I don't think it's over. In fact, I'm a big proponent of people being able to control those things very precisely. I think we've overcorrected a little bit right now. I'm not going to name company names, but you already know who they are. I think they've taken a lot of liberties with people and preyed upon them not really understanding the value and consequences of sharing their data. I do not share that data unless I absolutely want to get a service for it. So, I think we need to build better technology to deal with these kinds of things. Things like homomorphic encryption and other technologies could actually get us to a point where we have control over the data that we have generated.

In addition to that, we also should have some kind of trust in the provenance of data, that when I'm looking at something it's really from the source I think it's from. We absolutely need to do this as a society or we risk losing I think the very basic pieces of what make us individuals. Now that again is a very now concept. Maybe in 200 years we'll feel differently, after machines and us interact in a much more fluid way. But today I think there is a distinct value in individuality, in credit for your work, because that's how our system works. So, we absolutely need to protect that. I'm a very, very big proponent of privacy, especially in AI systems.

So, on another topic, isn't it the case that the notion of explainability… because you keep saying “too complex, too complex, too complex…” that some conclusions AI comes to, we just won’t understand. If I went to Google [to advertise] pool repair or pool cleaning and asked, “Why am I number four and my competitor is number three?” They would rightly say, “We have no idea. How would we know out of 50 million pages why you are number four and they're number three?” If we insist on explainability, isn't there an inherent tradeoff in that?

Yeah, I think this whole notion of explainability will diminish with time. Systems are getting just more and more complicated. It's like you say, it's almost impossible already, and you know we don't do this with a human. When I ask a neurosurgeon who has been doing it for 20 years, “Hey, why did you use that stitch there?” or “Why did you make this decision in a split second?”, do I really care that I can get into their head and pull out the weights of their visual system and this, that and the other thing? No, I don't because I trust that system, that human has been trained sufficiently to make the right decision.

I think that's where we need to get to: more bounds around decisions, and “does this system tend to make positive and good decisions vs. not making good decisions?” I think that's where we really need to get, and we are getting there in certain areas. Like the visual tools they have in Google already, where you can do a photo search and stuff like that… Yes, there are biases and things like that, which they try to fix as quickly as they can. But we don't necessarily have to ask, “Why did you categorize this image this way?” It's OK. It doesn't matter. It is what it is. And we'll say, “Hey, that's probably not right. Go back and fix it.” I don't need the full level of explainability that I had with a regression-based system.

So, I agree with you. Do you know how your credit score is calculated?

No… Exactly.

Nobody does. Right? It’s a completely opaque number, but it's somehow reasonably accurate.

Exactly. People will get used to it.

Right. Well, I feel like we're running out of time here. What did I not ask you about that you'd like to tell us about, something you're working on, something we should know about, an issue I didn't raise, just anything you want to close with?

Well, I think there are some interesting trends we're seeing today in terms of distributed computing. Computing for the last 50 years has really been about making an application work on a single chip. I think going forward the application is distributed, and it's a really exciting time, because I don't think everybody knows that yet. A lot of people are thinking about a single chip. You see it today in AI, like, “Oh, an NVIDIA GPU does this...” Do we really care what one GPU does? Not really. It's more about making a distributed application work and bringing the complexity of that system up to a point where it can start doing interesting things.

And so, I think that's really where we're looking going forward. And we're going to see products coming out around that.  So, I'm very excited to see those come out, go into the real world, be deployed and start changing the face of what we call a computer and computing. It's going to be a really exciting time the next five years.

Well, Naveen, you're a fascinating guy. We could go on for another hour. Tell me, how do people follow you? How do people keep up with what you're thinking, about what's going on in that brain of yours?

Twitter is a good place – @NaveenGRao. Also, we are publishing blogs. I'm starting a blog series that I'm personally writing, and so I'll be promoting that on Twitter as well.

All right. We're linked to all your stuff, and thank you for your time.

Thank you. Wonderful conversation.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.