Episode 58: A Conversation with Chris Eliasmith

In this episode, Byron and Chris talk about the brain, the mind, and emergence.


Guest

Dr. Chris Eliasmith is co-CEO of Applied Brain Research, Inc. and director of the Centre for Theoretical Neuroscience at the University of Waterloo. Professor Eliasmith uses engineering, mathematics and computer modelling to study brain processes that give rise to behaviour. His lab developed the world's largest functional brain model, Spaun, whose 2.5 million simulated neurons provide insights into the complexities of thought and action. A professor of philosophy and engineering, Dr. Eliasmith holds a Canada Research Chair in Theoretical Neuroscience. He has authored or coauthored two books and over 90 publications in philosophy, psychology, neuroscience, computer science, and engineering. In 2015, he won the prestigious NSERC Polanyi Award. He has also co-hosted a Discovery Channel television show on emerging technologies.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today our guest is Chris Eliasmith. He holds the Canada Research Chair in Theoretical Neuroscience. He’s a professor with, get this, a joint appointment in Philosophy and Systems Design Engineering and, if that’s not enough, a cross-appointment to the Computer Science department at the University of Waterloo. He is the Director of the Centre for Theoretical Neuroscience, and he was awarded the NSERC Polanyi Award for his work developing a computer model of the human brain. Welcome to the show, Chris!

Chris Eliasmith: Thank you very much. It’s great to be here.

So, what is intelligence?

That’s a tricky question, but one that I know you always like to start with. I think intelligence—I’m teaching a course on it this term, so I’ve been thinking about it a lot recently. It strikes me as the deployment of a set of skills that allow us to accomplish goals in a very wide variety of circumstances. It’s one of those things that, I think, definitely comes in degrees, but we can think of some very stereotypical examples of the kinds of skills that seem to be important for intelligence, and these include things like abstract reasoning, planning, working with symbolic structures, and, of course, learning. I also think it’s clear that we generally don’t consider things to be intelligent unless they’re highly robust and can deal with lots of uncertainty. Some interesting notions of creativity also pop up when we think about what counts as intelligent or not, and it definitely depends more on how we manipulate knowledge than on the knowledge we happen to have at that particular point in time.

Well, you said I like to start with that, but you were actually the first person in 56 episodes I asked that question to. I asked everybody else what artificial intelligence is, but we really have to start with intelligence. In what you just said, it sounded like there was a functional definition, like it is skills, but it’s also creativity. It’s also dealing with uncertainty. Let’s start with the most primitive thing which would be a white blood cell that can detect and kill an invading germ. Is that intelligent? I mean it’s got that skill.

I think it’s interesting that you bring that example up, because people are actually now talking about bacterial intelligence and plant intelligence. They’re definitely attempting to use the word in ways that I’m not especially comfortable with, largely because I think what you’re pointing to in these instances are sort of complex and sophisticated interactions with the world. But at the same time, I think the notions of intelligence that we’re more comfortable with are ones that deal with more cognitive kinds of behaviors, generally more abstract kinds of behaviors. The sort of degree of complexity in that kind of dealing with the world is far beyond I think what you find in things like blood cells and bacteria. Nevertheless, we can always put these things on a continuum and decide to use words in whichever particular ways we find useful. I think I’d like to restrict it to these sort of higher order kinds of complex interactions we see with…

I’m with you on that. So let me ask a different question: How is human intelligence unique in the world, as far as we know? What is different about human intelligence?

There are a couple of standard answers, I think, but even though they’re standard, I think they still capture some sort of essential insights. One of the most unique things about human intelligence is our ability to use abstract representations. We create them all the time. The most ubiquitous examples, of course, are language, where we’re just making sounds, but we can use it to refer to things in the world. We can use it to refer to classes of things in the world. We can use it to refer to things that are not in the world. We can exploit these representations to coordinate very complex social behaviors, including things like technological development as well as political systems and so on. So that sort of level of complex behavior that’s coordinated by abstract symbols is something that you just do not find in any other species on the planet. I think that’s one standard answer which I like.

The other one is that the amount of mental flexibility that humans display seems to outpace most other kinds of creatures that we see around us. This is basically just our ability to learn. One reason that people are in every single climate on the planet and able to survive in all those climates is because we can learn and adapt to unexpected circumstances. Sometimes it’s not because of abstract social reasoning or social skills or abstract language, but rather just because of our ability to develop solutions to problems which could be requiring spatial reasoning or other kinds of reasoning which aren’t necessarily guided by language.

I read, the other day, a really interesting thing, which was the only animal that will look in the direction you point is a dog, which sounds to me—I don’t know, it may be meaningless—but it sounds to me like a) we probably selected for that, right? The dog that when you say, “Go get him!” and it actually looks over there, we’d say that’s a good dog. But is there anything abstract in that, in that I point at something and then the animal then turns and looks at it?

I don’t think there’s anything especially abstract. To me, that’s an interesting kind of social coordination. It’s not the kind of abstractness I was talking about with language, I don’t think.

Okay. Do you think Gallup’s red dot test, the one where the animal tries to wipe the dot off its forehead, is a test that shows intelligence, like the creature understands what a mirror is? “Ah, that is me in the mirror?” What do you think’s going on there?

I think that is definitely an interesting test. I’m not sure how directly it’s getting at intelligence. That seems to be something more related to self-representation. Self-representation is likely something that matters for, again, social coordination, so being able to distinguish yourself from others. I think, often, more intelligent animals tend to be more social animals, likely because social interactions are so incredibly sophisticated. So you see this kind of thing definitely happening in dolphins, which are one of the animals that can pass the red dot test. You also see animals like dogs we consider generally pretty intelligent, again, because they’re very social, and that might be why they’re good at reacting to things like pointing and so on.

But it’s difficult to say that recognition in a mirror or some simple task like that is really going to let us identify something as being intelligent or not intelligent. I think the notion of intelligence is generally just much broader, and it really has to do with the set of skills—I’ll go back to my definition—the set of skills that we can bring to bear and the wide variety of circumstances in which we can use them to successfully solve problems. So when we see dolphins doing this kind of thing - they take sponges and put them on their noses to protect them from spiky animals when they’re searching the seabed - that’s an interesting kind of intelligence, because they use their understanding of their environment to solve a particular problem. They’ve also done things like killing spiny urchins to poke eels to get them out of crevices. Given the variety of problems that they’ve solved and the interesting and creative ways they’ve done it, we want to call dolphins intelligent. I don’t think it’s merely seeing a dot in a mirror that lets us know, “Ah! They’ve got the intelligence part of the brain.” I think it’s really a more comprehensive set of skills.

Fair enough. I’ll ask one more animal kind of question before we dive into the specifics. I think the most fascinating intelligent animal is the octopus, because they have these really short lives and they’re solitary animals. It’s interesting—I read that there’s a Hawaiian myth that before the world that we know existed, there was other life here and it all died, except the octopus. It’s this one thing that’s left over from this primordial ancient time, and it’s a different sort of thing than us. Do you have any thoughts on the octopus?

Yeah, they’re super cool. I completely agree. I haven’t studied octopus intelligence in any degree. I was just reading about dolphins which is why I had those ones at hand, but yeah, I honestly don’t have a strong opinion, largely because I just don’t know much.

All right, let’s start with the human brain. You know as much about the thing as anybody, so you tell me: What do we know about the brain and what do we not know?

Wow. That could take a very long time on both topics. I’d like to say this actually because it might sound a little controversial: I think we know more about the brain than even neuroscientists tend to let on. The reason I say that is, I’ve read many books about the brain and people expressing their opinions about how parts of the brain work and so on, where typically the very first sentiment that readers run into, especially when it’s geared to a public audience, is that the brain is incredibly mysterious and we know almost nothing about it. But I think that does a little bit of a disservice to all kinds of interesting advances that have happened recently, both in neuroscience on the experimental side but also on the theoretical side.

I think we have a whole bunch of new tools and new ways of understanding what brains are representing, how they’re organized, how computation can be performed in brains, that really has put us in a position where we can understand these things much better than we have before. If I was going to say this in the most controversial way I could, it would be that it’s not obvious to me that we need theoretical advances in order to build a very sophisticated understanding of how the brain works. Instead, what we need are more engineering advances. That is, we need to apply the methods and theories that we have right now, at scale, to try to see if we can build them up to the point where they’re actually able to tackle something as sophisticated as the human brain. That’s just going to take a lot of time and effort and computation and model building and all kinds of things. But it’s not obvious to me at least, that we’re fundamentally missing some theoretical insight that’s going to let us understand how the brain functions.

Okay, let’s talk about that. I know that this is like me shooting free throws with Michael Jordan at this point, but I’m going to give my best shot here. So let me ask the first and simple question: How is a memory encoded in the brain? If I think, “What’s the earliest birthday I remember? Ah, I remember my fifth birthday. Oh, yes, we did this…” How is that encoded in the brain, and how is that retrieved?

Our best understanding of how memories are encoded is that it’s in the connections between the neurons. There are proteins that get embedded into the cell walls, and the amount of protein that gets embedded determines how strongly two neurons are connected. If you change those connection strengths, then you change the way that information is treated as it passes through those connections. So one way to encode long-term memories of the kind you’re talking about is to take a bunch of neurons and change their connection strengths such that the next time you’re in the representational neighborhood you were in when you encoded that information, you can retrieve it and fill in lots of background information about exactly what happened on your fifth birthday, or whatever the long-term memory is that you’re trying to work with. There’s a process of encoding, which seems to go through the hippocampus, where you do very rapid encoding of the day’s events, and then over time that gets transferred up into association cortices, which is where it becomes a long-term memory. As for how you retrieve it: essentially, what it seems like you do is partially construct a context that lets you retrieve the remainder of that information. You basically prompt yourself with something like “my fifth birthday,” which goes into the hippocampus and is used as an address into that longer-term memory. You then go to the longer-term memory, retrieve explicit information about that date, reconstruct plausible information about that date, and report that as the thing that you remember.
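
A toy way to see the idea of "connection strengths as memory" and retrieval from a partial cue is a classic Hopfield-style associative memory. This is purely an illustrative sketch in NumPy under simplifying assumptions (binary patterns, an outer-product learning rule, a synchronous update loop), not a model of actual hippocampal circuitry.

```python
import numpy as np

# Minimal sketch of content-addressable memory: store patterns in a weight
# matrix via a Hebbian-style outer-product rule, then retrieve a full pattern
# from a partial cue. Illustrative only.

rng = np.random.default_rng(0)
n = 200                                              # number of model "neurons"
patterns = np.sign(rng.standard_normal((3, n)))      # three stored memories (+1/-1)

# Encoding: superimpose outer products of the patterns into the weights.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0.0)

# Retrieval: start from a degraded cue (a "partial context") and let the
# network settle; each update pulls the state toward the nearest stored pattern.
cue = patterns[0].copy()
cue[: n // 2] = np.sign(rng.standard_normal(n // 2))  # corrupt half the cue

state = cue
for _ in range(10):
    state = np.sign(W @ state)

overlap = np.dot(state, patterns[0]) / n
print(f"overlap with stored memory after settling: {overlap:.2f}")  # close to 1.0
```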

We’re pretty good at measuring activity in the brain, like where it’s happening and where things are firing and where they’re not, but one level deeper, it sounds like you’re saying it’s just like music being the space between the notes. It’s almost a Zen sort of thing you’re saying. We can’t read a thought, nor write a thought, nor algorithmically express how a thought is stored, nor do any of that kind of stuff. Nor do we even know how to. Is that not correct? Have people been holding out information on me all this time?

Well, so we can read and write thoughts in our models. We can’t necessarily read or write thoughts in the human brain, but that’s because, in fact, we don’t have very good tools for being able to manipulate those proteins. I think the kind of tools that you’re talking about, fMRI and things, they’re making very high-level amalgamations across millions and millions of neurons and showing which parts of the brain are most active during particular tasks. They’re not giving you access to what’s going on in all of the billions of connections between all these different parts of the system while you’re doing things like retrieving a memory. Right now we’re getting very large-scale views of these systems, and we don’t have ways of manipulating them such that we could actually go in and try to change all this kind of thing.

But isn’t it the case that all of a sudden they’re like, “Oh, my gosh, there are glial cells. There are just as many of those as neurons, and they seem to do their own thinking.” We still have these kinds of monumental discoveries that seem to be almost transformative, which doesn’t seem to indicate a system we understand to a high degree.

Well, in any science, people get excited about all kinds of things. It doesn’t necessarily mean that they’re central to the question of interest. In the case of glial cells, there are definitely lots of things we don’t know, but there hasn’t been a really strong connection between them and things like cognitive function or long-term memory, or even learning; learning is where people try to make the biggest deal of them. But suppose we give a description of learning where we say, “We’ve got all these neurons, they have connections between them, and learning is changing the weights.” We might not be able to provide a detailed mechanism for exactly how the weights are changed, and maybe glial cells play a role in that, but we would still have a very deep understanding of how the brain works if we can say, “Well, weights are changed, and here’s a mathematical description of how they’re changed,” even though we don’t have a mechanistic description of that yet. There’s all kinds of scientific knowledge to be had about materials and chemical interactions and so on without going down to the quantum level and being able to spell out the quantum mechanisms. That doesn’t mean we can’t have very deep understandings from slightly more abstract characterizations than what you might need when you’re talking about things like glial cells and so on.
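
For a sense of what such a "mathematical description of how weights are changed" can look like, here is a minimal sketch of one textbook learning rule, Oja's Hebbian rule, in Python. It is illustrative only: it describes how connection strengths change as a function of activity while staying silent on the biochemical machinery (proteins, glia, and so on) that implements the change, and it is not being claimed as Eliasmith's own model.

```python
import numpy as np

# Illustrative only: one standard mathematical description of weight change,
# a Hebbian update with a decay term (Oja's rule). It says nothing about the
# biological mechanism that carries it out.

rng = np.random.default_rng(1)
n_inputs = 50
w = rng.standard_normal(n_inputs) * 0.01   # connection strengths
eta = 0.01                                 # learning rate

for _ in range(1000):
    x = rng.standard_normal(n_inputs)      # presynaptic activity
    y = w @ x                              # postsynaptic activity
    w += eta * y * (x - y * w)             # Oja's rule: Hebbian growth + decay

print("weight norm after learning:", np.linalg.norm(w))  # typically approaches 1.0
```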

Fair enough. You made the comment that we’re in every single climate on the planet and that that may be indicative of our intelligence. I will also bring up the lowly nematode worm, which is also in every climate on the planet. As you know, I’ll just set this up for the listener: Seventy percent of all animals on the planet are nematode worms, and they’re these little bitty things that are as long as a hair is wide, and one of the very first things whose genome we sequenced. They have 302 neurons in their brain, two of which don’t seem to be connected to anything else. Anyway, for twenty years, people have been trying to model those 300 neurons, which only have 10,000 connections between them—about as many neurons as a bowl of cereal—to try to build a digital nematode worm. And as I understand it, and you’re going to know this far better than me, but as I understand it, people in the project still say it may not be possible. Tell me, is that largely an accurate description of the OpenWorm project, and shouldn’t we be further along if we understood the basics of how a brain works?

This is always an interesting example to bring up, and I think it actually might tell us something quite interesting about brains, how they function, and so on - what we’re trying to explain, what is our target of explanation. In the case of the nematode worm, you really have to ask yourself exactly what are you trying to explain or build. We have robots that can move around and interact with their environments and do all kinds of things that are similarly described at a behavioral level to the kind of the thing that you would associate with a nematode worm.

Now it’s interesting that when we go and try to reconstruct every single cell in the nematode and connect them all together, the system, when we run it, doesn’t seem to do anything. But we haven’t really reconstructed the environment of the nematode. It’s not clear that we are getting the inputs and outputs of the nematode correct in our digital simulations. It’s also not really clear that we have all of the knowledge we actually need in order to reconstruct the nematode, namely, the precise strengths of all the connections. We know what all the connections are and we can label them, but the dynamics of the system and how it interacts with its environment are also going to depend on exactly how the cells are talking to one another and what the strengths between them are. So if we want to say that the mechanisms of the nematode worm are spelled out in complete detail such that they can reproduce the behavior, it doesn’t seem to be the case that that’s true. But it is the case that we can come up with very similar kinds of mechanisms that perform similar kinds of behaviors. Again, I’m not sure which of these you want to count as an understanding or explanation.

The nematode worm displays certain behaviors that, taken together, have made it arguably the most successful animal on the planet. It can go towards light, it can find food, it can mate, it can move. It can do all of these things on its own volition, with its 302 little neurons chugging away. So I think the question one wants to get at is: with such a tiny little problem, can’t you try all manner of weights and all manner of relationships and model it twelve ways to Sunday? Isn’t it interesting that we cannot make more progress towards that? Or are you saying, “No, it’s not all that interesting. There’s some stuff we don’t know. Big deal. That doesn’t hold a candle to what we do know.” Just because you don’t know the last gear to put in a watch doesn’t mean you don’t know how the watch works. What is your take on why we can’t seemingly reproduce, from the ground up, the behavior of the nematode worm, provided that we have this basic idea of how a brain works?

I think there are a couple of things getting wound together here. One is that if we’re talking about intelligence, then mere success at being all over the place isn’t what I was proposing as a measure of intelligence. I was pointing out that humans are able to use their intellectual capacity - one reason we think they’re intelligent is that they can use it to solve difficult problems, however we characterize them, across all these sorts of contexts. Being highly survivable, that’s also a good thing, but it’s clearly not an indicator of intelligence. Bacteria are also highly survivable, as are viruses and so on. So I don’t think we want to confuse that with claims about intelligence.

Absolutely.

I don’t think you were trying to; you’re just saying you want to get to the point of, “Well, just a minute. More importantly, even this simple brain-like system, we’re not able to…” And the exact thing that we’re not able to do is, I think, what I was trying to emphasize in my last answer. You really want to be careful about exactly what the target of explanation is. So when people say that we fail to reproduce a nematode, it’s not always clear exactly what it is that we’re precisely failing to do.

Presumably, it’s creating—

So if it’s behavioral, then I think we’re actually fairly successful in the sense that we can build systems that imitate or generate many of the kinds of behaviors that nematodes do.

But unrelated to how the nematode does it, right?

You could do it either way. Doing something like using a neural network kind of thing or using a non-neural network kind of thing - essentially, the behaviors are typically simple enough that you can do them either way. Now, whether we can build something that’s as robust as a nematode, that might not be the case, but that might be more of a materials problem.

If anyone were to ask me, “What are the biggest restrictions for AI right now?”, especially in applied AI and robotics and so on, where you want to interact with an environment, I think it often comes down to things like building really sophisticated bodies. Nematodes materially outdo us there: they’re very microscopic things with all kinds of inputs and outputs, the kinds of things that we can’t really build yet on the physical device side. So I think we can sometimes wonder exactly what it is that we’re worried about in the failure of the nematode project, to the extent that it does fail, I guess.

I guess the idea is that if I had found another worm that was successful, and it had three neurons, and it exhibited complex behavior, and it could do things, and then we say let’s model three neurons in a computer just to show we know how it works. And then you say, “Well, we can’t do that either. We can build systems that can duplicate that behavior, but we still don’t know how these three neurons interact to produce that behavior.”

Yeah, the other point that I want to make is that it’s not clear to me that the neurons matter, really, in the same way that they matter for a large-scale brain.

Interesting.

By that, I mean you get very sophisticated behaviors out of bacteria and all kinds of other things as well, where they have no neurons. As soon as you’re dealing with a physical system or a nonlinear dynamical system, which is exactly what you’re dealing with in all these instances, it gets very difficult to explain anything, regardless of how many neurons it does or doesn’t have. It also depends on what you count as a good explanation or not a good explanation. So I think if we move to the definition of intelligence I was describing before, where we’re talking about sets of skills, the ability to solve problems, dealing with abstract representations and so on, then we can build systems that do those things, and we do understand, to some extent, how mammalian brains seem to deal with those kinds of problems.

Way back in the day, when atoms were first theorized, they said these are the basic units of the universe. And then all of the sudden, people bust those open, and all kinds of stuff comes swimming out of those. It seems like every time we look at something, we can bust it down, and there’s all kinds of other stuff going on. And there are those who claim, who believe that a neuron, a human brain neuron, is actually as complicated as a supercomputer. That what’s happening is happening down at the Planck scale. It is an inexplicably complicated mechanism that in its singleness is unutterably complex, but take a hundred billion of them and put them together, and it’s maybe beyond us. What do you think of that?

Yeah, I think we’re running into the same question about targets of explanation and levels of description. I think we can look at a single neuron as a nonlinear dynamical system, and we can say it’s incredibly complicated because it is. Like any cell, right? Any single cell that you look at is incredibly complicated. But this doesn’t mean that our explanations that we find satisfactory for particular targets like intelligence need to refer to those low-level properties. This is sort of a thing that typically comes up in philosophy of science, where people talk about: “If I make certain kinds of abstractions, can I still generate good explanations?” And I think through the history of science, we generally find out that the answer is yes. We don’t have to talk about every molecule in a gas in order to have the gas law.

We can find a level of abstraction which gives us all kinds of useful explanations for particular targets which don’t necessarily refer to the smallest part. So I would say the same thing is probably true of the brain, and there are always going to be disagreements about what the right level of analysis is. I think in the case of the brain, it depends on your question. If we’re asking high-level questions about intelligence and large-scale systems and so on, then maybe the things that matter most are the fact that we have a neuron that has a threshold and maybe some first-order dynamics. With that, we can actually generate explanations which we find quite satisfactory as being able to help us understand how brains learn, how they can represent abstract structure, how they can deal with language, how they can do spatial reasoning, how they can control a complicated motor system. So if we can answer all of those questions in a way that we find satisfactory and can reproduce in our devices, then we might think, yeah, we have a really good understanding of the brain. That doesn’t mean we understand how the same thing works when you make each one of my simple abstractions as complicated as the real thing. But I think this is a kind of problem that shows up consistently throughout science. For neuroscience to run into it shouldn’t really be a surprise in some ways.

Let’s move from the brain to the mind. I will define my term here. The mind is a concept we use all the time, like “I’m of two minds about that.” Dr. McCoy was always saying, “Are you out of your Vulcan mind?” If you use the word brain, like, “I’m of two brains about this,” all of a sudden it doesn’t feel quite the same, and so I think a definition that everybody might find tolerable is: it’s the set of functions, a set of abilities of the brain, that seem beyond what an organ should be able to do. You don’t expect your liver to have a sense of humor, or to be indignant, or to be creative, but your brain does. And whatever those aspects are that, again, just seem like, “Hmm, should it be able to do that?”, I call that the mind. What do you think that is? Would you say the mind is just stuff about the brain that’s really cool and so we call it the mind?

Yeah, I think basically I’m happy to say that the mind is a product of the brain, I guess. I’m also happy with the identity myself. Of course, linguistically, it does sound strange to say, “I’m of two brains.” But sort of technically speaking, from a scientific perspective, I think that all of the things we attribute to minds are things that we would explain as being the consequence of brain function. I’m happy to think of the mind as maybe a fairly high-level abstract way of describing brain functions, and I think it’s really interesting to try to figure out exactly how various things that we attribute to the mind can come out of brain function.

But certainly the mind, I assume, you believe is somehow emergent in the sense that none of your neurons individually have a sense of humor, but collectively you have a sense of humor, so that’s an emergent property of the brain, isn’t it?

Sure, yeah.

Okay, and so there are two kinds of emergence. There’s the one that you can understand, you can reconstruct it backwards and see, “Oh, that’s how it happens.” And then there’s strong emergence, which says, “No, you actually can’t put these pieces together and explain why they produce that.” It’s inexplicable with all known laws of physics. So do you think the mind - your sense of humor, we’ll just use that. Is your sense of humor a weak emergent phenomenon or do you even think strong emergence exists?

Yeah, I would be a weak emergentist, so I don’t think it’s the case that there are features of minds that we wouldn’t be able to explain by referring to brain function, essentially. I think we might have to maybe extend our - again, so this is maybe going in a different direction than you’re suggesting, but there’s all kinds of interesting, complicated behaviors that people exhibit, which are things that we probably want to talk about social interactions or things that are outside of the body in some ways in order to give a good explanation. But I think the ultimate contribution of the individual to those explanations is going to be through the brain.

All right, fair enough. Before we get onto your work, which I want to save plenty of time for, because that’s like a whole episode in and of itself, I would be remiss if I didn’t talk about consciousness. So to set that one up: consciousness is something whose definition we all agree on. It’s subjective experience. It’s the difference between feeling heat and being able to measure temperature. It’s the taste of pineapple, all of that, all that stuff. It’s qualia. And yet it doesn’t seem like a question we know how to pose scientifically, let alone answer scientifically: how it is that organic or inorganic matter can experience the universe. So do you agree with that, and what’s kind of your shtick on consciousness?

Yeah, my shtick on consciousness is usually that I don’t talk about it. That’s largely because I agree with the philosophers who view it as a fairly confused subject matter. I don’t know if you’re familiar with Dan Dennett’s typical take on these kinds of things?

Absolutely.

Of course, yeah. I tend to be—

He says it doesn’t really exist. It’s just brain function, and there’s nothing particularly mysterious going on. There isn’t really a hard problem, never existed.

Yeah, I mean I guess I don’t think that he doesn’t think consciousness exists, because that would obviously be a preposterous claim. I mean we all realize we have it…

You’re right. He says there’s nothing outside of normal brain function that requires an additional explanation beyond simple brain function.

Yeah, he’s challenging the idea that when you identify this thing that we came up with a word for, qualia, that that thereby demands some special explanation or it’s really a thing that you can actually ask intelligible questions about and so on, which is, I think, the same way as what you said. That you’re not really asking for more than what you’re going to get when you get a complete explanation of the physical part of the brain function.

Yeah, let me ask another different version of the question, then, which is: we understand how you could hook up a thermometer to a computer, and then you could have an MP3 play “Ahh!” when the thermometer gets 500 degrees. We understand the computer could sense temperature and give a response to that. In your view, can a computer feel pain?

So I don’t think it would feel pain the same way people do, because it would have different kinds of experience, but I also think that we’re not in a position to imagine the complexity of a system that it might take to really respond to these kinds of stimuli as pain, the way that we respond to the same stimuli. Basically, what I’m challenging is the idea that when we imagine hooking up a thermometer to a computer and it sort of yelling “Ah!”, that couldn’t actually be pain. That incredulity is really just our inability to understand exactly how to properly hook up a system so that it is actually responding to that as pain.

But I can imagine an alien made of ice crystals, that lives on some ice planet, and when the sun is too hot, it’s “Ahh!” And I can imagine this alien hurting as its skin is melting off of it. So it’s not a lack of imagination that something very different than an animal as we know it can feel pain. I think the question is how can a computer go from sensing something to feeling something. What is the juice? What’s the missing element in that?

I mean, it goes the same way that inorganic chemistry and organic chemistry goes from things that don’t sense things to things that do. So I don’t think there’s any more of a mystery imagining silicon - I mean you just imagined an ice creature that screams in pain, so I don’t understand why it’s harder to imagine a silicon creature that has the same experience.

Right. I mean I guess unless you think that there’s something inherent in life that is emergent in a way that a machine cannot? I don’t know.

Yeah, so you know there are people like John Searle who come to those conclusions.

Of course.

But I would definitely disagree with that, yeah.

Where do you think he - he’s so great. He’s like, “The calculator’s job is not to be conscious! It’s not his job to be intelligent!” Where do you part company with him?

Where do I part company with him, yeah. So I basically part company with his final conclusion. He basically finds himself thrust into a corner where the only thing that he has left when he’s trying to explain where understanding comes from is basic biology. He denies all of the responses to his Chinese room to the point where there’s nothing left except for him to say, “Well, biology can fundamentally understand high-level complex abstract concepts, but nonbiological systems can’t.” And to me, that kind of mystery just should make you realize that you’ve just sort of disproven your original starting thesis. Now the question really comes down to, I think, which of the responses to his Chinese room thought experiment is really the best one. So I have a favorite, which is -

Tell me your favorite, go ahead.

It’s something like the systems reply, probably coupled with a little bit of learning, where you actually allow the system to rewrite its own internal manual, and it can do that based on all the kinds of characters and things that it’s interacting with in the world. So basically, if you make the computer as sophisticated as what we think minds really are, or animals really are, then I don’t understand why you wouldn’t attribute the same kind of understanding to the computer system as you do to animals.

So a librarian’s like changing the books, and as the librarian changes the books, it’s learning and it’s growing and it’s fancying it’s alive, and eventually it could be conscious. Kind of?

I mean these things also depend on speed, so whenever you map it onto… So you’re thinking more of the Chinese nation thought experiment. I think, again, you begin to push yourself outside of the kinds of functional properties that matter for things that we would like to call intelligent. Basically, libraries with librarians inside just can’t react fast enough. They can’t solve the same kinds of problems that we expect intelligent systems to solve and so on.

All right, Chris. Well, you have been so patient taking me all the way from the nematode worm to consciousness. Tell me about neuromorphic computing. Start with: What is it?

Neuromorphic computing is a kind of computation that is basically trying to figure out what the fundamental principles of neurocomputation are and reproduce those in silicon. The idea here, as you have sort of demonstrated, is, I spend a lot of time thinking about high-level concepts like intelligence and figuring out how brains could give rise to those sorts of things. In the process of constructing lots of models and running them on computers, because this is how we like to build brain models in my lab, it becomes pretty obvious pretty quickly that the kinds of computational tools we’re working with are really not designed to run the kinds of algorithms that brains run natively. There is a whole area of engineering, essentially, which is neuromorphic engineering, where people have really been pushing the envelope and trying to understand new forms of computation which aren’t necessarily von Neumann, they’re not necessarily digital, and they’ve been trying to push it in directions that are inspired by fundamental properties of neurocomputation. We’ve been working with several groups that do this kind of thing, and I think it’s super exciting to begin to think about not only the ability to run much bigger models, more sophisticated models of neurofunction at much larger scales, but in fact there’s all kinds of interesting industrial applications as well.

Such as? Give me some specifics, some use cases, some successes, some failures, some challenges. Where are we at with the science?

Sure, yeah. I’m just trying to think about where to begin exactly. You know, my lab built this really big brain model called Spaun, and after we had done that, we were approached by people who were interested in this kind of thing and wondering, “Does it have commercial applications?” Ultimately, myself and the entire team that was on Spaun decided that it did, and so we started this company called Applied Brain Research, where, of course, we’re doing what the company title says, and that is we’re applying the brain research that we had done. It’s been interesting because I think it’s actually pushed the research in interesting directions in the lab, trying to make these computational ideas actually practically applicable.

You have to really run something in the world, solve a real problem, deal with all of the complexities of the world which aren’t necessarily in your pristine simulation. And the place where this becomes really clearly evident is in robotics. We’ve been looking at a bunch of robotics applications, showing how you can learn things that you don’t know about your own motor system on the fly in order to actually improve your performance, in ways that are highly inspired by the brain. This all comes back to neuromorphic computing because, again, to do this with the kinds of computers we have - you can do it with them, but they’re using huge amounts of energy to do this kind of thing, whereas if you have a neuromorphic computer, you can do the same thing with tens to hundreds of times less power.

Again, if you want to build a really big, complicated robot, then you need to use hundreds or thousands of times less power than what a current computer would use. When we sat down and thought, “Well, let’s say we build something at the scale of a human brain. It’s got a hundred billion neurons on it. How much GPU power do we need to simulate a hundred thousand neurons? Let’s multiply that by big numbers,” we find out that we need power plants’ worth of energy, gigawatts of power, to be running a human-sized brain. That’s obviously just not going to be feasible if we really want to build one of these kinds of systems. Neuromorphic computing is a way of getting past that really quickly and saying we can build these really big, sophisticated systems and actually run them on reasonable power budgets. Which, again, is not the kind of thing that really occurred to me when I was looking at brain models, but brains are incredibly power efficient. They run on 20 watts, and they do far more sophisticated things than our computers running on hundreds or thousands of watts. I think it’s really interesting to figure out what it is about the way brains are designed that we can port over to devices, and build devices that can do all kinds of interesting AI kinds of things. Not only controlling robots, but doing speech recognition, vision, and all the other sorts of things that AI is known for, or neural networks, I guess, in particular.
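
The scaling argument above can be made concrete with a back-of-envelope calculation. The numbers below are assumptions chosen for illustration only (for instance, that a single ~300-watt GPU can simulate roughly 100,000 spiking neurons in real time); they are not figures from Eliasmith's lab, but they land in the same power-plant-scale range he describes.

```python
# Back-of-envelope sketch of the scaling argument. All numbers are
# illustrative assumptions, not measurements.

watts_per_gpu = 300.0          # assumed GPU power draw
neurons_per_gpu = 1e5          # assumed neurons simulated in real time per GPU
human_brain_neurons = 1e11     # ~100 billion neurons

gpus_needed = human_brain_neurons / neurons_per_gpu
total_watts = gpus_needed * watts_per_gpu

print(f"GPUs needed: {gpus_needed:,.0f}")            # 1,000,000
print(f"Total power: {total_watts / 1e6:,.0f} MW")   # ~300 MW, power-plant scale

brain_watts = 20.0
print(f"Ratio to a 20 W brain: {total_watts / brain_watts:,.0f}x")  # ~15,000,000x
```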

We’re a hundred watts. Like you said, 20 for the brain and 80 for the body. Presumably, would you guess that that’s a lower limit of - like you’re not going to be able to beat that by an order of magnitude or maybe evolution would’ve figured out a way to do it already? Or—

Yeah, definitely. If we could get anywhere near the brain, we’d be happy.

Sure.

I think we’re still maybe three orders of magnitude away from a brain, but that’s still better than being six or nine orders of magnitude, which I think current computers are.

Yeah, and think about it. You’d just be able to open a door and throw a Big Mac in, and that thing would run for three more hours. That’s all it needs.

Well, we might plug it in. Take batteries.

I don’t know. Keep going with that. The biggest supercomputer in the world uses 20 million watts-ish, and we use twenty.

Right.

I guess because we’re so parallel, we’re so concentrated, we’re so low-power, all of the electrical impulses individually are very tiny. What’s our trick?

That’s exactly the kind of thing people have been wondering about. What aspects of the structure of the human brain are really the ones that buy us this kind of improvement in performance and efficiency? While doing that particular set of algorithms, right? I think that’s always one thing you want to keep in mind is that when you build these neuromorphic computers, they’re really good for some algorithms and not for others. So they’re not going to be great for super high-speed arithmetic or very large database storage and lookup, but they will be good for things like recognizing images, controlling arms and bodies, learning on the fly. All the kinds of things that we want our AIs to do.

But getting back to your question - What are those sort of fundamental things that are important to the brain that we capture with neuromorphics? I tend to think that there are three, at least. One is massive parallelism, which you mentioned, and that is that we often solve problems not by running with really high clock speeds and doing things super-fast, but doing them much, much slower, which can actually take way less energy, but doing them massively parallel. A lot of these chips, they don’t have one core or eight cores or ten cores. They have thousands of cores on one chip, and so they’re all running at exactly the same time. That’s number one. Number two is when you try to coordinate thousands or millions or billions of cores at the same time, it’s hard to do, and right now, most computers rely on a global clock. You have one clock that’s basically synchronizing everything, saying, “Okay, everybody say something now. Everybody do some processing. Everybody say something now. Everybody do some processing.” This kind of synchronous communication, again, if you have many, many cores spread out over a wide area and so on, you don’t want to have to try to synchronize them because you can spend huge amounts of energy trying to clock everything.

Instead, the kind of communication that you tend to find on these neuromorphic platforms is asynchronous communication, where you have lots of these cores essentially running independently, and when they have something important to say, they’ll say it and send out that information and then allow the other cores to accept information at any point in time and integrate it into their computing. Again, this kind of “say things when you need to and not otherwise” is a source of power savings and, again, that sort of replicating what the brain is doing.

Then, I guess the last one would be that to have efficient computation, you only want to send information when you need to. Not only can you send it asynchronously, but you don’t need to constantly send it all the time. I guess the way to think about this is, if you think about standard neural networks, you have an artificial neuron, and usually it outputs a number between zero and one, or something like that, and it puts that number out all the time. Every single time step, it puts out that number until its input changes and it puts out a different number.

If you look into biological systems, of course you have neurons, and what they do is they either spike or they don’t at any particular point in time. They’ll emit a brief one, or they’ll say nothing. People think of this as ones and zeros a lot, which is a reasonable way to think about it, and if you do, then what you’re noticing is that you basically have this massive sparsity over time. Real biological systems are dealing with changing environments, so activity is changing all the time, the problem that needs to be solved is changing all the time, and these neurons are only saying something when they actually have something interesting to say. Otherwise, they’re silent. This hugely reduces the amount of traffic in the network. Instead of everybody yelling all the time, “I’m firing at ten hertz! I’m firing at ten hertz! I’m firing at ten hertz!” - they’ll just say something ten times a second, emit something, and that way of transmitting information greatly decreases the amount of energy that you need and the amount of information traffic that you have on your network. Again, that can be a huge power saving and a much more efficient way to compute, especially when you’re doing it over time, when you really want to do these dynamic computations, things like controlling an arm or recognizing video or processing speech or what have you.
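
A minimal sketch of that "only speak when you have something to say" principle is a single leaky integrate-and-fire neuron: instead of reporting a rate value at every time step, it emits an event only when its voltage crosses threshold. The parameters below are arbitrary illustrative choices, and real neuromorphic hardware handles the event routing in silicon rather than in a Python loop.

```python
import numpy as np

# One leaky integrate-and-fire neuron driven by a slowly changing input.
# It sends a spike (a 1) only when its membrane voltage crosses threshold,
# and stays silent otherwise. Illustrative sketch only.

dt = 0.001            # 1 ms time step
tau = 0.02            # membrane time constant (s)
v_threshold = 1.0
steps = 1000          # one second of simulated time

rng = np.random.default_rng(2)
input_current = 1.2 + 0.1 * rng.standard_normal(steps)  # noisy, roughly constant drive

v = 0.0
spikes = np.zeros(steps)
for t in range(steps):
    v += dt / tau * (input_current[t] - v)   # leaky integration
    if v >= v_threshold:
        spikes[t] = 1.0                      # the only "message" ever sent
        v = 0.0                              # reset after the spike

# A rate-coded unit would send a value at all 1000 steps; the spiking unit
# sends only a few dozen events carrying roughly the same information.
print(f"messages sent: {int(spikes.sum())} / {steps} time steps")
```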

It’s like there’s some part of your brain that’s looking for danger, and you’re in the coffee shop, and a bear comes in. And it just sits there quietly, and then it sees a bear, and it’s like, “Bear!”

Except it generally doesn’t have to be that interesting. That’s a good way to describe it, but for some neurons it’s just, “There’s a light in that location in my receptive field, and it’s now brighter.” It doesn’t have to be a bear.

This is a real quick question. If you were a betting man, are you in your field going to get to the 20-watt brain before the roboticists get to the 80-watt equivalent to the human body? Which of those is a harder problem?

Oh, jeez. I don’t think either of us are going to get there anytime soon. I think we would both be happy with a factor of a hundred, so if we could get to the 2,000-watt brain and the, whatever, 8,000-watt body, we’d probably be pretty happy.

Do you have opinions about how transfer learning works in people? That’s something we do really well; we can so effortlessly take domain knowledge in one area and use it in another that we’ve never seen before. We’re very kind of fluid in our intelligence. Do you have theories on that or ideas?

Yeah, actually. One of the very first computational models I ever built was a model of analogy. People often think of transfer learning of the kind you’re describing as being fundamentally based on the ability to do analogies, that is, to find structural similarities independent of content. That’s a way of doing reasoning across domains, and it’s a way of explaining certain kinds of creativity and all kinds of interesting features of human intelligence.

Do you think that a general intelligence, an artificial general intelligence - do you think that that sort of intelligence is a pretty straightforward thing we just haven’t quite figured out, but there’s going to be a breakthrough that is like an “aha” moment? Or is it like a Marvin Minsky type of just a hack of 100 or 1000 or 10,000 individual skills that all come together, and it’s a long, hard slog to an AGI?

I guess if I had to pick one of those two options—which are a little bit on the extremes, obviously, that’s why it’s an interesting choice—I’d probably go more the Minsky route. This is kind of in line with my claim before that I don’t think that there’s anything fundamentally theoretical that we’re missing in our understanding of how brains work and hence our quest to reconstruct or simulate them. I think a lot of what we need to do is exactly this kind of engineering, the engineering of figuring out how all the different parts work, getting them to coordinate and making them be able to learn to work together in a way that is robust and all that kind of thing. There’s definitely a lot of interesting challenges there, integration challenges and so on. But we probably will discover some things along the way, about exactly what it takes to coordinate such a big, complicated system or maybe new kinds of architectures that are critical in order to learn the structure of different kinds of domains of knowledge and so on that we really don’t know about yet. But I don’t consider those fundamental theoretical changes.

You fully accept that there is a thing called artificial general intelligence, that there will be a moment that you can look at and say, “Aha, that’s it.” Assuming you think that’s a valid concept, the range in which people think we’re going to get it, in my experience, is between five and five hundred years. I won’t ask you to pick one of the two extremes. Anywhere in the middle, where would you be?

Again, I think it would be easier for me to answer that question if I knew what criteria we were using. As is often the case, it can be a moving goalpost. We’ll definitely construct more sophisticated things than we’ve seen now over the next five to five hundred years. Will we reach something where we could look back and safely say, “Oh yeah, people from the 20th century would’ve said, ‘This is a really intelligent machine’”?

Let’s assume for a minute I’m a machine. I emote, I’m expressing creativity, I try to make the occasional humorous comment with varying degrees of success. I try to make my questions follow the prior ones and understand the nuances of what you’re saying. All of that. Assuming I’m a computer, when would we have a me, to the nearest decade or century?

Maybe you’re too hard. I don’t really know. I prefer to think of it like Commander Data from Enterprise. You know this guy?

Yeah.

I’ll assume everyone knows who Commander Data is.

I think so.

The nice thing about him as an example is that he passes almost all of those tests, but not quite all of them. They had episodes about whether he was really a person and really deserved rights, and they had episodes about his inability to understand human emotions, and episodes about him trying to get humor, which he seemed to have a hard time with, and so on. I would consider us to have achieved AGI if we had something like Commander Data.

Okay, that’s the 23rd century.

Yeah, exactly.

What would you say? Is that early or late?

To get that? My suspicion is that’s going to happen sooner than most people expect. I put it in the hundred-year range.

I have three final questions. We’re running out of time here. My first one is: there are more than a few people who are alarmist about that, and think that such an intelligence would be able to improve itself in digital time, and that it eventually would be so far ahead of us that we couldn’t understand it, and it would hardly even notice us, or maybe it wouldn’t like us, or what have you. This whole narrative around a superintelligence and why you might need to fear it—do you give that any credence?

Absolutely, yeah. I definitely think that you could probably go out of your way to build something that was really dangerous. When people worry about AI, I like to think about worrying about nuclear war and nuclear technology in the same kind of way. There are some fantastic, positive applications of this technology, and there are some very scary, negative applications of the same technology in a lot of ways.

What I tend to suggest is that we need to take the threat seriously, but we also needn’t think that it’s an unavoidable threat. Just as many people thought nuclear war was a completely unavoidable threat, time hasn’t ended, so we don’t know if we’ve definitely avoided it, but we seem to be doing pretty well so far. I think we want to treat AI with the same kind of care and sensitivity. It’s definitely not something we should dismiss, but it’s also not something which is inevitable. I think it’s the kind of thing where we can make explicit decisions to try to…

I guess I’m asking specifically about a superintelligence, the idea that Commander Data then is able to improve himself at a rate that is so fast that it evolves beyond us and we become insects to it? That isn’t something that somebody would have to - that would just be the natural order of things happening.

I guess that’s the part I would disagree with.

Okay.

Unlike biological systems, the digital systems that we create don’t have a natural order that we have no say in. The mere fact that this is something we’re building makes it seem like it’s also the kind of thing that we can put constraints on; we can give it the kinds of goals that are reasonable or not, so it probably wouldn’t be a good idea to give an AI the goal of surviving at every cost regardless of anything else. A lot of the movies that you see are predicated on these kinds of goals being essential, and also on these systems having enormous access to resources. Those are exactly the kinds of places where we could make decisions about what are reasonable things to design into our systems as goals and what are not. What things should we outlaw and what things should we try to support, and so on and so forth. So I do think that, unlike with biological systems, we’re in a position to try to make decisions about exactly what we do and don’t allow.

In the science fiction world—I assume you consume it, you mentioned Data—is there any view of the future, be it movie or TV or book or any of it, that you look at and think, “It could happen that way”—Ex Machina, Her, Matrix, any of it?

I would give different answers to different examples. I think in general, the kinds of cases that appeal to me more are the ones where we’re working closely with AIs, where they are tools, typically subordinate to humans, so, again, things like Star Trek, I guess, are examples of that. And Star Wars, too, I guess, to some extent, where it’s not the case that they’ve been let out of control or can take over or do whatever they want, but they’re definitely very sophisticated and they have all kinds of powers to do lots of things that our machines can’t do right now, and so on and so forth. But generally, I think of it as another step in the technological development of human society, one where we need to be careful that it doesn’t eliminate us, just like certain kinds of genetic technologies and nuclear technologies and so on, but one that we can successfully deal with so that we have a good future instead of the ones that are often imagined.

Final question: If people want to keep up with you and your work and your thinking and your writing and all of this stuff, what are the various options?

My lab has a webpage at http://www.compneuro.uwaterloo.ca/ and my company has a webpage at https://appliedbrainresearch.com/. Those are probably the two easiest ways to see what’s going on inside the lab and the company, respectively.

All right, we will make those links in the transcript, and people can follow you there. Well, Chris, I want to thank you so much. This is obviously a topic I’m incredibly interested in, and I thank you for your patience with a whole lot of odd questions, I’m sure.

It’s been a lot of fun. Thank you very much.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.