In this episode, Byron and Stephen discuss computational intelligence and what's happening in the brain.
Stephen Wolfram is a British-American computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and theoretical physics. He is the author of the book A New Kind of Science.
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Stephen Wolfram. Few people can be said to literally need no introduction, but he is one of them. Anyway, as a refresher, Stephen Wolfram exploded into the world as a child prodigy who made all kinds of contributions to physics. He worked with Feynman. But, unlike many prodigies, he didn't peak at 18, or 28, or 38, or 48. In fact, he probably hasn’t peaked even now. He went on to create Mathematica, which still ripples through the technology world. He wrote his magnum opus, a book called ‘A New Kind of Science.’ And he created Wolfram Alpha, an answer engine that grows better and better every day. Welcome to the show, Stephen.
Stephen Wolfram: Thanks.
I usually start off by asking, what is artificial intelligence? But I want to ask you a different question. What is intelligence?
It's a complicated and slippery concept. It's useful to start, maybe, in thinking about what may be an easier concept: what is life? You might think that was an easy thing to define. Here on Earth, you can pretty much tell whether something is alive or not. You dig down, you look in a microscope, you figure out, does it have RNA, does it have a cell membrane? Does it have all those kinds of things that are characteristic of life as we know it on Earth? The question is, what abstractly is something like life? And we just don't really know. I remember when I was a kid, there were these spacecraft sent to Mars, and they would dig up soil, and they had this definitive test for life at some point, which was, you feed it sugar and you see whether it metabolizes it. I doubt that in an abstract sense that's a good, fundamental definition of life. In the case of life on Earth, we kind of have a definition because it's connected by this, sort of, thread of history. All life is, kind of, connected by a thread of history. It's sort of the same thing with intelligence. If you ask, what is the fundamental essence of intelligence? Well, in the case of the intelligence that we know with humans and so on, it's all connected by a thread of history. If we ask, what is intelligence abstractly? That's a much harder question, and it's one I've thought about for a long time. What's necessary to say that something is intelligent is for it to be capable of some level of sophisticated computation. If all the thing does is, kind of, add two numbers together, and that's the only thing it can do, we're not likely to consider it intelligent.
But your theory is that hurricanes are computational, and icicles, and DNA. And so they're all intelligent?
As people often say, the weather has a mind of its own. The question is, can we distinguish the kind of intelligence, the kind of mind, in effect, that is associated with the computations that go on in fluid mechanics, from the kind of intelligence that we have in our brains? I think the answer is, ultimately, there really isn't a bright line distinction between those. The only thing that is special about the intelligence that we have is that it's connected to our kind of thread of history and our kind of biological evolution, and the evolution of our civilization and things like this. I don't think we can point, at some sort of scientific level, to the essential feature that means the brain is intelligent and the weather doesn't really have a mind, so to speak. I think the thing that's interesting about modern computation in AI is that we're seeing our first examples of some kind of alien intelligence. We're seeing our first examples of things that clearly have attributes very reminiscent of human-like, what we have traditionally called intelligent, behavior. But yet, they don't work in anything like the same way, and we can argue back and forth forever about whether this is really intelligence or not. And I think it becomes just a question of what we choose to make the word mean.
In my life I've been involved in a lot of, kind of, making computers do things that before only humans could do. And people had often said, “Oh, well, when computers can do this or that thing, then we'll know they're intelligent.” And one could go through the list of some of those things whether it's doing mathematical computation or doing image recognition, or doing whatever. Every time when computers actually managed to do these things the typical response is, “Oh, well, that isn't really intelligence because…” Well, because what? Usually, the real reason people think it isn't really intelligence is because somehow you can look inside and see how it works. Now, of course, to some extent, you can do that with brains too. But I think one of the things that's sort of new in recent times, is something that I've long been expecting, anticipating, working on actually, which is the appearance of computation that is doing things that are really interesting to humans but where we as humans can't really understand what's going on inside. In other words, the typical model of computation has been, you want to build a program for a particular task, you the human engineer, put the pieces together in a kind of very intentional way where you know, when I put this piece, and this piece, and this piece together then it's going to do this, and that's what I wanted it to do. Well, for example, I've been interested for a really long time in, what I call, mining the computational universe of possible programs. Just studying simple programs, for example, then going and searching trillions of them to find ones that behave in particular ways that turn out to be useful for some purpose that we have.
Well, the thing that's happened in modern times with deep learning and neural networks, and so on, is it's become possible to do that same kind of program discovery in a slightly different way than I've done it, because it turns out that one can use actually the ideas of calculus to make incremental progress in finding programs that do the things one wants them to do. But the basic idea is the same, that is, you are, by some criterion, you’re finding from this, sort of, computational universe of possible programs, you’re finding programs that serve some purpose that’s useful to us. Whether that purpose is identifying elephants from tea cups, or whether that purpose is translating between human languages or whatever else. And, the thing that is interesting and maybe a little bit shocking right now, is the extent to which when you take one of these programs that have been found by, essentially search, in this space of possible programs, and you look at it, and you say, “How does it work?” And you realize you really don't know how it works. Each individual piece you can identify what it's doing, you can break it down, look at the atoms of the program and see how they work. But when you say, “What's the big picture? What's it really doing? What's the ultimate story here?” The answer is we just don't know.
You mean like move 37 in AlphaGo? This move that even the greatest player in the world was like, “What?”
I haven't followed that particular system. But I tried to program a computer to play Go in 1973 and discovered it was hard.
But to back up a minute, wouldn't you say Google passed that point a long time ago? If you say, “Why did this page rank number two and not number three?” Even Google would look at it and go, “I don’t know. Who knows?” It's an alchemy of so many different things.
I don't know, I haven't seen the source code of the Google search engine. I know from my own search systems that these things are kind of messy. Hundreds of signals go in and they're ranked in some way or another. I think in that particular case, the backtrace of, “OK, it was these signals that were important in this thing,” is, to some extent, a little simpler, but it's the same. That's a case where it tends to be more of a, I think, one-shot kind of thing. That is, you evaluate the values of these signals and then you say, “OK, let's feed them into some function that mushes together the signals and decides what the ranking should be.” The thing I think tends to be more shocking, more interesting, hasn't really happened completely with the current generation of deep learning neural nets, although it's beginning to happen. It has happened very much so with the kind of programs that I studied a lot, like cellular automata, and a bunch of the kinds of programs that we've discovered, sort of, out in the computational universe, that we use to make Wolfram Alpha work, and to make lots of other algorithms that we build work. In those kinds of programs, what happens is not just a one-shot thing where one messy function is applied to some data to get a result. It's a sequence of, actually, not very messy steps. Often, a sequence of simple, identical steps, but you apply it 10,000 times, and it's really unclear what's going on.
I want to back up if I can just a minute, because my original question was, what is intelligence? And you said it's computation. You're well known for believing everything is computation -- time and space, and the hurricane, and the icicle, and DNA and all of that. If you really are saying everything is intelligence, isn't that begging the question, like you're saying, “Well, everything's intelligence”? What is it? I mean, for instance, the hurricane has no purpose. You could say intelligence is purposeful action with a goal in mind.
Purposeful action? You're then going to slide down another slippery slope. When you try and start defining purpose, for us as humans, we say, “Well, we're doing this because…” and there'll be some story that eventually involves our own history, or the history of our civilization, or our cultural beliefs, or whatever else, and it ends up being really specific. If you say, “Why is the earth going around in its orbit?” Does it have a purpose? I don't know. You could say, “Well, it's going around in this particular orbit because that minimizes the action,” a, sort of, technical term in mechanics, associated with this mechanical system. Or, you could say it's going around in its orbit because it's following these equations. Or, it's going around in its orbit because the solar system was formed in this way and it started going around in its orbit. When we talk about human explanations of purpose, they quickly devolve into a discussion of things that are pretty specific to human history, and so on. If you say, “Why did the pulsar magnetosphere produce this blip?” Well, the answer is there'll be a story behind it. It produced that blip because there was this imperfection at some space-time position in the crust of the pulsar, a neutron star, and that had this consequence, and that had this consequence, and so on. There's a story.
Well, you’re conflating words. ‘Because’ is intentional, but ‘how’ is not. So, the question you're asking is, “How did that happen?” And that is bereft of purpose and therefore bereft of intelligence. But, to your point, if computation is intelligence, then, by definition, there's no such thing as non-intelligence. And I'm sure you've looked at something and said, “That's not very intelligent.”
No, no, no. There's a definite threshold. If you look at a system and all it does is stay constant over time, you start it in some state and it just stays that way, then nothing exciting is going on there. There are plenty of systems where, for example, it will just repeat. What it does is just repeat predictably over and over again. Or, you know, it makes some elaborate nested pattern, but it's a very predictable pattern. As you look at different kinds of systems, there's this definite threshold that gets passed, and it's related to this thing I call the ‘principle of computational equivalence’, which is basically the statement that, beyond some very low level of structural complexity of the system, the system will typically be capable of a certain level of sophisticated computation, and all systems are capable of that same level of sophisticated computation. One facet of that is the idea of universal computation, that everything can be emulated by a Turing machine, and can emulate a Turing machine. But there's a little bit more to this principle of computational equivalence than the specific feature of universal computation. Basically, the idea is that it could have been otherwise.
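The behavior classes just described, constant, repetitive, nested, and seemingly random, show up directly in elementary cellular automata, the simple programs Wolfram has studied extensively. A minimal sketch (the grid width and step count are arbitrary display choices):

```python
# Elementary cellular automaton: each cell's next state depends on itself
# and its two neighbors via an 8-entry lookup table (the "rule number").
def step(cells, rule):
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4
                      + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

def show(rows):
    for row in rows:
        print("".join("#" if c else "." for c in row))

# Rule 250 settles immediately into a simple, predictable repetitive
# pattern; rule 30, an equally simple rule, produces behavior that looks
# effectively random -- the threshold being described.
show(run(250))
print()
show(run(30))
```

Both rules are the same size of program; the difference in what they do is the point of the principle of computational equivalence.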
If we'd been having this conversation over 100 years ago, people had mechanical calculators at that time. They had ones that did one operation, ones that did another kind of operation. We might be having a discussion along the lines of, “Oh, look at all these different kinds of computers that exist. There'll always be different kinds of computers that one needs.” Turns out that's not true. It turns out all one needs is this one kind of thing that's a universal computer, and that one kind of computer covers all possible forms of computation. And so then the question is, if you look at other kinds of systems, do they do computation at the same level as things like universal computers, or are there many different levels, many different, incompatible kinds of computation that get done? And the thing that has emerged from both general discoveries that have been made and specifically a lot of stuff I've done, is that, no, anything that we can seriously imagine could be made in our universe seems to have this one kind of computation, this one level of computation that it can do. There are things that are below that level of computation and whose behavior is readily predictable, for example, by a thing like a brain that is at this kind of uniform sophisticated level of computation. But once you reach that sophisticated level of computation, everything is kind of equal. And, in fact, if that wasn't the case, if for example there was a whole spectrum of different levels of computation, then we would expect that the top computer, so to speak, would be able to say, “Oh, you lower, lesser computers, you're wasting your time. You don't need to go through and do all those computations. I, the top computer, can immediately tell you what's going to happen. The end result of everything you're doing is going to be this.” It could be that things work that way, but it isn't, in fact, the case.
Instead, what seems to be the case is that there's this one, kind of, uniform level of computation, and in a sense it's that uniformity of level of computation that has a lot of consequences that we're very familiar with. For example, if nature mostly consisted of things whose level of computational sophistication was lower than the computational sophistication of our brains, we would readily be able to work out what was going to happen in the natural world, sort of, all of the time. And when we looked at some complicated weather pattern or something, we would immediately say, “Oh, no, we're a smarter computer, we can just figure out what's going to happen here. We don't need to let the computation of the weather take its course.” What I think happens is that this, sort of, equality of computation leads to a lot of things that we know are true. For example, that's why it seems to us that the weather has a mind of its own. The weather almost seems to be acting with free will. We can't predict it. If a system is readily predictable by us, then it will not seem to be, kind of, free in its will. It will not seem to be free in the actions it takes. It will seem to be just something that is following some definite rules, like a 1950s sci-fi robot type thing.
This whole area of what purpose is, and how we identify it, is, I think, in the end, a very critical thing to discuss in terms of the fundamentals of AI. One of the things people ask is, “OK, we've got AI, we've got increasing automation of lots of kinds of things. Where will that end?” And I think one of the key places where it will end is that purpose is not something that is available to be automated, so to speak. It doesn't make sense to think about automating purpose. The reason is the same one I was just giving: you can't distinguish these different things and say, “That's a purpose. That's not a purpose.” Purpose is the kind of thing that is, in some sense, tied to the bearer of that purpose, in this case humans, for example.
When I read your writings, or when I talk to you, you frequently say this thing: people keep thinking that there's something special about them; they keep coming up with things a machine can't do; they don't want to grant the machine intelligence because… you come across as being really down on people. I would almost reverse it to say, surely there isn't some kind of an equivalence between a hurricane, an iPhone, and a human being. Or is there? And if there isn't, what is special about people?
What's special about people is all of our detailed history.
That just makes us different from other things. The iPhone has a detailed history, and so does the hurricane. That isn't special, that's just unique.
Well, what's the difference between special and unique? It’s kind of ironic because, as you know, I'm very much a person who's interested in people.
That’s what I'm curious about, like, why is that? Because you seem to take this perverse pride in saying, “Oh, people used to think computers could never do this, and now they do it. And then they said they could never do this, and they do it.” I just kind of wonder; I try to reconcile that with the other part of you which is clearly a humanist. It's almost bifurcated: half of your brain has intellectually constructed this model of moral equivalence between hurricanes and people, and then the other half of you kind of doesn't believe it.
You know, one of the things about doing science is, if you try to do it well, you kind of have to go where the science leads. I didn't come into this believing that that will be the conclusion. In fact, I didn't expect that to be the conclusion. I expected that I would be able to find some sort of magnificent bright line. In fact, I expected that these simple cellular automata I studied would be too simple for physics, too simple for brains, and so on. And it took me many years actually to come to terms with the idea that that wasn't true. It was a big surprise to me. Insofar as I might feel good about my efforts in science, it's that I actually have tried to follow what the science actually says, rather than what my personal prejudices might be. It is certainly true that personally, I find people interesting; I'm a big people enthusiast so to speak.
Now in fact, what I think is that the way things work, in terms of the nature of computational intelligence in AI, is actually not anti-people in the end. In fact, in some sense it's more pro-people than you might think. Because what I'm really saying is that, because computational intelligence is sort of generic, it's not like we have the AI, which is a competitor. It's not, “There's not just going to be one intelligence around, there are going to be two.” No, that's not the way it is. There's an infinite number of these intelligences around. And so, in a sense, the non-human intelligence we can think of as almost a generic mirror that we imprint in some way with the particulars of our intelligence. In other words, what I'm saying is, eventually, we will be able to make the universe, through computation and so on, do our bidding more and more. So then the question is, “What is that bidding?” And what we're seeing here is, if anything, in some ways an amplification of the role of the human condition, rather than its diminution, so to speak. In other words, we can imprint human will on lots and lots of kinds of things. Is that human will somehow special? Well, it's certainly special to us. If we go into a competition over who's more purposeful than whom, that degenerates into a meaningless question of definition, although, as I say, to us we will certainly seem to be the most purposeful, because we're the only things about which we can actually tell that whole story about purpose. In other words, I actually don't think it's an interesting question. It maybe was not intended this way, but my own personal trajectory in these things is that I've tried to follow the science to where the science leads. I've also tried, to some extent, to follow the technology to where the technology leads. You know, I'm a big enthusiast of personal analytics and storing all kinds of data about myself and about all kinds of things that I do, and so on.
I certainly hope and expect one day to increasingly make the bot of myself, so to speak. My staff claims, maybe flattering me, that my attempt to make the SW email responder will be one of the last things that gets successfully turned into a purely automated system, but we will see.
But the question is, to what extent when one is looking at all this data about oneself and turning what one might think of as a purely human existence, so to speak, into something that’s full of gigabytes of data and so on -- is that a dehumanizing act? I don’t think so. One of the things one learns from that is that, in a sense, it makes the human more important rather than less. Because, there are all these little quirks of, “What was the precise way that I was typing keystrokes on this day as opposed to that day?” Well, it might be “who cares?” but when one actually has that data, there's a way in which one can understand more about those detailed human quirks and recognize more about those in a way that one couldn't, without that data, if one was just, sort of, acting like an ordinary human, so to speak.
So, presumably you want your email responder to live on after you. People will still be able to email you in a hundred years, or a thousand years and get a real Stephen Wolfram reply?
I know that you have this absolute lack of patience anytime somebody seems to talk about something that tries to look at these issues in any way other than just really scientifically.
I always think of myself as a very patient person, but I don't know, that may look different from the outside.
But, I will say, you do believe consciousness is a physical phenomenon, like, it exists. Correct?
What on Earth does that mean?
So, alright. Fair enough. See, that's what I mean exactly.
Let me ask you a question along the same line. Does computation exist?
What on Earth does the word ‘exist’ mean to you?
Is that what it is? It’s not the ‘consciousness’ you were objecting to, it's the word ‘exist’.
I guess I am; I'm wondering what you mean by the word ‘exist’.
OK, I will instead just rephrase the question. You could put a heat sensor on a computer, and you could program it so that if you hold a match to the heat sensor, the computer plays an audio file that screams. And yet we know with people, if you burn your finger, it's something different. You have a first-person experience of a burned finger in a way the computer doesn't: the computer can only sense it, you feel it.
Wait. Why do you think the computer isn't having a first-person experience?
It’s not a person. I am kidding. If you believe the computer experiences pain, I would love to have that conversation.
Let's talk about the following situation. Let's talk about a neural net. I mean, they're not that sophisticated yet, and they're not really recurrent; they tend to just feed the data through the network. But, you know, we've got a neural net and it's being trained by experiences that it's having. Then the neural net has some terrible experience. It's terribly traumatic for the neural net. That trauma will have a consequence. If we were to look, sort of forensically, at how that had affected the weights in the neural net, we would find that there were all these weights that were affected in this or that way by the traumatic experience the neural net has had. We then have to tease apart: what's the difference between the effect of that experience the neural net had, and the experience the brain has?
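The forensic-trace idea can be illustrated with a deliberately tiny sketch: a single logistic neuron updated by one gradient step. The inputs, targets, and learning rate below are arbitrary, made-up values; the point is only that a confidently wrong, "traumatic" example shifts the weights far more than an ordinary one does.

```python
import math

# One logistic neuron; each training example nudges the weights,
# and an extreme example leaves a correspondingly large trace.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sgd_step(w, b, x, target, lr=0.5):
    y = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)  # forward pass
    err = y - target  # cross-entropy gradient w.r.t. the pre-activation
    return [wi - lr * err * xi for wi, xi in zip(w, x)], b - lr * err

w, b = [0.1, -0.2], 0.0
ordinary = ([0.5, 0.5], 1.0)  # an ordinary experience: weak input
extreme = ([5.0, 5.0], 0.0)   # a "traumatic" one: strong input, wrong guess

w_ord, _ = sgd_step(w, b, *ordinary)
w_ext, _ = sgd_step(w, b, *extreme)

def shift(old, new):
    # Total movement of the weights: the trace the experience leaves.
    return sum(abs(u - v) for u, v in zip(old, new))

print(shift(w, w_ord))  # small weight change
print(shift(w, w_ext))  # much larger change left behind
```

Scaled up to billions of weights, this is exactly the kind of distributed residue one would find when looking "forensically" at a trained network.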
That’s even more insidious than somehow putting people, and hurricanes, and iPhones on kind of the same level. That's even worse because, in a way, what you're saying is: I had this car, and I'm lost in the woods, and the car's overheating, and the engine is out of oil, and the tires are torn up, and I'm tearing that car up. But I'm being pursued or something, and I have to get out of the woods. I essentially just destroy this car making my way out. Your assumption is, “Well, that car experienced something.” You know, you were afraid of getting eaten by lions, but you killed the car in doing it. If you somehow put those two things on the same level, you can't really make that choice.
Well, the morality of AI is a complicated matter. For example, if you consider…
I'm just asking about the basis of human rights. The basis of human rights is that humans feel pain. The reason we have laws against harming animals is because animals feel pain. By what you're suggesting, if you code an infinite loop, by golly, you should get fined, or go to jail.
Yeah. Well, the question, to put it in a different way: suppose I succeed in making a bot, an autoresponder, that's like me and responds to e-mail independent of me. And, for example, let's say I'm no longer around, I'm dead, and all that's left is the autoresponder. What are the obligations? How does one think about the autoresponder relative to thinking about the person that the autoresponder represents? What do you think? I haven't actually thought this through properly, but I think if somebody says, “Let's delete the autoresponder,” it's interesting. What are the moral aspects of doing that?
If your argument is it's the moral equivalent of killing a living person, I would love to hear that logic. You could say that’s a tragedy, that's like burning the Mona Lisa, we would never want to do it. But to say that it’s the equivalent of killing Stephen Wolfram a second time, I mean, I would love to hear that argument.
I don't know if that's right. I have not thought that through. But my reaction to you saying the computer can't feel pain is, I don't know why on Earth you're saying that. So, let's unpack that statement a little bit. I think it's interesting to unpack. Let's talk a little bit about how brains might work, and what the world looks like at a time when we really know, you know, we've solved the problem of neurophysiology, we've solved, sort of, the problem of neuroscience, and we can readily make a simulation of a brain. We've got a simulated brain, and it's a simulated Byron, and it's a simulated Stephen, and those simulated brains can have a conversation just like we're having a conversation right now. But unlike our brains, it's easy to go look at every neuron firing, basically, and see what's happening. And then we start asking ourselves… The first question is, do you think then that the neuron-level simulated brain is capable of feeling pain, and having feelings, and so on? One would assume so.
We would part company on that but I agree that many people would say that.
Well, I can't see how you would not say that unless you believe that there is something about the brain that is not being simulated.
Well, let's talk about that. I assume you're familiar with the OpenWorm project.
C. elegans is this nematode worm. Eighty percent of all animals on the planet are nematode worms. And they had their genome sequenced, and their nervous system has 302 neurons.
There is a difference between male and female worms actually. I think the male worm has some extra neurons.
Fair enough. I don't know, that may be the case. Two of the 302, I understand, aren't connected to the other ones. Just to set the problem up: for 20 years people have said, “Let's just model these 302 neurons in a computer, and let's just build a digital nematode worm.” And of course, not only have they not done it, but there isn't even a consensus in the group that it is possible; what is occurring in those neurons may be happening at the Planck level. Your basic assumption is that physics is complete, and that the model you just took of my brain is the sum total of everything going on scientifically, and that is far from proven. In fact, there is more evidence against that proposition.
Let’s talk about this basic problem. Science - a lot of what goes on in science is an attempt to make models of things. Now, models are by their nature incomplete and controversial. That is, “What is a model?” A model is a way of representing and potentially predicting how a system will behave that captures certain essential features that you're interested in, and elides other ones away. Because, if you don't elide some features away, then you just have a copy of the system.
That's what they're trying to do, though. They're trying to build an instantiation; it's not a simulation.
No, but there is one case in which this doesn't happen. If I'm right that it's possible to make a complete, fundamental model of physics, then that is the one example in which there will be a complete model of something in the world. There's no approximation; every bit works in the model exactly the way it does in real life. But, above that level, when you are saying, “Oh, I'm going to capture what's going on in the brain with a model,” what you mean by that is, “I'm going to make some model which has a billion degrees of freedom,” or something. And that model is going to capture everything essential about what's happening in the brain, but it's clearly not going to represent the motion of every electron in the brain. It's merely going to capture the essential features of what's happening in the brain, and that's what 100 percent of models do, other than in this one case of modeling fundamental physics. They always say, “I'm capturing the part that I care about, and I'm going to ignore the details that are somehow not important for me to care about.” When you make a model of anything, whether it's a brain, whether it's a snowflake, whether it's a plant leaf or something, any of these kinds of things, it's always controversial. Somebody will say, “This is a great model because it captures the overall shape of a snowflake,” and somebody else will say, “No, no, no, it's a terrible model because, look, it doesn't capture this particular feature of the 3-D structure of ridges in the snowflake.” We're going to have the same argument about brains.
You can always say there's some feature of brains the model misses. For example, you might have a simulation of a brain that does a really good job of representing how neuron firings work, but if you bash the brain on the side of its head, so to speak, and give it a concussion, the simulation doesn't correctly represent the concussion, because it isn't something which is physically laid out in three-dimensional space the way that the natural brain is.
But wasn't that your assumption of the problem you were setting up, that you have perfectly modeled Byron’s brain?
That's a good point. The question is, for what purpose is the model adequate? Let's say the model is adequate if, listening to it talk over the phone, it is indistinguishable in behavior from the actual Byron. But then, if you see it in person, and you were to connect eyes to it, maybe the eye saccades would be different, or it wouldn't have those, or whatever else. Models, by their nature, aren't complete, but the idea of science, the idea of theoretical science, is that you can make models which are useful. If you can't make models, if the only way to figure out what the system does is just to have a copy of the system and watch it do its thing, then you can't do theoretical science in the way that people have traditionally done theoretical science.
Let's assume that we can make a model of a brain that is good enough that it can, for the purposes we most care about, emulate the real brain. So now the question is, “I've got this model brain; I can look at every feature of how it behaves when I ask it a question, or when it feels pain, or whatever else.” But when I look at every detail, what can I say from that? What you would like to be able to tell is some overarching story. For example, “The brain is feeling pain.” But that is a very complicated statement. What you would otherwise say is, there are a billion neurons and they have this configuration of firings and synaptic weights, and God knows what else. Those billion neurons don't allow you to come up with a simple-to-describe story like, “The brain is feeling pain.” It's just, here’s a gigabyte of data or something; it represents the state of the brain. That doesn't give you the human-level story of “the brain is feeling pain.” Now, the question is, will there be a human-level story to be told about what's happening inside brains? I think that's a very open question. So, for example, take a field like linguistics. You might ask the question, how does a brain really understand language? Well, it might be the case that you can, sort of, see the language coming in, you can see all these little neuron firings going on, and then, at the end of it, some particular consequence occurs. But then the question is, in the middle of that, can you tell the story of what happened?
Let me give you an analogy which I happen to have been looking at recently, which might at first seem kind of far-fetched but I think is actually very related. The analogy is mathematical theorems. For example, I've done lots of things where I've figured out mathematical truths using automated theorem proving. One in particular, which I did 20 years ago, was finding the simplest axiom system for logic, for Boolean algebra. This particular proof was generated automatically; it’s 130 steps or so. It involves many intermediate stages, many lemmas. I've looked at this proof, off and on, for 20 years, and the question is, can I tell what on Earth is going on? Can I tell any story about what's happening? I can readily verify that, yes, the proof is correct; every step follows from every other step. The question is, can I tell somebody a humanly interesting story about the innards of this proof? The answer is, so far, I've completely failed. Now, what would it take for there to be such a story? Kind of interesting. If some of the lemmas that showed up in the intermediate stages of that proof were, in a sense, culturally famous, I would be in a much better position. That is, when you look at a proof that people say, “Oh, yeah, this is a good proof of some mathematical theorem,” a lot of it is, “Oh, this is Gauss'” such-and-such theorem, “this is Euler’s” such-and-such theorem, used at different stages in the proof. In other words, those intermediate stages are things about which there is a whole, kind of, culturally interwoven story that can be told, as opposed to just, “This is a lemma that was generated by an automated theorem-proving system. We can tell that it's true, but we have no idea what it's really about, what it's really saying, what its significance is, what its purpose is,” any of these kinds of words.
That's also, by the way, the same thing that seems to be happening in the modern neural nets that we're looking at. Let’s say we have an image identifier. The image identifier, inside itself, is making all kinds of distinctions, saying, “This image is of type A. This is not of type B.” Well, what are A and B? They might be human-describable things: “This image is very light. This image is very dark. This image has lots of vertical stripes. This image has lots of horizontal stripes.” They might be descriptors of images for which we have developed words in our human languages. In fact, they're probably not. In fact, they are, sort of, emergent concepts which are useful, kind of, symbolic concepts at an intermediate stage of the processing in this neural net, but they're not things which we have, in our sort of cultural development, chosen to describe with words and things. We haven't provided the cultural anchor for that concept. I think the same thing is true -- so, the question is, when we look at brains and how they work and so on, and we look at the inner behavior and we've got a very good simulation, and we see all this complicated stuff going on, and we generate all this data, and we can see all these bits on the screen and so on. And then we say, “OK, well, what's really going on?” Well, in a sense, then we're doing standard natural science. When we're confronted with the world, we see all sorts of complicated things going on and we say, “Well, what's really going on?” And then we say, “Oh, well, actually there's this general law,” like the laws of thermodynamics, or some laws of motion, or something like this. There's a general law that we can talk about, that describes some aspect of what's happening in the world.
So, a big question then is, when we look at brains, how much of what happens in brains can we expect to be capable of telling stories about? Now, obviously, when it comes to brains, there's a long history in psychology, psychoanalysis etc. that people have tried to make up, essentially, stories about what's happening in brains. But, we’re kind of going to know at some point. At the bit level we're going to know what happens in brains and then the question is, how much of a story can be told? My guess is that that story is actually going to be somewhat limited. I mean, there's this phenomenon I call ‘computational irreducibility’ that has to do with this question of whether you can effectively make, sort of, an overarching statement about what will happen in the system, or whether you do just have to follow every bit of its behavior to know what it's going to do. One of the bad things that can happen is that, we have our brain, we have our simulated brain and it does what it does and we can verify that, based on every neuron firing, it's going to do what we observe it to do but then, when we say, “Well, why did it do that?” We may be stuck having no very good description of it.
This phenomenon is deeply tied into all kinds of other fundamental science issues. It's very much tied into Gödel’s theorem, for example. In Gödel’s theorem, the analogy is this: you say, “OK, I'm going to describe arithmetic, and I'm going to say arithmetic is that abstract system which satisfies the following axioms.” Then you start trying to work out the consequences of those axioms, and you realize that, in addition to representing ordinary integers, those axioms allow all kinds of incredibly exotic integers which, if you ask certain kinds of questions, will give different answers from ordinary integers. And you might say, “OK, let's try to add constraints. Let's try to add a finite number of axioms that will lock down what's going on.” Well, Gödel’s theorem shows that you can't do that. It’s the same sort of mathematical structure, scientific structure, as this whole issue of not being able to expect simple descriptions of what goes on in lots of these kinds of systems. I think one of the things this leads to is that, both in our own brains and in other computational intelligences, there will be lots of kinds of inner behavior that we may never have an easy way to describe in large-scale symbolic terms. And it's a little bit shocking that we are now constructing systems where we may never be able to say, in a sort of human-understandable way, what's going on inside. You might say, “OK, the system has produced this output. Explain that output to me.” Just like, “The following mathematical theorem is true. Explain why it's true.” Well, if the “why it's true” comes from an automated theorem prover, there may not be an explanation that humans can ever wrap their brains around.
The main issue is, you might say, “Well, let's just invent new words and a language to describe these new lumps of computation that we see happening in these different systems.” The problem, and that's what one saw even from Gödel’s theorem, is that the number of new concepts one has to invent is not finite. That is, as you keep on going, as you keep on looking at different kinds of things that brains, or other computational systems, can do, there's an infinite diversity of possible things, and there won't be any point where you can say, “OK, there's this fixed inventory of patterns that you have to know about, that you can maybe describe with words, and that's all you need to be able to say what's going to happen in these systems.”
So, as AIs get better and we personify them more and we give them more ability, and they seem to actually experience the world, whether they really do or not, at what point, in your mind, can we no longer tell them what to do, can we no longer have them go plunge our toilet when it stops up? At what point are they afforded rights?
Well, what do you mean by ‘can’?
Well, ‘can’ as in, I mean, surely we can coerce but, I mean, ethically ‘can’.
I don't know the answer to that. Ethics is defined by the way people feel about things. In other words, there is no absolute ethics.
Well, OK. Fair enough. I'll rephrase the question. I assume your ethics preclude you from coercing other entities into doing your bidding. At what point do you decide to stop programming computers to do your bidding?
And at what point do I let them do what they want, so to speak?
When do I feel that there is a moral need to let my computer do something just because? Well, let me give you an example. I often have computers do complicated searches for things that take months of CPU time. How do I feel about cutting the thing off moments before it might have finished? Well, I usually don't feel like I want to do that. Now, do I not want to do that purely because I want to get the result? Or do I feel some kind of feeling, “Oh my gosh, the computer has done so much work, I don't just want to cut it off”? I'm not sure, actually.
Do you still say thank you to the automatic ticket thing when you leave the parking garage?
Yes, to my children’s great amusement. I have made a principle of doing that for a long time.
Stephen, I don’t know how to say this, but I think maybe you’ve been surrounded by computers so much that you kind of have Stockholm Syndrome and identify with them.
More to the point, you might say I've spent so much time thinking about computation that maybe I’ve become computation myself as a result. Well, in a certain sense, yes, absolutely, that's happened to me, in the following sense. We think about things, and how do we form our thoughts? Well, some philosophers think that we use language to form our thoughts. Some think thoughts are somewhat independent of language. One thing I can say for sure: I’ve spent some large part of my life doing computer language design, building the Wolfram Language system and so on, and absolutely, I think in patterns that are determined by that language. That is, if I try to solve a problem, I am, both consciously and subconsciously, trying to structure that problem in such a way that I can express it in that language, so that I can use the structure of that language as a way to help me understand the problem.
And so absolutely it’s the case that, as a result of basically learning to speak computer, as a result of the fact that I formulate my thoughts in no small way using the Wolfram Language, using this computational language, I probably think about things in a different way than I would if I had not been exposed to computers. Undoubtedly, that kind of structuring of my thinking is something that affects me, probably more than I know, so to speak. But do I think about people, for example, the way I think about computational systems? Actually, most of my thinking about people is probably based on gut instinct and heuristics. I think the main thing I might have learned from my study of computational things is that there aren't simple principles when it comes to looking at the overall behavior of something like people. If you dig down and ask, “How do the neurons work?” we may be able to answer that. But the very fact that this phenomenon of computational irreducibility happens is almost a denial of the possibility that there is going to be a simple overall theory of, for example, people's actions, or certain kinds of things that happen in the world, so to speak. People used to think that when we applied science to things, it would make them very cut and dried. I think computational irreducibility shows that that's just not true: there can be an underlying science, one can understand how the components work and so on, but it may not be that the overall behavior is cut and dried. It's not like the kind of 1950s science-fiction robots where the thing would kind of start having smoke come out of its ears if there were some logical inconsistency that it detected in the world. That kind of simple view of what can happen in computational systems is just not right.
Probably, if there's one thing that's come out of my science, in terms of my view of people at that level, it's that, no, I doubt that I'm really going to be able to say, “OK, if this then that”, you know, kind of apply very simple rules to the way that people work.
But, hold on a second. I thought the whole way you got to something that looked like free will was, “No, there isn’t, but the number of calculations you would have to do to predict the action is so many that you can't do it, so it's effectively free will, but it isn’t really.” Do you still think that?
That’s correct. Absolutely, that’s what I think.
But the same would apply to people.
With a sufficiently large computer you would be able to…
Yes, but the whole point is, as a practical matter in leading one's life, one isn't doing that, that's the whole point.
But to apply that back to Byron’s brain feeling pain, couldn't that be the same sort of thing? It's like, “Well, yeah, maybe that’s just calculation, but the amount of calculation that would have to happen for a computer to feel pain is just not calculable.”
No. There’s a question of how many neurons, how much accuracy, what the cycle time is, etc. But we're probably coming fairly close, and we will, in coming years, get decently close to being able to emulate, with digital electronics, the important parts of what happens in brains. You might always argue, “Oh, it's the microtubules on every neuron that are really carrying the information.” Maybe that's true; I doubt it. And that’s many orders of magnitude further than what we can readily get to with digital electronics over the next few years.
But, either you can model a brain and know what I'm going to say next, and know that I felt pain, or you can’t, and you can preserve some semblance of free will.
No, no, no. Both things are true. You can absolutely have free will even if I can model your brain at the level of knowing what you will say next. If I do the same level of computation that your brain is doing, then I can work out what you will say next. But the fact is, to do that, I effectively have to have a brain that's as powerful as your brain, and I have to be just following along with your brain. Now, there is a detail here, which is this question of levels of modeling and so on: how much do you have to capture, and do you have to go all the way down to the atoms, or is it sufficient to just say, “Does this neuron fire or not?” And yes, you're right, that's sort of a footnote to this whole thing. When I say, “How much free will?” Well, free enough will that it takes a billion logical machine operations to work out whether you will say true or false. If it takes a billion operations to tell whether you are going to say true or false, should one say that you are behaving as if you have free will, even though, were you to do those billion operations, you could deterministically tell you're going to say true, or you're going to say false? As a practical matter, in interacting with you, I would say you’re behaving as if you have free will, because I can't immediately do those billion operations to figure out the answer. In a future world where we are capable of doing more computation more efficiently, for example, we may eventually have ways to do computation that are much more efficient than brains. And at that point, we have our simulated brains, and we have our top-of-the-line computers made at an atomic scale or whatever else. And, yes, it may very well be the case that, as a practical matter, the atomic-scale computers out-compute simulated brains by factors of trillions.
I’ll only ask one more question along this line, because I must be somewhat obtuse. I’m not a very good chess player. If I download a program on my iPad, I play at level four out of ten, or something. So, say I flip it up to level five. I don't know what move the computer is going to make next because it's going to beat me. I don't have a clue what it's going to move next, that's the whole point. And yet, I never think, “Oh, because I don't know, it, therefore, must have free will.”
That's true. You probably don't think that. Depends on what the computer is doing. There's enough of a background of chess playing that that's not an immediate question for you. If the computer was having a conversation with you, if suddenly, in 2017, the computer was able to have a sort of Turing Test complete conversation with you, I think you would be less certain. I think that, again, there is a progression of -- an awful lot of what people believe and how people feel about different things, does the computer have consciousness, does it blah blah? An awful lot of that, I think, ends up coming about because of the historical thread of development that leads to a particular thing.
In other words, imagine -- it’s an interesting exercise -- that you took a laptop of today back to Pythagoras or something. What on earth would he think of it? What would he think it was? How would he describe it? I wondered about this at some point. My conclusion is he’d start talking about, “What is this thing? It's like disembodied human souls.” Then you explain, “Well, no, it's not really a disembodied human soul.” He says, “Well, where did all these things that are on the computer come from?” “Well, they were actually put there by programmers, but then the computer does more stuff.” And it gets very complicated. Pythagoras is a particularly interesting case because of his thinking about souls and his thinking about mathematics. But the exercise is to imagine, at a different time in history, what somebody would think the technology of today was actually like. And that helps us to understand the extent to which we are prisoners of our particular time in history. Take the thought experiment of: what if we have a super computer, in some sense, that can predict what our brains do a trillion times faster than our brains actually do it? How will that affect our view of the world? My guess is that, if that happens, and it presumably will in some sense, we will by that time have long outsourced much of our thinking to machines that just do it faster than we do. Just like we could decide that we're going to walk everywhere we want to go, but actually we outsource much of our transportation to cars and airplanes and things like that, which do it much faster than we do. You could say, “Well, you're outsourcing your humanity by driving in a car.” Well, we don't think that anymore, because of the particular thread of history by which we ended up with cars.
Similarly, you might say, “Oh my gosh, you're outsourcing your humanity by having a computer think for you.” In fact, that argument comes up when people use the tools we've built to do their homework or whatever else. But, in fact, as a practical matter, people will increasingly outsource their thinking processes to machines.
And then the question is, and this sort of relates to what I think you are going to ask about, should humans be afraid of AIs, and so on. That relates to, well, where does that leave us humans, when all these things, including the things that you still seem to believe are unique and special to humans, but I'm sure they're not, when all of those things have been long overtaken by machines? Where does that leave us? I think the answer is this: you can have a computer sitting on your desk, doing the fanciest computation you can imagine. It's working out the evolution of Turing machine number blah blah blah, and it's doing it for a year. Why is it doing that? Well, it doesn't really have a story about why it's doing it. It can't explain its purpose, because if it could explain it, it would be explaining it in terms of some kind of history, in terms of some kind of past culture of the computer, so to speak. The way I see it, computers on their own simply don't have this notion of purpose. In a sense, one can imagine that the weather has a purpose that it has for itself. But this notion of purpose that is connected to what we humans do, that is a specifically human kind of thing, and that's something that nobody gets to automate. It doesn't mean anything to automate that. It doesn't mean anything to say, “Let's just invent a new purpose.” We could pick a random purpose. We could have something where we say, “OK, there are a bunch of machines and they all have random purposes.” If you look at different humans, in some sense there’s a certain degree of randomness, and there are different purposes. Not all humans have the same purposes. Not all humans believe the same things, have the same goals, etc. But if you ask, “Is there something intrinsic about the purpose for the machines?” I don't think that question really means anything.
It ultimately reflects back on the thing I keep on saying about the thread of history that leads humans to have and think about purposes in the ways that they do.
But if that AI is alive -- you began by taking my question back to what is life -- and if you get to a point where you say, “It's alive,” then we do know that living things’ first purpose is to survive. So, presumably, the AI would want to survive. Their second purpose is to reproduce, their third purpose is to grow. Those all naturally flow out of the quintessence of what it means to be alive. “Well, what does it mean for me to be alive?” It means I have to have a power source. “OK, I need a power source. OK, I need mobility.” And so it creates all of those just from the simple fact of being alive.
I don't think so. I think you're projecting that onto what you define as being alive. It is correct that there is, in a sense, one 0th-level purpose, which is that you have to exist if you want to have any purpose at all. If you don't exist, then everything is off the table. But the question of whether a machine, a program, or whatever else has a desire, in some sense, to exist, that's a complicated question. It's like saying, “Are there going to be suicidal programs?” Of course. There are right now. Many programs, their purpose is to finish, terminate, and disappear. And that's much rarer, perhaps fortunately, for humans.
So, what is the net of all of this to you then? You hear certain luminaries in the world say we should be afraid of these systems, you hear dystopian views about the world of the future. You've talked about a lot of things that are possible and how you think everything operates but what do you think the future is going to be like, in 10 years, 20, 50, 100?
What we will see is an increasing mirror on the human condition, so to speak. That is, what we are building are things that essentially amplify any aspect of the human condition, and then that, sort of, reflects back on us. What do we want? What are the goals that we want to have achieved? It is a complicated thing, because certainly AIs will, in some sense, run many aspects of the world. For many kinds of systems, there's no point in having people run them; they're going to be automated in some way or another. Saying it's an AI is really just a fancy way of saying it's going to be automated. Another question is, what are the overall principles those automated systems should follow? For example, one principle that we believe is important right now is the ‘be nice to humans’ principle. That seems like a good one, given that we're in charge right now; better to set things up so that it's like, “Be nice to humans.” But even defining what it means to be nice to humans is really complicated. I've been much involved in trying to use the Wolfram Language as a way of describing lots of computational things and an increasing number of things about the world. I also want it to be able to describe things like legal contracts and, sort of, desires that people have. Part of the purpose of that is to provide a language, understandable both to humans and to machines, that can say what it is we want to have happen, globally, with AIs. What principles, what general ethical principles and philosophical principles, should AIs operate under? We have Asimov's Laws of Robotics, which are a very simple version of that. I think what we're going to realize is that we need to define a constitution for the AIs. And there won't be just one, because there isn't just one set of people. Different people want different kinds of things.
And we get thrown into all kinds of political philosophy issues about, should you have an infinite number of countries, effectively, in the world, each with their own AI constitution? How should that work?
One of the fun things I was thinking about recently is this: in current democracies, one just has people vote on things; it’s like a multiple-choice answer. One could imagine a situation, and I take this mostly as a thought experiment because there are all kinds of practical issues with it, in a world where we're not just natural-language literate but also computer-language literate, and where we have languages, like the Wolfram Language, which can actually represent real things in the world, in which one would not just vote for A, B, or C, but effectively submit a program that represents what one wants to see happen in the world. And then the election consists of taking some millions of programs and saying, “OK, given these millions of programs, let's apply our AI constitution to figure out how we want the best things to happen in the world.” Of course, you're then thrown into precisely the issues of the moral philosophers and so on: what you then want to have happen, and whether you want the average happiness of the world to be higher, or whether you want the minimum happiness to be at least something, or whatever else. There will be increasing pressure on what the law-like things, which are really going to be effectively the programs for the AIs, should look like. What aspects of the human condition and human preferences should they reflect? How will that work across however many billions of people there are in the world? How does that work when, for example, a lot of the thinking in the world is not done in brains but is done in some more digital form? How does it work when the notion of a single person, which right now is a very clear notion, is no longer so clear, because more of the thinking is done in digital form? There’s a lot to say about this.
That is probably a great place to leave it. I want to thank you, Stephen. Needless to say, “mind-expanding” would be the most humble way to describe it. Thank you for taking the time to chat with us today.
Sure. Happy to.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.