Episode 6: A conversation with Nick Bostrom

In this episode, Byron and Nick talk about human consciousness, superintelligence, AGI, the future of jobs, and more.


Guest

Nick Bostrom is a Swedish philosopher at the University of Oxford. He is most noted for his work on existential risk, human enhancement ethics, superintelligence risk, and more. Bostrom is also the author of "Superintelligence: Paths, Dangers, Strategies," a New York Times bestseller.

Transcript

Byron Reese: This is Voices in AI, brought to you by Gigaom, I’m Byron Reese. Today our guest is Nick Bostrom. He’s a Swedish philosopher at the University of Oxford known for his work on superintelligence risk. He founded the Strategic Artificial Intelligence Research Center at Oxford, which he runs, and is currently the Founding Director of the Future of Humanity Institute at Oxford as well. He’s the author of over 200 publications, including Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Welcome to the show, Nick.

Nick Bostrom: Hey, thanks for having me.

So let’s jump right in. How do you see the future unfolding with regards to artificial intelligence?

I think the transition to the machine intelligence era will perhaps be the most important thing that has ever happened in all of human history when it unfolds. But there is considerable uncertainty as to the time scales.

Ultimately, I think we will have full human level general artificial intelligence, and shortly after that probably superintelligence. And this transition to the machine superintelligence era, I think has enormous potential to benefit humans in all areas. Health, entertainment, the economy, space colonization, you name it. But there might also be some risks, including existential risks associated with creating, bringing into the world, machines that are radically smarter than us.

I mean, it’s a pretty bold claim when you look at two facts. First, the state of the technology: I don’t have any indication that my smartphone is on a path to sapience. And second, the only human level artificial intelligence we know of is human intelligence. And that is something, coupled with consciousness and the human brain and the mind and all of that, that to say we don’t understand it is an exercise in understatement. So how do you take those two things — that we have no precedent for this and we have no real knowledge of how human intelligence works — and come to the conclusion that this is all but inevitable?

Well, it’s certainly the case that we don’t yet have human level general artificial intelligence, let alone superintelligence, and we probably won’t for a while. But ultimately it seems to be possible. I mean, we know from the existence of the human brain that there can exist systems that have at least human level intelligence, and it’s a finite biological system. Three pounds of sort of squishy matter inside our craniums can achieve this level of performance. There is no reason to think that’s anywhere close to the maximum. And we can see several different paths by which we could technologically, eventually, get to the point where we can build this in machine substrates.

So one would be indeed to reverse engineer the human brain, to figure out what architectures it uses, what learning algorithms, and then run that in a computer substrate. But it might well be that we will get there faster by adopting a purely synthetic approach.

There just seems to be no particular barrier along this path that it would be in principle impossible to overcome. It’s a difficult problem, but we’ve only been hacking away at it since- I mean, we’ve only really had computers since just before the middle of the last century. And then the field of AI is quite young, maybe since 1956 or so. And in these few decades we’ve already come a pretty long way, and if we continue in this way we will eventually, I think, figure out how to craft algorithms that have the same powerful learning and planning ability that makes us humans smart.

Well let’s dig on that for just one more minute and then let’s move on accepting that assumption. Where do you think human consciousness comes from?

The brain.

But specifically, what mechanism gives rise to it? What would even be a potential answer to that question?

Well, so… I tend towards a computationalist view here, which is that… My guess is that it’s the running of a certain type of computation that would generate consciousness in the sense of morally relevant subjective experience, and that in principle you could have consciousness implemented on structures built out of silicon atoms just as well as structures built out of carbon atoms. It’s not the material that is the crucial thing here, but the structure of the computation being implemented.

So that means, in principle, you could have machines being conscious. I do think, though, that the question of artificial intelligence, the intellectual performance of machines, is often best approached without also immediately introducing the question of consciousness. Even if you thought machines could not really be conscious, you could still ask whether they will be very intelligent. And even if they were only intelligent but not conscious, that still could be a technology with enormous impacts on the world.

So the last question I’ll ask you along those lines, and then I would love to just dive down into some specifics of how you see all of this unfolding, is you’re undoubtedly familiar with Searle’s thought experiment about the Chinese room. But I’ll say it real briefly for the audience, who may not be familiar with it. There exists, hypothetically, a person who’s in this enormously large room that’s full of an enormous number of these very special books. And the important thing to know about this man is he speaks no Chinese whatsoever, and yet people slide questions under the door to him written in Chinese. He picks them up, he looks at the first character, he goes and finds the book with that on the spine. He turns, finds the second character. He follows all the way through the message until he gets to a book that says write this down. He copies it. Again, he doesn’t know if he’s talking about cholera or coffee beans or what. And then he slides the answer back under the door. And somebody, a Chinese speaker, reads it and it’s just brilliant. I mean, it’s like a perfect answer.

And so the question is- The analogy obviously is that that room, that system, passes the Turing test splendidly. And yet, it does so without understanding anything about what it’s talking about. And that this lack of understanding, this fact that it cannot understand something, is a really concrete limit to what it is able to do, in the sense that it really can’t think and understand in the way we do. And that analogy is of course what a computer does.

And so, Searle uses it to conclude that a computer can never really be like a human because it doesn’t understand anything. How do you answer that?

Well, I’m not very convinced about it, that’s for sure. I mean, for a start, you need to think in this thought experiment not just about what the human inside this room can or cannot do, or understands or doesn’t understand, but you have to think about the system as a whole. So the room, plus all these books in the room, plus the human, as an entity, is able to map inputs to outputs in a way that appears quite smart from the outside. If anything has understanding here, it would presumably be the system. Just as with a computer program, it would be the entire thing, the computer and the program and its memory, that would achieve a certain level of performance, not a particular data box inside this device.

Right. The traditional answer to that though is, okay, the guy memorizes the content of every book. He’s out walking around, somebody hands him a note, and he writes a brilliant answer and hands it back. Again, he doesn’t understand it. But you can’t kind of go back to, “It’s the system.”

So then you have to think about, realistically, whether the function you want to capture is really one that would map all possible Chinese inputs to Chinese outputs. To learn that mapping by just having a big lookup table would be infeasible just in terms of the number of entries. It certainly wouldn’t fit into a human brain, or indeed into any physically confined system; there wouldn’t be enough atoms in the observable universe to implement it in that way.

And so it might well be that understanding includes not just the ability to map a certain set of inputs to outputs, but to do that in a certain way that involves data compression. So that to understand something might be to know what the relevant patterns are, the regularities, in a particular domain — maybe, you know, have some kind of mental model of that domain — and therefore achieve a compactification of this input/output mapping. And that allows you to generalize to things that were not explicitly listed in the initial set as well.

So one way of implementing this Chinese room argument, if we tried to do it through a lookup table: well, A, it would be impossible because there just isn’t enough memory and couldn’t be. And B, even if you somehow magically could have enough memory, maybe it still wouldn’t count as true understanding, because it lacks this compression, the extraction of regularities.
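
To make the lookup-table-versus-compression contrast concrete, here is a minimal, purely illustrative Python sketch. The toy domain (doubling integers) and every name in it are assumptions invented for illustration, not anything from the conversation: the lookup table can only reproduce the question/answer pairs it has explicitly stored, while a compressed rule captures the underlying regularity and so generalizes to inputs it has never seen.

```python
# Toy contrast between a lookup table and a "compressed" rule.
# The domain (doubling integers) is purely hypothetical.

# Lookup table: memorizes specific input -> output pairs.
lookup_table = {1: 2, 2: 4, 3: 6}

def answer_by_lookup(x):
    """Answers only inputs that were explicitly stored; no generalization."""
    return lookup_table.get(x)  # None for anything unseen

# Compressed rule: a short description of the regularity behind the pairs.
def answer_by_rule(x):
    """Generalizes to inputs never seen before."""
    return 2 * x

print(answer_by_lookup(2))    # 4    (stored)
print(answer_by_lookup(100))  # None (never stored; the table cannot help)
print(answer_by_rule(100))    # 200  (the rule generalizes)
```

For any rich domain, a table like this would need astronomically many entries (the point above about atoms in the observable universe), whereas the rule stays compact.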

So people who are concerned about a superintelligence broadly have three concerns. One is that it’s misused deliberately by humans. The second one is that it’s accidentally misused by humans, and the third one is that it somehow gets a will or volition of its own and has goals that are contrary to human goals. Are you concerned about all three of those, or any one in particular? Or how do you shake that out?

Yeah, I think there are challenges in each of these areas. I think that the one you listed last is, in a sense, the first one. That is, by the time we figure out how to make machines truly smart, we will need to have figured out ways to align them with human goals and intentions, so that we can get them to do what we want.

So right now you can define an objective function. In many cases it’s quite easy. If you want to train some agent to play chess, you can define what good performance is. You get a 1 if you win a game and a 0 if you lose a game, and half a point perhaps if you make a draw. So that’s an objective we can define. But in the real world, all things considered, we humans care about things like happiness and justice and beauty and pleasure. None of those things are very easy to sort of sit down and type out a definition of in C++ or Python. So you’d need to figure out a way to get potentially superintelligent agents to nevertheless serve as an extension of the human will, so that they would realize what your intentions are, and then be able to execute that faithfully.
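
As a minimal sketch of the kind of objective function Bostrom describes for chess (the function name and the way a game result is encoded here are assumptions made purely for illustration), in Python:

```python
def chess_objective(result):
    """Reward for a single game of chess, as described above:
    1 for a win, 0 for a loss, half a point for a draw."""
    rewards = {"win": 1.0, "loss": 0.0, "draw": 0.5}
    return rewards[result]

# An agent can be trained to maximize the average of this reward over
# many games. No comparably crisp function can simply be typed out for
# "happiness", "justice", or "beauty".
print(chess_objective("draw"))  # 0.5
```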

That’s a big technical research challenge, and there are now groups taking it up and pursuing that technical research. And assuming that we can solve that technical control problem, then we get the luxury of confronting these wider policy issues: who should decide what this AI is used for, what social purposes it should be used for, how we want this future world with superintelligence to look.

Now, ultimately, I think you need to be successful both on this narrow technical problem and on these wider policy problems to really get a good outcome. But they both seem important and challenging. You divided the policy side into two sub-problems, distinguishing between deliberate misuse and accidental misuse. I’m not sure precisely what you had in mind there, but it sounds like we want to make sure that neither of those happens.

Any existential threat to humanity kind of gets our attention. Is it your view that there’s a small chance, but because it’s such a big deal we really need to think about this? Or do you think there’s an incredibly large chance that that’s going to happen?

Somewhere in between. I think that there’s enough of a chance both that we will develop superintelligence, perhaps within the lifetime of people alive today, or some people alive today, and that things could go wrong. Enough of a chance of that happening that it is a very wise investment of some research funding and some research talent to have some people in the world starting to figure out the solution to this problem of scalable control, as it’s now starting to happen. And perhaps also to have some people thinking ahead about the policy questions. Like what kind of global governance system could really cope with a world where there are superintelligent entities?

To appreciate that this is a big, profound challenge: I think when you’re talking about general superintelligence you’re not just talking about advances in AI, you’re talking about advances in all technical fields. At that point, when you have AIs better able to do research in science and innovation than we humans can, you have a kind of telescoping of the future, so that all those other possible technologies you could imagine the human species developing in the fullness of time, if we had 40,000 years to work on it, maybe a cure for aging, or the ability to effectively colonize the galaxy, or to upload minds to computers, all these kinds of science fiction-like technologies that we know are possible given the laws of physics but are just very hard to develop, all of those might be developed quite soon after you have superintelligence conducting this development at digital time scales.

So you have, with this transition, potentially within short order, the arrival of something like technological maturity, where we have this whole suite of science fiction-like superpowers. And I think to construct a kind of governance system that works for that very different world will require some fundamental rethinking. And that’s also work that perhaps makes sense to start in advance.

And I think that the case for thinking that we should start that work in advance does not depend super sensitively on exactly how big you think the probability is that this will happen within a certain number of years. It seems that there’s enough probability that it sure makes sense, if nothing else as an insurance policy, for some humans to work on this.

Do you think we’re up to the challenge to rethink these fundamental structures of society? Do you have any precedent in human history for some equivalent thing being done?

Nothing very closely equivalent. You can still reach out and try to find some more distant analogies. Perhaps in certain respects the invention of nuclear weapons offers some parallels, where there was the realization, including among some of the nuclear physicists developing them, that it would really change the nature of international affairs. And people anticipated subsequent arms races and such, and there was some attempt to think ahead about how you could try to pursue non-proliferation.

Other than that, I don’t think that humanity has a very great track record of thinking ahead about where it wants to go, of anticipating problems and then taking proactive measures to avoid them. Most of the time we just stumble along, try different things, and gradually we learn. We learn that cars sometimes crash, so we invent seat belts and street lights. We figure out things as we go along. But the problem with existential risks is that you really only get one chance, so you’ve got to get it right the first time. And that might require seeing the problem in advance and avoiding it. And that’s what we are not very good at, as a civilization, and hence the need for an extra effort there.

Do you think, in terms of the pathway to building an AGI, that we’re on an evolutionary path already? Is it like, “Yeah, we kind of know. We have the basic technologies and all of that. What we just need is faster machines, better algorithms, more data, and all these other things, and that will eventually give us an AGI.” Or do you think it’s going to require something that we don’t even understand right now, like a quantum computer, and how that might lead to one or not? Are we on the path or not?

I don’t think it will require a quantum computer. Maybe that would help, but I don’t think it’s necessary. I mean, if you said faster compute, more data, and better algorithms, I think in combination that would be sufficient. The question, I guess, is just how much better the algorithms need to be.

So there’s great excitement, of course, about the progress that’s been made in recent years in machine learning, with deep learning and reinforcement learning. I think the jury’s still out on whether we basically have most of the fundamental building blocks, and maybe just need some clever ways of combining what we have and building things on top of them, or whether there will have to be other deep conceptual breakthroughs. That is hard to anticipate.

Certainly there will have to be further dramatic algorithmic improvements to get all the way there. But it might be that the further improvements are more, sort of, ways of using some of the basic building blocks we have and putting them together in interesting ways. Maybe building, on top of deep learning structures, ways to better learn and extract concepts and combine them in reasoning, and then use that to learn language. Ways to do better hierarchical planning. But that it still will use some of the building blocks we already have, as opposed to something that kind of sweeps the table clean and starts over with a radically new approach. I think at least there’s some credence that we’re on the right path with these current advances that we’re seeing.

To the best of my knowledge, as I try to figure out when different people think we’re going to get an AGI, looking at people who are in the industry and have written some code or something, I get a range between five and five hundred years, which I think is a pretty telling fact on its own. But you undoubtedly know that there are people in the industry who don’t think that this is a particularly useful use of thought, resources, and cycles right now. Where, broadly speaking, do you think people who dismiss all of this err? What are they missing?

Well, they might err by being overconfident that their impression is correct. So it depends a little bit on what precisely it is that they believe. If they believe, say, that there is basically zero chance that we will have this in the next several hundred years, then I think they’re just overconfident. Humans don’t have a great track record of predicting which technological advances are or are not possible over century time scales, and so it would be radical overconfidence to do that. Also, it would be in disagreement with the median opinion among their expert peers.

We did some opinion surveys of world-leading machine learning experts a couple of years ago. And one of the questions we asked was, by which year do you think there is a 50% probability that we will have high level or human level machine intelligence, defined as AI that can do everything that humans can do? And the median answer to that was 2040 or 2050, depending on which group of experts we asked. Now, these are subjective opinions of course.

There’s no sort of rigorous dataset from which you can prove statistically that that’s the correct estimate. But it does show, I think, that the notion that this could happen in this century, indeed by mid-century and indeed in the lifetime of a lot of people listening to this program, is not some outlandish opinion that nobody who actually knows this stuff believes. But on the contrary, it’s sort of the median opinion among leading researchers. But of course there’s great uncertainty on this. It could take a lot longer. It could also happen sooner. And we just need to learn to think in terms of probability distributions, credence distributions over a wide range of possible arrival dates.
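
To illustrate what thinking in terms of a credence distribution over possible arrival dates might look like in practice, here is a small Python sketch. The numbers are invented placeholders for one person's subjective credences, not the survey results mentioned above.

```python
# Hypothetical subjective credences that human level machine intelligence
# arrives by the end of a given decade (illustrative placeholders only,
# not data from the expert survey discussed above).
credence_by_decade = {
    2030: 0.05,
    2040: 0.15,
    2050: 0.20,
    2060: 0.15,
    2070: 0.10,
    2100: 0.15,  # later this century
    3000: 0.20,  # much later, if ever within this horizon
}

def cumulative_credence(year):
    """Total credence that arrival happens by the given year."""
    return sum(p for decade, p in credence_by_decade.items() if decade <= year)

print(cumulative_credence(2050))  # 0.40 under these placeholder numbers
```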

Even absent AGI, even in the intermediate time before we have one, what’s your prognosis about the number one fear people have about artificial intelligence, which is its impact on jobs and employment?

The goal is full unemployment. Ultimately what you want are systems that can do everything that humans can do so that we don’t have to do it.

I think that will create two challenges. One is the economic challenge of how people will have a livelihood. Right now a lot of people are dependent on wage income, and so they would need some other source of income instead, either from capital or from social redistribution. And fortunately, I think that in that kind of scenario, where AI really achieves intelligence, it’s going to create a tremendous economic boom, and so the overall size of the pie would grow enormously. So it should be relatively feasible, given the political will, to make sure that everybody could have high levels of prosperity, even by redistributing just a small part of that, through universal basic income or other means.

Now, that then leaves the second problem, which is meaning. At the moment a lot of people think that their self-worth, their dignity, is tied up with their roles as economically productive citizens, as breadwinners to the family. That would have to change. The education system would have to change to train people to find meaning in leisure, to develop hobbies and interests. The art of conversation, interests in music and hobbies, in all kinds of things. And to learn to do things for their own sake, because they’re valuable and meaningful, rather than as a means to getting money to do something else.

And I think that is definitely possible. There are groups who have lived in that condition historically. Aristocrats in the UK, for example, thought it was demeaning to have to work for a living. It was almost like prostituting yourself, like you had to sell your labor. The high status thing was not to have to work. Today we’re in this weird situation where the higher status people are the ones who work the most. It’s like the entrepreneurs and CEOs who work eighty hour weeks. But that could change, but it would require a cultural transition.

Finally, your book Superintelligence is arguably one of the most influential books on the topic ever written. And you’ve taken it upon yourself to kind of sound this alarm and say we need to think seriously about this, and we need to put in safeguards and all of that. Can you close with maybe a path, a way we get through it and things work out really well for humanity, and we live happily ever after?

Yeah. Since the book came out, in the last couple of years there has actually been a big shift, both in the global conversation around these topics and in technical research now starting to be carried out on this alignment problem, on the problem of finding scalable control methods that could be used for very, very advanced artificial agents. And so there are a number of research groups taking that up. We are doing some of the technical research here; there are groups in Berkeley; and we are having regular research seminars with DeepMind. So that’s encouraging.

And hopefully the problem will turn out not to be too hard. In that case, I think what this AI transition does is really unlock a whole new level. It kind of enables human civilization to transition from the current human condition to some very different condition. The condition of technological maturity, maybe a post-human condition where our descendants can colonize the universe, build millions of flourishing civilizations of superhuman minds that live for billions of years in states of bliss and ecstasy, exploring spaces of experience, modes of being and interaction with one another and creative activities that are maybe literally beyond the human brain’s ability to imagine.

I just think that in this vast space of possibilities, there are some modes of being that would be extremely valuable. It’s like a giant stadium or a cathedral where we are like a little child crouching in one corner, and that corner is the space of possible modes of being that are accessible to a biological human organism given our current conditions. It’s just a small, small fraction of all the possibilities that exist, possibilities that are currently closed to us, but that we could start to unlock once we figure out how to create these artificial intellects and artificial minds that could then help us.

And so with enough wisdom and a little bit of luck, I think the future could be wonderful, literally beyond our ability to dream.

All right. Well, you keep working on making it that way. I thank you so much for your time, Nick, and any time you want to come back and continue the conversation you’re more than welcome.

Super. Thank you.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.