Episode 6: A conversation with Nick Bostrom

In this episode, Byron and Nick talk about human consciousness, superintelligence, AGI, the future of jobs, and more.



Nick Bostrom is a Swedish philosopher at the University of Oxford. He is most noted for his work on existential risk, human enhancement ethics, and superintelligence risk. Bostrom is also the author of "Superintelligence: Paths, Dangers, Strategies," a New York Times bestseller.


Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Nick Bostrom. He’s a Swedish philosopher at the University of Oxford, known for his work on superintelligence risk. He founded the Strategic Artificial Intelligence Research Center at Oxford, which he runs, and is also the Founding Director of the Future of Humanity Institute at Oxford. He’s the author of over two hundred publications, including Superintelligence: Paths, Dangers, Strategies, a New York Times bestseller. Welcome to the show, Nick.

Nick Bostrom: Hey, thanks for having me.

So let’s jump right in. How do you see the future unfolding with regards to artificial intelligence?  

I think the transition to the machine intelligence era, when it unfolds, will be perhaps the most important thing that has ever happened in all of human history. But there is considerable uncertainty as to the time scales.

Ultimately, I think we will have full human-level general artificial intelligence, and shortly after that probably superintelligence. And this transition to the machine superintelligence era, I think has enormous potential to benefit humans in all areas. Health, entertainment, the economy, space colonization, you name it. But there might also be some risks, including existential risks associated with creating, bringing into the world, machines that are radically smarter than us.


I mean, it’s a pretty bold claim when you look at two facts. First, the state of the technology: I don’t have any indication that my smartphone is on a path to sapience. And second, the only human-level intelligence we know of is human intelligence. And that is something, coupled with consciousness and the human brain and the mind and all of that, that… to say we don’t understand it is an exercise in understatement. So how do you take those two things—that we have no precedent for this, and we have no real knowledge of how human intelligence works—and come to the conclusion that this is all but inevitable?