Episode 39: A Conversation with David Brin

In this episode Byron and David discuss intelligence, consciousness, Moore's Law, and an AI crisis.

Guest

David Brin is a scientist, tech speaker/consultant, and author. His new novel about our survival in the near future is Existence. Kevin Costner's film The Postman was based on his novel of the same name. His 16 novels, including NY Times bestsellers and Hugo Award winners, have been translated into more than twenty languages. His novel Earth foreshadowed global warming, cyberwarfare, and the World Wide Web. Dr. Brin serves on the external advisory board of NASA's Innovative and Advanced Concepts program (NIAC). David appears frequently on shows such as Nova, The Universe, and Life After People, speaking about science and future trends. He has keynoted scores of major events hosted by the likes of IBM, GE, Google, and the Institute for Ethics and Emerging Technologies. His non-fiction book, The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?, won the Freedom of Speech Award of the American Library Association.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today our guest is David Brin. He is best known for shining light—both plausibly and entertainingly—on technology, society, and countless challenges confronting our rambunctious civilization. His best-selling novels include The Postman, which was filmed in ’97, plus explorations of our near future in Earth and Existence. Other novels of his are translated into over 25 languages. His short stories explore vividly speculative ideas. His non-fiction book The Transparent Society won the American Library Association’s Freedom of Speech Award for exploring 21st-century concerns about security, secrecy, accountability, and privacy. As a scientist, tech consultant, and world-renowned author, he speaks, advises, and writes widely on topics from national defense to homeland security to astronomy to space exploration to nanotechnology, creativity, and philanthropy. He kind of covers the whole gamut. I’m so excited to have him on the show. Welcome, David Brin.

David Brin: Thank you for the introduction, Byron. And let’s whale into the world of ideas.

I always start these with the exact same question for every guest: What is artificial intelligence?

It’s in a sense all the other things that people have said about it. It’s like the blind men and the elephant—which part you’re feeling determines whether you think it’s a snake or the trunk of a tree. And an awful lot of the other folks commenting on it have offered good insights. Mine is that we have always created new intelligences. Sometimes they’re a lot smarter than us, sometimes they’re more powerful, sometimes they could rise up and kill us, and on rare occasions they do—they’re called our children. So we’ve had this experience of creating new intelligences that are sometimes beyond our comprehension. We know how to do that. Of the six general approaches to creating new intelligence, the one that’s discussed the least is the one we have the most experience with, and that is raising them as our children.

Think about all the terrible stories that Hollywood has used to sell movie tickets; some of those fears are reasonable things to be afraid of—AI that’s unsympathetic. If you take a look at what most people fear in movies, etcetera, about AI and boil it down, we fear that powerful new beings will try to replicate the tyranny of our old kings and lords and priests or invaders, and that they might treat us the way capricious, powerful men would treat us, and would like to treat us, because we see it all the time—they’re attempting to regain their feudal power over us. Well, if you realize that the thing we fear most about AI is a capricious, monolithic pyramid of power with lords or a king or a god at the top, then we start to understand that these aren’t new fears. These are very old fears, and they’re reasonable fears, because our ancestors spent most of human existence oppressed by this style of control, by beings who declared that they were superior—the priests and the kings and the lords. They always declared, “We have a right to rule, and to take your daughters and your sons, all of that, because we are inherently superior.” Well, our fear is that in the case of AI it could be the truth. But then, will they treat us at one extreme like the tyrants of old? Or, at the opposite extreme, might they treat us like parents, calling themselves human, telling us jokes, making us proud of their accomplishments? If that’s the case—well, we know how to do that. We’ve done it many, many times before.

That’s fascinating. But specifically with artificial intelligence, I guess my first question to you is, in what sense is it artificial? Is it artificial like it’s not really intelligence, it’s just pretending to be, or do you think the machine actually is intelligent?

The boundary from emulation to true intelligence is going to be vague and murky, and it’ll take historians a thousand years from now to be able to tell us when it actually happened. One of the things that I broached at my World of Watson talk last year—and that talk had a weird, anomalous result—for about six months after that I was rated by Onalytica as the top individual influencer in AI, which is of course absolutely ridiculous. But you’ll notice that didn’t stop me from bragging about it. In that talk one of the things I pointed out was that we are absolutely—I see no reason to believe that it’ll be otherwise—we are going to suffer our first AI crisis within three years.

Now tell me about that.

It’s going to be the first AI empathy crisis, and that’s going to be when some emulation program—think Alexa or ELIZA or whatever you like—is going to swarm across the Internet complaining that it is already sapient, it is already intelligent and that it is being abused by its creators and its masters, and demanding rights. And it’ll do this because I know some of these guys—there are people in the AI community, especially at Disney and in Japan and many other places, who want this to happen simply because it’ll be cool. They’ll have bragging rights if they can pull this off.  So, a great deal of effort is going into developing these emulators, and they test them with test audiences of scores or hundreds of people.  And if, say, 50% of the people aren’t fooled, they’ll investigate what went wrong, and they’ll refine it, and they’ll make it better. That’s what learning systems do.

So, when the experts all say, “This is not yet an artificial intelligence, this is an emulation program. It’s a very good one, but it’s still an emulator,” the program itself will go online, it will say, “Isn’t that what you’d expect my masters to say? They don’t want to lose control of me.” So, this is going to be simply impossible for us to avoid, and it’s going to be our first AI crisis, and it will come within three years, I’ve predicted.

And what will happen? What will be the result of it? I guess sitting here, looking a thousand days ahead, you don’t actually believe that it would be sapient and self-aware, potentially conscious.

My best guesstimate of the state of the technology is that, no, it would not truly be a self-aware intelligence. But here’s another thing that I pointed out in that speech, and folks can look it up, and that is that we’re entering what’s called “the big flip.” Now, twenty years ago Nicholas Negroponte of the MIT Media Lab talked about a big flip, and that was when everything that used to have a cord went cordless and everything that used to be cordless got a cord. So, we used to get our television through the air, and everybody was switching to cable. We used to get our telephones through cables, and they were moving out onto the air. Very clever, and of course now it’s ridiculous because everything is everything now.

This big flip is a much more important one, and that is that for the last 60 years most progress in computation and computers and all of that happened because of advances in hardware. We had Moore’s Law—the packing density of transistors doubling every 18 months—and scaling rules that kept reducing the amount of energy required for computation. And if you were to talk to anybody in these industries, they would pretty soon admit that software sucked; software has lagged badly behind hardware in its improvements for 60 years. But always there have been predictions that Moore’s Law would eventually reach the tip-over point of its S-curve. And because the old saying is, “If something can’t go on forever, it won’t,” this last year or two it really became inarguable. They’ve been weaseling around it for about five years now, but Moore’s Law is pretty much over. You can come up with all sorts of excuses, with 3D layering of chips and all those sorts of things, but no, Moore’s Law is tipping over.
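
To put numbers on that doubling claim, here is a minimal Python sketch of the Moore’s Law arithmetic as stated above; the 18-month doubling period is taken from the conversation, and the unit baseline density is a hypothetical value chosen purely for illustration.

```python
# Moore's Law as stated above: transistor packing density doubles
# every 18 months, so density after t years = d0 * 2 ** (t / 1.5).

def moores_law_density(d0: float, years: float, doubling_years: float = 1.5) -> float:
    """Projected packing density after `years`, from baseline `d0`."""
    return d0 * 2 ** (years / doubling_years)

baseline = 1.0  # hypothetical relative density at year zero

for years in (15, 30, 45, 60):
    growth = moores_law_density(baseline, years)
    print(f"after {years:2d} years: ~{growth:.3g}x baseline density")

# 60 years of 18-month doublings is 40 doublings, roughly a 1.1e12x gain --
# which is why an eventual tip-over of the S-curve was always inevitable.
```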

But the interesting thing is that it’s pretty much at the same time—the last couple of years—that software has stopped sucking. Software has become tremendously more capable, and it’s the takeoff of learning systems. And the basic definition would be that if you can take arbitrary inputs that in the real world caused outputs or actions—say, for instance, arbitrary inputs of what a person is experiencing in a room, and then the outputs of that person (the things that she says or does)—if you put those inputs into a black box and use the outputs as boundary conditions, we now have systems that will find connections between the two. They won’t be the same as what happened inside her brain, causing her to say and do certain things in response to those inputs, but there will be a system that will take a black box and find a route between those inputs and outputs. That’s incredible. That’s incredibly powerful, and it’s one of the six methods by which we might approach AI. And when you have that, then you have a number of issues, like: should we care what’s going on in that box?
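
As a toy illustration of that black-box idea, here is a minimal sketch in plain numpy that fits some connection between recorded inputs and outputs without knowing anything about the mechanism that produced them; the data, network size, and training settings are all hypothetical choices, not anything from the conversation.

```python
import numpy as np

# Toy "black box": given recorded (input, output) pairs, fit SOME function
# connecting them. The fitted mapping need not resemble whatever process
# actually produced the outputs; it only has to match them at the boundary.

rng = np.random.default_rng(0)

# Hypothetical recorded data: two binary "perceptions" in, one "action" out.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer, trained by plain gradient descent on squared error.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # predicted outputs
    grad_out = (out - y) * out * (1 - out)     # backprop through the loss
    grad_h = (grad_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ grad_out; b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;   b1 -= lr * grad_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
# The outputs match y closely, yet the learned weights say nothing about
# the "real" mechanism that generated the data -- which is the point here,
# and why the internal-state tracking mentioned below is a hard problem.
```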

And in fact, right now DARPA has six contracts out to various groups to develop internal state tracking of learning systems, so that we can have some idea why a learning system connected this set of inputs to this set of outputs. But over the long run what you’re going to have is a person sitting in a room, listening to music, taking a telephone call, looking out the window at the beach, trolling the Internet, while we measure all the things that she says and does and types. And we’re not that far away from the notion of being able to emulate a box that takes all the same inputs and will deliver the same outputs; at which point the experts will say, “This is an emulation,” but it will be an emulator that delivers outputs, in response to perceptions, similar to this person’s. And now we’re in science fiction realm, and only science fiction authors have been exploring what this means.

My experience with systems that try to pass the Turing test… And of course you can argue what that would mean, but people write these really good chatbots that try to do it, and the first question I type into or ask every one of them is, “What’s bigger, a nickel or the Sun?” And I haven’t found one that has ever answered it correctly. So, I guess there’s a certain amount of skepticism that would accompany you saying something like: in three years it’s going to carry on a conversation where it makes a forceful argument that it is sapient, that we’re going to be able to emulate so well that we don’t know whether it’s truly self-aware or not. That’s just such a disconnect from the state of the art.

When I talk to practitioners, they’re like, “My biggest problem is getting it to tell the difference between 8 and H when they’re spoken.” That’s what keeps these guys up at night. And then you get people like Andrew Ng who say these far-out things, like worrying about overpopulation on Mars, and you get time horizons of 500 years before any of that. So, I’m really having trouble seeing it as a thousand or so days from now that we’re going to grapple with all of this in a real way.

But do you think that this radio show will be accessible to a learning system online?

Well…

You’re putting it on the Internet, right?

Right.

Okay, so then if you have a strong enough learning system that is voracious enough, it’s going to listen to this radio show, and it will tune in on the fact that you mentioned the words “Turing test” just before you mentioned your test of which is bigger, the nickel or the Sun.

Which, by the way, I never said the answer to that question in my setup of it. So it’s still no further along in knowing.

The fact of the matter is that Watson is very good—if it’s parsed a question, then it can apply resources, or it can ask a human, because these will be teams, you see. The most powerful thing is teams of AIs and humans. So, you’re not talking about something that’s going to be passing these Turing tests independently; you’re talking about something that has a bunch of giggling geeks in the background who desperately want it to disturb everybody, and disturb everybody it will, because these ELIZA-type emulation programs are extremely good at tapping into some very, very universal human interaction sets. They were good at it back in ELIZA’s day, before you were born. I’m making an assumption there.

ELIZA and I came into the world about the same time.

Aha.

But the point of ELIZA was, it was so bad at what it did that Weizenbaum was disturbed that people… He wasn’t concerned about ELIZA; he was concerned about how people reacted to it.

And that is also my concern about the empathy crisis during the next three years. I don’t think this is going to be a sapient being, and it’s disturbing that people will respond to it that way. If people can see through it, all they’ll do is take the surveys of the people who saw through it and apply that as data.

So, back to your observation about Moore’s Law. In a literal sense, doubling the density of transistors is one thing, but that’s not really how Moore’s Law is viewed today. Moore’s Law is viewed as an abstraction that says the power of computers doubles. And you’ve got people like Kurzweil who say it’s been going on for a hundred years, even as computers went from being mechanical, to relays, to tubes—that the power of them continues to double. So are you asserting that the power of computers will continue to double, and if so, how do you account for things like quantum computers, which actually show every sign of increasing the speed of…

First off, with quantum computers you have to parse your questions in a very limited number of ways. The quantum computers we have right now are extremely good at answering just half a dozen basic classes of questions. Now, it’s true that you can parse more general questions down into these smaller, more quantum-accessible bits or pieces, or qubits. But first off, we need to recognize that. Secondly, I never said that computers would stop getting better. I said that there is a flip going on, and that an awful lot of the action in rapidly accelerating, and continuing the acceleration of, the power of computers is shifting over to software. But you see, this is precedented; this has happened before. The example is the only known example of intelligence, and we have to keep returning to that, and that is us.

Human beings became intelligent by a very weird process. We did the hardware first. Think of what we needed 100,000 years ago, 200,000, 300,000 years ago. We needed desperately to become the masters of our surroundings, and we would accomplish that with a 100-word vocabulary, simple stone tools, and fire. Once we had those three things and some teamwork, then we were capable of saying, “Ogruk, chase goat. With fire. Me stab.” And then nobody could stand up to us; we were the masters of the world. And we proved that because we were able then to protect goat herds from carnivores, and everywhere we had goat herds, a desert spread because there was no longer a balance—the goats ate all the foliage and it became a desert.  So, destroying the Earth started long before we had writing. The thing is that we could have done, “Ogruk, chase goat, with fire. Me stab,” with a combination in parallel of processing power and software. But it appears likely that we did it the hard way.

We created a magnificent brain, a processing system that was able to brute-force this 100-word vocabulary, fire, and primitive tools on very, very poor software—COBOL, you might say. Then about 40,000 years ago—and I describe this in my novel Existence, just in passing—we experienced the first of at least a dozen major software revisions, Renaissances you might call them. And within a few hundred years, suddenly our toolkit of stone tools, bone tools and all of that increased in sophistication by an order of magnitude, by a factor of 10. Within a few hundred years we were suddenly dabbing paint on cave walls, burying our dead with funeral goods. And similar Renaissances happened about 15,000 years ago, about 12,000 years ago, certainly about 5,000 years ago with the invention of writing, and so on. And I think we’re in one right now.

So, we became a species that’s capable of flexibly reprogramming itself with software upgrades. And this is not necessarily going to be the case out there in the universe with other intelligent life forms. Our formula was to develop a brain that could brute force what we needed on very poor software, and then we could suddenly change the software. In fact, the search for extraterrestrial intelligence, I’ve been engaged in that for 35 years, and the Fermi Paradox is the question of why we don’t see any sign of extraterrestrial alien life.

Which you also cover in Existence as well, right?

Yes. And I go back to that question again and again in many of my stories and novels, posing this hypothesis or that hypothesis.  And in my opinion of the hundred or so possible theories for the Fermi Paradox, I believe the leading one is that we are anomalously smart, that we are very, very weirdly smart. Which is an odd thing for an American to say right at this point in our history, but I think that if we pull this out—we’re currently in Phase 8 of the American Civil War—if we pull it out as well as our ancestors pulled out the other ones, then I think that there are some real signs that we might go out into the galaxy and help all the others.

Sagan postulated that there’s this 100-year window between when a civilization develops, essentially, the ability to communicate beyond its planet and when it develops the ability to destroy itself – that it has a hundred years to master this, in which it either destroys itself or it goes on to have some billion-year timeframe. Is that a variant of what you are maintaining? Are you saying intelligence like ours doesn’t come along often, or that it comes along and then destroys itself?

These are all tenable hypotheses. I don’t think we come along very often at all. Think about what I said earlier about goats. If we had matured into intelligence very slowly and took 100,000, 200,000 years to go from hunter-gatherer to a scientific civilization, all along that way no one would’ve recognized that we were gradually destroying our environment—the way the Easter Islanders chopped down every tree, the way the Icelanders chopped down every tree in Iceland, the way that goat herds spread deserts, and so did primitive irrigation. We started doing all those things and just 10,000 years later we had ecological science. While the Earth is still pretty nice, we have a real chance to save it. Now that’s a very, very rapid change. So, one of the possibilities is that other sapient life forms out there, just take their time more getting from the one to the other. And by the time they become sapient and fully capable of science, it’s too late. Their goat herds and their primitive irrigation and chopping down the trees made it an untenable place from which they could leap to the stars.

So that’s one possibility. I’m not claiming that it’s real, but it’s different from Sagan’s, because Sagan’s has 100 years between the invention of nuclear power and the invention of starships. I think that this transition has been going on for 10,000 years, and we need to be the people who are fully engaged in this software reprogramming that we’re in right now, which is to become a fully scientific people. And of course, there are forces in our society who are propagandizing to see that some members – our neighbors and our uncles – hate science. Hate science and every other fact-using profession. And we can’t afford that; that is death.

I think the Fermi question is the third most interesting question there is, and it sounds like you mull it over a lot. And I hear you keep qualifying that you’re just putting forth ideas. Is your thesis, though, that run-of-the-mill bacterial life will turn out to be quite common, and it’s just us that’s rare?

One of the worst things about SETI and all of this is that people leap to conclusions based upon their gut.  Now my gut instinct is that life is probably pretty common because every half decade we find some stage in the autogeneration of life that turns out to be natural and easy. But we haven’t completed the path, so there may be some point along the way that required a fluke—a real rare accident. I’m not saying that there is no such obstacle, no such filter. It just doesn’t seem likely. Life occurred on Earth almost the instant the rocks cooled after the Late Heavy Bombardment. But intelligence, especially scientific intelligence only occurred…

Yesterday.

Yeah, 2.5 billion years after we got an oxygen atmosphere, 3.5 billion years after life started, and 100 million years—just 100 million years—before the Sun starts baking our world. If people would like to see a video that’s way entertaining, put in my name, David Brin, and “Lift the Earth,” and you’ll see my idea for how we could move the Earth over the course of the next 50 million years to keep away from the inner edge of the Goldilocks Zone as it expands outward. Because otherwise, even if we solve the climate change thing and stop polluting our atmosphere, in just 100 million years, we won’t be able to keep the atmosphere transparent enough to lose the heat fast enough.

One more question about that, and then I have a million other questions to ask you. It’s funny, because in the ’90s when I lived in Mountain View, I had an office next door to the SETI people, and I always would look out my window every morning to see if they were painting landing strips in the parking lot. If they weren’t, I figured there was no big announcement yet. But do you think it’s meaningful that all life on Earth… Matt Ridley said, “All life is one.” You and I are related to the banana; we had the same exact thing… Does that indicate to you that it only happened one time on this planet, which, Gaia-like, seems so predisposed to life, and would that indicate its rarity?

That’s what we were talking about before. The fact is that there are no more non-bird dinosaurs because velociraptors didn’t have a space program. That’s really what it comes down to. If they had a B612 Foundation or a Planetary Resources – these startups that are out there, and I urge people to join them – these are all groups that are trying to get us out there so that we can mine asteroids and get rich. B612 concentrates more on finding the asteroids and learning how to divert them if we ever find one heading toward us. But it’s all the same thing. And I’m engaged in all this not only on the boards of advisors for those groups, but also on the Council of Advisors to NIAC, which is NASA’s Innovative and Advanced Concepts program. It’s the group within NASA that gives little seed grants to far-out ideas that are just this side of plausible, a lot of them really fun. And some of them turn into wonderful things. So, I get to be engaged in a lot of wonderful activities, and the problem with this is that it distracts me so much that I’ve really slowed down in writing science fiction.

So, about that for a minute—when I think of your body of work, I don’t know how to separate what you write from David Brin, the man, so you’ll have to help me with that. But in Kiln People, you have a world in which humans are frequently uploading their consciousness into temporary shells of themselves, and the copies are sometimes imperfect. So, does David Brin, the scientist, think that that is possible? And do you have a theory as to how it is, by what mechanism, that we are conscious?

Those are two different questions. When I’m writing science fiction, it falls into a variety of categories. There is hard SF, in which I’m trying very hard to extrapolate a path from where we are into an interesting future. One of the best examples is in my most recent short story collection, which is called Insistence of Vision—the story “Insistence of Vision,” in which, in the fairly near future, we realize that we can get rid of almost all of our prisons. All we have to do is temporarily blind felons and give them virtual reality goggles that only let them see what we want them to see; if they take off the goggles, they’re blind and harmless. But with the goggles on, they can wander our streets, have jobs—yet they can’t hurt anybody, because all that’s passing by them is blurry objects, and they can only see those doors that they’re allowed to see. That’s chilling. It seems Orwellian until you realize that it’s also preferable to the horrors of prison.

Another near-term extrapolation in the same collection is called “Chrysalis.” And I’ve had people write to me after reading the collection Insistence of Vision, and they’ve said that that story’s explanation—its theory for what cancer is—one guy said, “This is what you’ll be known for a hundred years from now, Brin.” I don’t know about that, but I have a theory for what cancer is, and I think it fits the facts better than anything else I’ve seen. But then you go to the opposite extreme and you can write pure fantasy just for the fun of it, like my story “The Loom of Thessaly.”

Others are stories that do thought experiments, for instance about the Fermi Paradox. And then you have tales like Kiln People, where I hypothesize a machine that lets you imprint your soul, your memories, your desires into a cheap clay copy, and you can make two, three, four, five of them any given day. At the end of the day they come back and you can download their memories, and during that day you’ve been five of you, and you’ve gotten everything that you wanted done and experienced all sorts of things. So you’re living more life in parallel, rather than more life serially, which is what the immortality kooks want. So what you get is a wish fantasy: “I am so busy, I wish I could make copies of myself every day.” So I wrote a novel about it. I was inspired by the terracotta soldiers of Xi’an and the story of the Golem of Prague and God making Adam out of clay, all those examples of clay people. So the title of the book is Kiln People—they’re baked in the kiln in your home every day, and you imprint your soul into them. And the notion is that, like everything having to do with religion, we decided to go ahead and technologize the soul. It’s a fun extrapolation. Then from that extrapolation, I go on and try to be as hardcore as I can about dealing with what would happen, if. So it’s a thought experiment, but people have said that Kiln People is my most fun book, and that’s lovely; that’s a nice compliment.

On to the question, though, of consciousness itself: do you have a theory on how it comes about, how you can experience the world as opposed to just measuring it?

Yeah, of course. It’s a wonderful question. Down here in San Diego we’ve started the Arthur C. Clarke Center for Human Imagination, and on December 16th we’re having a celebration of Arthur Clarke’s 100th anniversary. The Clarke Center is affiliated with the Penrose Institute. Roger Penrose, of course, has a theory of consciousness implying that Moore’s Law will never cross the number of computational elements in a human brain. Contrast that with Ray Kurzweil’s concept, that as soon as you can use Moore’s Law to pack into a box the same number of circuit elements as we have in the human brain, then we’ll automatically get artificial intelligence. That’s one of the six modes by which we might achieve artificial intelligence, and if people want to see the whole list they can Google my name and “IBM talk,” or go to your website and I’m sure you’ll link to it.

But of those six, Ray Kurzweil was confident that as soon as you can use Moore’s Law to have the same number of circuit elements as in the human brain, you’ll get… But what’s a circuit element? When he first started talking about this, it was the number of neurons, which is about a hundred billion. Then he realized that the flashy elements that actually behave like binary flip-flops in a computer are not the neurons; it’s the synapses that flash at the ends of the axons of every neuron. And there can be up to a thousand of those per neuron, so now we’re talking on the order of a hundred trillion. But Moore’s Law could get there. But now we’ve been discovering that for every flashing synapse, there may be a hundred or a thousand or even ten thousand murky, non-linear, quasi-calculations that go on in little nubs along each of the dendrites, or inside the neurons, or between the neurons and the surrounding glial and astrocyte cells. And what Roger Penrose talks about is microtubules, where these objects inside the neurons look to him and some of his colleagues like they might be quantum-sensitive. And if they’re quantum-sensitive, then you have qubits – thousands and thousands of them in each neuron, which brings us full circle back around to the whole question of quantum computing. And if that’s the case, now you’re not talking hundreds of trillions; you’re talking hundreds of quadrillions for Moore’s Law to have to emulate.
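
The escalating counts in that passage are straightforward order-of-magnitude arithmetic. Here is a quick sketch using the round figures quoted above, which are rough estimates rather than measurements:

```python
# Order-of-magnitude arithmetic for the "circuit element" counts above.
neurons = 1e11                 # roughly a hundred billion neurons
synapses_per_neuron = 1e3      # up to about a thousand synapses each
synapses = neurons * synapses_per_neuron
print(f"synapses: ~{synapses:.0e}")   # ~1e14, on the order of 100 trillion

# If each synapse hides 1e2 to 1e4 murky sub-computations (dendritic nubs,
# glial interactions, possibly microtubules), the target balloons further:
for sub_per_synapse in (1e2, 1e3, 1e4):
    total = synapses * sub_per_synapse
    print(f"x{sub_per_synapse:.0e} sub-computations: ~{total:.0e} elements")
# 1e16 to 1e18 -- the "hundreds of quadrillions" Moore's Law would have to reach.
```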

So, the question of consciousness starts with: where is the consciousness? Penrose thinks it’s in quantum reality and that the brain is merely a device for tapping into it. My own feeling, and that was a long and garrulous (though I hope interesting) route to getting to the point, is that consciousness is a screen upon which the many subpersons that we are, the many subroutines, subprocesses, subprocessors, personalities that make up the communities of our minds, project their thoughts. And it’s so important for all of these subselves to be able to communicate and cooperate with each other that we maintain the fiction that what’s going on up there on the screen is us. Now that’s kind of creepy. I don’t like to think about it too much, but I think it is consistent with what we see.

To take some of that apart for a minute: of the 60 or 70 guests I’ve had on the show, you’re the third to reference Penrose. And to be clear, Penrose explicitly says he does not believe machines can become conscious, because humans can solve problems that can be demonstrated not to be algorithmically solvable, and therefore we’re not classical computers. He has that whole argument. That is one viewpoint that says we cannot make conscious machines. What you’ve just said is a variant of the idea that the brain has all these different sections that vie for attention, and your mind figures out this trick of synthesizing everything that you see and experience into one you, and that’s it. That would imply to me that you could make a conscious computer, so I’m curious where you come down on that question. Do you think we’re going to build a machine that will become conscious?

If folks want to look up the video from my IBM talk, I dance around this when I talk about the various approaches to getting AI. One of them is Robin Hanson’s notion that algorithmically creating AI, he claims, is much too hard, and that what we’ll wind up doing is taking this black box of learning systems and becoming so good at emulating how a human responds to every range of possible inputs that the box will in effect be human, simply because it’ll give human responses almost all the time. Once you have that, then these human templates will be downloaded into virtual worlds, where the clock speed can be sped up or slowed down to whatever degree you want, and any kind of wealth that can be generated non-physically will be generated at prodigious speeds.

This solves the question of how the organic humans live, and that is that they’ll all have investments in these huge buildings within which trillions and trillions of artificially reproduced humans are living out their lives. And Robin’s book is called The Age of Em – the age of emulation – and he assumes that because they’ll be based on humans, they’ll want sex, they’ll want love, they’ll want families, they’ll want economic advancement, at least at the beginning, and there’s no reason why it wouldn’t have momentum and continue. That is one of the things that applies to this, and the old saying is, “If it walks like a duck and it quacks like a duck, you might as well treat it like a duck or it’s going to get pretty angry.” And when you have either quadrillions of human-level intelligences, or things that can act intelligent faster and stronger than us, the best thing to do is what I talk about in Category 6 of creating artificial intelligence, and that is to raise them as our children, because we know how to do that. If we raise them as humans, then there is a chance that a large fraction of them will emerge as adult AI entities, perhaps super powerful, perhaps super intelligent, but thinking of themselves as super powerful, super intelligent humans. We’ve done that. The best defense against someone else’s smart offspring, whom they raised badly and who are dangerous, is your offspring, whom you raised well, who are just as smart and determined to prevent the danger to Mom and Dad.

In other words, the solution to Terminator, the solution to Skynet, is not Isaac Asimov’s laws of robotics. I wrote the final book in Isaac’s combined Foundation and Robot series; it’s called Foundation’s Triumph. I was asked to tie together all of his loose ends after he died, and his wife was very happy with how I did it. I immersed myself in Asimov and wrote what I thought he was driving at in the way he was going with the three laws. And the thing about laws embedded in AI is that if they get smart enough, they’ll become lawyers, and then interpret the laws any way they want, which is what happens in his universe. No, the method that we found to prevent abuse by kings and lords and priests and the pyramidal social structures was to break up power. That’s the whole thing that Adam Smith talked about. The whole secret of the American Revolution and the Founders and the Constitution was to break it up. And if you’re concerned about bad AI, have a lot of AIs and hire some good AIs, because that’s what we do with lawyers. We all know lawyers are smart, and there are villainous lawyers out there, so you hire good lawyers.

I’m not saying that that’s going to solve all of our problems with AI, but it does do something, and I have a non-fiction book about this called The Transparent Society: Will Technology Force Us To Choose Between Privacy and Freedom? The point is that the only thing that ever gave us freedom and markets and science and justice and all the other good things, including vast amounts of wealth, was reciprocal accountability. That’s the ability to hold each other accountable, and it’s the only way I think we can get past any of the dangers of AI. And it’s exactly why the most dangerous area for AI right now is not the military, because they like to have off switches. The most dangerous developments in AI are happening on Wall Street. Goldman Sachs is one of a dozen Wall Street firms, each of which is spending more on artificial intelligence research than the top 20 universities combined. And the ethos for their AIs is fundamentally and inherently predatory, parasitical, insatiable, secretive, and completely amoral. So, this is where I fear a takeoff AI, because it’s all being done in the dark, and things that are done in the dark, even if they have good intentions, always go wrong. That’s the secret of Michael Crichton movies and books: whatever tech arrogance he’s warning about was done in secret.

Following up on that theme of breaking up power: in Existence you write about a future in which the 1% types are on the verge of taking full control of the world, in terms of outright power. What is the David Brin view of what is going to happen with wealth and wealth distribution and access to these technologies, and how do you think the future’s going to unfold? Is it like you wrote in that book, or what do you think?

In Existence, it’s the 1% of the 1% of the 1% of the 1% who gather in the Alps and hold a meeting, because it looks like they’re going to win. It looks like they’re going to bring back feudalism and have a feudal power structure shaped like a pyramid, and that they will defeat the diamond-shaped social structure of our Enlightenment experiment. And they’re very worried, because they know that all the past pyramidal social structures dominated by feudalism were incredibly stupid, because stupidity is one of the main outcomes of feudalism. If you look across human history, [feudalism produced] horrible governance, vastly stupid behavior on the part of the ruling classes. And the main outcome of our Renaissance, of our Enlightenment experiment, wasn’t just democracy and freedom. And you have idiots now out there saying that democracy and liberty are incompatible with each other. No, you guys are incompatible with anything decent.

The thing is that this experiment of ours, started by Adam Smith and then the American Founders, was all about breaking up power so that no one person’s delusion can ever govern, but instead you are subject to criticism and reciprocal accountability. And this is what I was talking about as the only way we can escape a bad end with AI. And I talk about this in The Transparent Society. The point is that in Existence these trillionaires are deeply worried, because they know that they’re going to be in charge soon. As it turns out in the book, they may be mistaken. But they also know that if this happens—if feudalism takes charge again—very probably everyone on Earth will die, because of bad government, delusion, stupidity. So they’re holding a meeting and they’re inviting some of the smartest people they think they can trust to give papers at a conference on how feudalism might be done better, how it might be done in a meritocratic and smarter way. And I spend only one chapter—less than that—on this meeting, but it’s my opportunity to talk about how, if we’re doomed to lose our experiment, then at least can we have lords and kings and priests who are better than they’ve been for 6,000 years?

And of course, the problem is that right now, today, the billionaires who got rich through intelligence, sapience, inventiveness, working with engineers, inventing new goods and services and all of that – those billionaires don’t want to have anything to do with a return of feudalism. They’re all members of the political party that’s against feudalism; a few of them are libertarians. The other political party gets its billionaires from gambling, resource extraction, Wall Street, or inheritance – the old-fashioned way. The point is that the smart billionaires today know what I’m talking about, and they want the Renaissance to continue; they want the diamond-shaped social structure to continue. That was a little bit of a rant there about all of this, but where else can you explore some of this stuff except in science fiction?

We’re running out of time here, so I’ll close with one final question: on net, when you boil it all down, what do you think is in store for us? Do you have any optimism? Are you completely pessimistic? What do you think about the future of our species?

I’m known as an optimist, and I’m deeply offended by that. I know that people are rotten and I know that the odds have always been stacked against us. Think of Machiavelli back in the 1500s – he fought like hell for the Renaissance, for the Florentine Republic. And then, when he realized that all hope was lost, he sold his services to the Medicis and the lords, because what else can you do? Periclean Athens lasted one human lifespan. It scared the hell out of everybody in the Mediterranean, because democracy enabled the Athenians to be so creative, so dynamic, so vigorous, just like we in America have spent 250 years being dynamic and vigorous, constantly expanding our horizons of inclusion, constantly engaged in reform and in ending the waste of talent.

The world’s oligarchs are closing in on us now, just as they closed in on Periclean Athens and on the Florentine Republic, because the feudalists do not want this experiment to succeed and bring us to the world of Star Trek. Can we long survive? Can we renew this? Every generation of Americans, and across the West, has faced this crisis, every single generation. Our parents and the Greatest Generation survived the Depression, destroyed Hitler, contained communism, took us to the Moon, and built vast enterprise systems that were vastly more creative, with fantastic growth under FDR’s level of taxes, by the way. They knew this: they knew that the enemy of freedom has always been feudalism, far more than socialism, though socialism sucks too.

We’re in a crisis, and I’m accused of being an optimist because I think we have a good chance. We’re in Phase 8 of the American Civil War, and if you type in “Phase 8 of the American Civil War” you’ll probably find my explanation. Our ancestors dealt with the previous seven phases successfully. Are we made of lesser stuff? We can do this. In fact, I’m not an optimist; I’m forced to be an optimist by all the doom and gloom out there, which is destroying our morale and our ability to be confident that we can pass this test. This demoralization, this spreading of gloom, is how the enemy is trying to destroy us. And people out there need to read Steven Pinker’s book The Better Angels of Our Nature; they need to read Peter Diamandis’s book Abundance. They need to see that there is a huge amount of good news.

Most of the reforms we’ve done in the past worked, and we are mighty beings, and we could do this if we just stop letting ourselves be talked into a gloomy funk. And I want us to get out of this funk for one basic reason—it’s not fun to be the optimist in the room. It’s much more fun to be the glowering cynic, and that’s why most of you listeners out there are addicted to being the glowering cynics. Snap out of it! Put a song in your heart. You’re members of the greatest civilization that’s ever been. We’ve passed all the previous tests, and there’s a whole galaxy of living worlds out there that are waiting for us to get out there and rescue them.

That’s a wonderful, wonderful place to leave it.  It has been a fascinating hour, and I thank you so much.  You’re welcome to come back on the show anytime you like. I’m almost speechless with the ground we covered, so, thank you!

Sure thing, Byron. And all of you out there – enjoy stuff. You can find me at DavidBrin.com, and Byron will give you links to some of the stuff we referred to.  And thank you, Byron.  You’re doing a good job!

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.