Episode 60: A Conversation with Robin Hanson

In this episode, Byron and Robin talk about AI and the "Age of Ems," brain emulations.



Robin Hanson is an author, research associate at the Future of Humanity Institute of Oxford University, the Chief Scientist at Consensus Point, and an associate professor of Economics at George Mason University.


Byron Reese: This is Voices in AI, brought to you by GigaOm, I'm Byron Reese. Today my guest is Robin Hanson. He is an author, and he is also the Chief Scientist over at Consensus Point. He's an associate professor of Economics at George Mason University. He holds a BS in Physics, an MS in Physics, and he's got an MA in conceptual foundations of science from the University of Chicago; he's got a PhD in Social Science from Caltech, and I'm sure there are others as well. Welcome to the show, Robin.

Robin Hanson: It's great to be here.

I'm really fascinated by your books. Let's start there. Tell me about the new book, what is it called?

My latest book is co-authored with Kevin Simler, and it's called "The Elephant in the Brain: Hidden Motives in Everyday Life," and that subtitle is the key. We are just wrong about why we do lots of things. For most everything we do, we have a story. If I were to stop you at any one moment and ask you, "Why are you doing that?," you'll almost always have a story and you'll be pretty confident about it, but what you don't know is that that story is just wrong a lot. Your stories about why you do things are not that accurate.

So is it the case that we do everything, essentially unconsciously, and then the conscious mind follows along behind it and tries to rationalize, "Oh, I did that because of 'blank,'" and then the brain fools us by switching the order of those two things, is that kind of what you're getting at?

That's part of it, yes. Your conscious mind is not the king or president of your mind; it's the secretary. It's the creepy guy who stands behind the king saying, "a judicious choice, sir." Its job isn't to know why you do things or to make decisions; its job is to make up good explanations for them.

And there's some really interesting research that bears that out, with split-brain patients and the like. How do we know that about the brain? Tell me a little bit about that.

Well, we know that in many circumstances when people don't actually know why they do things, they still make up confident explanations. So, we know that you're just the sort of creature who will always have a confident story about why you do things, even when you're wrong. Now that by itself doesn't say that you're wrong, it just says that you might well be wrong. In order to show that you are wrong a lot in specific situations, there's really no substitute for looking at the things you do, and trying to come up with a theory about why you do them. And that's what most of our book is about.

So the first third of the book reviews all the literature we have on why people might plausibly not be aware of their motives; why it might make sense for evolution to create a creature who isn't aware, who wants to make up another story. But we really can't convince you that you are wrong in detail unless we go to specific things. So that's why the last two thirds of the book goes over 10 particular areas of life, and then [for] each area of life it says, "Here is your standard story about why you do things, and here are all these details of people's behavior that just don't make much sense from the usual story's point of view."

And then we say: "Here's another theory that makes a lot more sense in the details, that's a better story about why you do things." And isn't it interesting that you're not aware of that, you're not saying that's why you're doing things, you're doing the other thing.

Give me an example.

So, for example, school. If you were to look at people's graduation ceremony speeches, or their letters of application, or a politician's speech, the usual story about why people go to school is to learn material, of course. But in fact, learning material just doesn't explain our behavior very well. People learn a lot of material that isn't very useful and they still get paid more; most of what people learn they don't remember, and even what they do remember isn't very useful.

People can get free education if they just sit in on classes, but almost nobody ever does that. People are eager for classes to be cancelled even though they might learn less. People get paid a lot more for the last year of high school and college compared to the other years, even though they don't learn more those last years.

So, these and other data points say this theory, that you're going to school to learn the material, just doesn't make sense of a lot of the detail. And we have the alternative story: that you're going to school to show off. Showing off makes a lot more sense of the details: showing off how smart, how conscientious, how conformist you are; maybe you're learning general habits of modern workplace behavior.

These things make sense of the details, and there's of course a question: "Well, if this is really why you're doing things, why don't you know, why don't you admit it?" We have other chapters on medicine, conversation, politics, religion, art, charity, laughter even, body language. In each of these areas, if we were to ask you why you were doing things, you'll have a story and it'll be wrong.

So do cognitive biases play into that in any way, or are you kind of looking at something completely different?

Well, so the cognitive bias literature is usually presented to people as if everybody has all this long list of quirky mistakes we can make. We might be overconfident, we might be angry etc., and it's presented as if there's no particular pattern to all of it, and there's no particular direction. There's just a lot of different ways that your mind could make mistakes, because hey, everything's complicated.

Our book instead is saying, there's this one really big consistent mistake you're making, which is, that you present a good looking view about your motives when your motives are actually darker or lower than you'd like to admit. It's not a random set of strange quirky mistakes, it's one big consistent mistake because of one big consistent reason.

But would that apply to, like, when I think of my favorite cognitive bias, the "rhyme as reason" effect? Things sound more plausible if they rhyme, like... "a stitch in time saves nine," or "if it doesn't fit you must acquit." But I'm not masking anything about me with that, I'm just like... "Ahh okay that makes sense..."

Right, that's the sort of bias that people like to think in terms of: it's just a quirky little mistake you make because of some little quirk of our minds, and there's no particular larger pattern; you just notice that you might make these little mistakes. But if we say you're consistently wrong about why you go to school and why you go to the doctor and why you vote, these are not little quirky things, they're big things.

Let's assume you're right, but let's also assume further that your brain has a really compelling reason for you to think you're virtuous when you're not, and you're going in there and upsetting that apple cart. So wouldn't it be better if we didn't know the truth, that, [for] whatever reason, evolution has granted us this delusion, and you're in there upsetting that.

Right, so we're not saying that everyone should understand this and become aware of it and acknowledge it. That's definitely not what we're saying. We're saying that [for] people who study human behavior, this is something they need to come to terms with, and we're saying that people who study policy and recommend policies need to come to terms with it.

Most people who do education policy presume that the point of school is to learn the material. And they come up with many reforms and show that there are better ways to learn the material than we've been using, and people are just not interested in adopting those things. And the key question is, "Well why don't we adopt those things?" If they would in fact help us learn the material better, and the claim is that we really know at some level that's not why we're going to school, and so we're really not interested.

So, if you're going to be in the policy world, then you definitely need to understand these things; otherwise, you'll just be wrong. And in addition, ordinary people can sometimes benefit from these things. So, if you're in a situation that evolution anticipated well, that is, the evolved intuitions you have are pretty much the right intuitions to be using, because the world you're in isn't that different from what evolution anticipated, then, from the point of view of what evolution wants for you, you're probably better off just going with the ignorance that evolution gave you. But the world has changed in many ways, and sometimes you need to consciously analyze your world [and] wonder whether your behavior should be changing as well. You might be a manager; you might be a salesperson: someone for whom really understanding these things is important.

All right and the name of that one is 'The Elephant in the Brain.'

…'Hidden Motives in Everyday Life.'

And then you have another book coming out in paperback. What's that about?

The book came out June 2016, it's called 'The Age of Em: Work, Love and Life When Robots Rule the Earth.' The paperback version is about to come out now in the United States in about a month, and the subject is: What happens when robots rule the earth? Now, of course you've probably heard lots of science fiction and dramatic stories about this, but I'm analyzing here what would actually happen, not what would make a dramatic story, but what would actually happen, under a particular kind of robot scenario.

So, define a couple of terms: What are robots in that thinking, and what does 'rule the earth', mean?

So, the general category of interesting scenarios is where we eventually have machines that are so capable, that they're cheaper and more effective than humans, on pretty much all jobs. Humans have to retire and machines do the jobs. That's like a very common concerning scenario. And we can break that scenario into sub-scenarios based on what kind of machines there are.

So, today, in the last few decades, we've been building a certain kind of machine, a certain kind of robot, based on writing software or perhaps machine-learning algorithms, and that could be how we eventually make robots as smart as people, but there are a number of other scenarios, and one of them, is called “brain emulation.” In the brain emulation scenario, we just take the software that's currently in human brains, and we copy that software over into artificial hardware, and we get it running there, without understanding it, without necessarily knowing how it works, just by copying it.

And is that like what they're trying to do in Europe with the human brain initiative?

Well, that's relevant and related, but they are not trying to copy a whole brain and make it run; they are more trying to understand a brain by making models of the brain. So in order to copy the brain over and build it, we will need to understand it better than we do now. The three things we're going to need are: lots of cheap, fast, parallel computers; the ability to scan individual brains in enough chemical and spatial detail; and models of how each kind of cell works, so we can put that all together and do a model of the whole brain. None of these technologies is good enough yet, and the various research projects people are working on will contribute to advancing the state of the art, especially on that last element of modelling the brain cells.
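That third requirement, models of how each kind of cell works, comes down to reproducing input-output behavior. As a purely illustrative sketch, far simpler than any model emulation would actually require, here is a toy leaky integrate-and-fire neuron showing the "signals in, state change, signals out" loop Hanson describes later in the conversation:

```python
# Toy leaky integrate-and-fire neuron: signals in -> state change -> signals out.
# A deliberately minimal sketch, not a biologically faithful cell model.
def simulate_lif(inputs, tau=10.0, threshold=1.0, dt=1.0):
    """Return the time steps at which the neuron fires."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v += dt * (-v / tau + i_in)   # leak toward rest, integrate input
        if v >= threshold:            # cross threshold: spike and reset
            spikes.append(t)
            v = 0.0
    return spikes

# A steady input current makes the neuron spike at regular intervals.
print(simulate_lif([0.15] * 50))
```

Real emulation would need validated models of every cell type at this input-output level, which is exactly the part of the state of the art Hanson says is not yet good enough.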

And do you cover C. elegans, the nematode worm, and the open worm project?

So my book is not about how we might achieve brain emulation, but about what would happen with the result. All these people talk about brain emulation, and people have talked about the technology and the timing, the philosophy of identity and the philosophy of consciousness, and I've always thought they've neglected [to ask], "Yeah, but what would actually happen? How does the world actually change?" And so that's what my book is about.

I got you. That's fascinating, and to be clear, do you believe that people are machines?

Well it depends what you mean by a machine, but we are physical, we are definitely physical creatures. We follow physical processes and physical processes can be—

Fair enough. So everything inside of a human can be explained fundamentally with physics and chemistry and...

Well not just can be explained, it is physics, I mean, you are built out of physical devices, physical processes...

But then I'm curious in what sense we're not robots and therefore robots already rule the world?

Well it's just more a matter of artificially constructed. So, as you know, we have, in the past, often had things that were done naturally, and then we found artificial substitutes.

So, what happened, I mean don't give away the 'whodunit' in the book or anything...

No, I'm happy to tell you about all the details.

If robots ruled the world...

So the biological humans lose their ability to work for money, because the robots are better, and then the humans retire. And now they start out owning all the capital in the world, they own the real estate, and the stock and things like that, and so, this new economy, first of all starts to grow really quickly, much [more] quickly. So our economy today doubles roughly every 15 years. And this new robot-based economy might double every month, and that means, human investments in this might double every month, so, people who had a share in this could get really rich really fast, so they could have a rich retirement.
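The gap between those two doubling times compounds dramatically. A quick sketch of the arithmetic, using the rough 15-year and one-month figures Hanson gives above:

```python
# Compound growth implied by a fixed doubling time: size(t) = 2 ** (t / T).
def growth_factor(years: float, doubling_time_years: float) -> float:
    """Total growth multiple after `years` at the given doubling time."""
    return 2 ** (years / doubling_time_years)

# Today's economy, doubling roughly every 15 years: 4x after 30 years.
print(growth_factor(30, 15))

# An em economy doubling roughly every month: 2**12, about 4096x, after a
# single year, which is why investors "could get really rich really fast."
print(growth_factor(1, 1 / 12))
```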

But most of the work is being done by the robots, and they're running really fast and they're really efficient, and they're really effective, and so, even if humans are on the side, sort of setting the overall priorities by owning things or controlling things, they really can’t be setting the detailed priorities, they can't be managing the details. So in the very least in that sense, the robots are ruling the earth, they are running things, they are making all the detailed choices, because they are on the spot and they understand what's going on, and the humans on the side really don't.

Well, you know, it's kind of a really interesting question, because I'm back to the "Are people machines?" question. When we poll our readership, which is technical professionals, [people] generally in technical roles or in technical companies, and we ask them questions that try to get at that issue, which we recently did: "Is consciousness uniquely human? Can consciousness be reproduced in machines? Are there things that happen in your body that cannot be explained with physics in an equation?", an overwhelming majority of people reject the notion that humans are machines. Yet, when I think of the guests on this show, I would say the overwhelming majority embrace that idea.

I mean a lot depends on what you mean by a machine, the associations of "machine." So to the extent you're a machine, you're a vastly more complicated machine than humans have ever built.

Well I guess the setup is, we have these brains [and] we don't understand how they work, we don't understand the physics of a neuron. A neuron could be as complicated as a super computer, [but] we don't know how a thought's encoded and all that. Then, on top of that we have these minds, and a mind is all this stuff your brain can do that seems, and I stress seems, something that an organ shouldn't be able to do, like your liver doesn't have a sense of humor presumably, but, you somehow do.

And then we have consciousness, this thing we all agree on what it is: that is, your experience of the world. And yet we can't even phrase the question scientifically about how is it that matter has subjective experience and we don't know what the answer would look like. So then you look at those three things and you say, “But, we believe completely that it's theoretically possible to build mechanical people.” Where in that do you see the breakdown?

Well, like I said, there's this concept of brain emulation and that concept is the idea that you have a model that has the same input-output relations as the original. That is we have cells in a brain, they take signals in, they change state, they send signals out, and then we're going to make a model of that which has the same sort of input-output relations, that takes signals in, changes states, and signals out.

And the key assumption here is that we're eventually able to make a model that has the same overall signal behavior, signals in and signals going out. Now you could say that's never going to be possible, that'll just be completely impossible. But the whole starting point of my analysis is to say, “What if it is possible, what happens next?” Like I said, for decades, when the subject comes up, people really get obsessed with talking about the philosophy questions of: “Is it even possible?” “If you made one, is it me?” “If you made one, is it conscious?”

And I say these are all fascinating questions, but I'm going to set them aside, because I always thought there was this neglected question: okay, but what would actually happen? People get so stuck on the philosophy. It's not like nobody should ever talk about philosophy, but I think somebody should sometimes talk about something else.

I agree, but I would assume you have to start by saying, “Are they going to be as smart as [we are] or 1,000 times smarter, and is it going to happen gradually, or is it going to be incredibly expensive to make a single one?”

So those are things I can figure out, because I don't need to answer philosophy questions to figure those things out.

Right but that is kind of a technique question: how would we do it; how expensive would it be?

Absolutely, so clearly it would be at one point too expensive, and so it would just be a thing that sometimes people did because it was interesting and cute and they were researching it, but eventually it would become cheap enough that you eventually would do [it] a lot. So “The Age of Em” doesn't really start until it's cheap enough that you can really just replace people wholesale with them. Before then, it's a possibility, you're near the age of Em, because they're getting cheaper, but it's not the age of Em yet, because they're not cheap. Once they are cheap, then there's many questions we can ask.

So one thing is, apparently it'll probably be easier and cheaper early on to do destructive scans of brains, and so that means the first people who become Ems will have to make a one-way choice. They'll have to agree to have their brains destructively scanned in order to create an Em, and then there'll be this Em of them afterwards and then they won't exist anymore, because their brains will have been destructively scanned, so that’s the early transition which of course is problematic.

We can also say that early on, [in] the very beginning, you would be scanning adult people who are at the peak of their careers: lawyers, software engineers, etc., because those would be the people you could sell, that is, rent out as Ems. But then quickly the world will change to wanting younger humans to scan, because they will be more flexible and better able to learn the new and different job skills required of this new world. There may well be a period of overlap there where the Em world will be demanding children to scan, but the scans will be destructive, and you can imagine a lot of resistance to that.

What's the Em in that?

So Em is short for emulation. It's just a short name, because I have to use [it] at least once a sentence throughout the entire book, and I didn't want a long awkward phrase. So if it's possible to make brain emulations, then they exist, and when they're cheap enough there'll be a lot of them, and I call each one an Em. They would in fact be smarter and more capable than humans, for two key reasons. One is that you can run them at different speeds: anytime you have a computer program, you can run it on a faster computer and it runs faster. I estimate they would typically run about 1,000 times human speed, so they could think 1,000 circles around you: for every time you could think one thing, they could think 1,000 things.

In addition, there's a huge selection effect where the Em world is dominated by the few hundred most productive humans. So, we take all the 7 billion humans, we limit ourselves to the ones who are willing to become scanned and become Ems, and then among those, they compete to see who is in the most demand. And the competition is fierce because whoever is in the most demand for any particular job, be it a lawyer or a software engineer, the Em economy can make millions or billions of copies of that one, to do all the jobs.

It's like today in music, we have fierce music competition because any one musician can make enough copies of their CD or music file, that everybody could be listening to their music if they wanted to. The ability to copy the music creates fierce competition in a big 'winner takes all' effect, where the best few musicians get most of the attention.

Now in the labor market, the same thing can happen. The best few lawyers, and the best few software engineers can dominate those labor markets, so that means most emulations are actually copies of the few hundred most productive humans, so that makes them elite compared to the typical human. They're like billionaires, heads of state, Olympic gold medalists, Nobel prize winners, the typical emulation is that good compared to the typical human.

So, the show's called Voices in AI, so let's talk about artificial intelligence. You have a few contrarian views about AI. Can you throw one of those out there and let's take that ball and run with it?

Sure, now just to be clear, Ems are a kind of AI in the sense that they are an artificial machine-based intelligence; they're just made through a different process, and that's the process of copying the software that's in our brains. Usually, though, we think about AI as more about writing your own software. So, for the last 70 years, the human economy has been making a lot of software, and most of you use a lot of software; it's on your phones, your computers, etc. And we've been making that software [gradually] better and more effective, and so another route to eventually achieving AI is just to continue doing that, going on for maybe even centuries to produce more and better AI.

There are other scenarios people have in mind, where eventually we get some sort of revolution where some new approach to software takes over and makes a huge difference. So my overall opinion about AI is that the record so far has been relatively steady, that is, over the last 70 years we've been making slow and steady progress, but at the rate of progress we've seen over those 70 years, it's going to take centuries.

Yes, you're a contrarian on a few things, talk about that.

So, we've seen this over and over again, if you look at the history of these things, we see every decade or two another big burst of interest and concern about AI and automation and a lot of people saying, “We're now near the final point.” And the big question is, is this time different? In all the past times, there were people who said, “This time is different,” too. And usually what happens is there's some cool demos or some new product, there's some new systems that did things that no system previously could do.

And of course, these new systems are very impressive. The key question is: are they showing that we're almost about to turn the corner—we're going to be able to have machines that do almost everything—or are we still a really long way away, like we have been every time before?

Well what do you make of certain high profile individuals, who are in the tech industry, I mean you know the list, it's Elon Musk, Stephen Hawking, Bill Gates, Wozniak, there's a lot of people in the tech field, who, you know, suggest it’s 5-10 years away.

I don't have any good reason to think we don't have enough hardware now, but I also see that in a lot of areas of computer science and software, rates of software progress have tended to track rates of hardware progress. The best simple explanation for that is we've usually needed more hardware in order to figure out how to do software better.

That is, when we have more hardware, we can try out more experiments, we can try out bigger experiments, we can try new algorithms that we couldn't try before. And that ability to try new things is an important element in our learning which kinds of software work better. Eventually when we know how to do the best kind of software, it may well be that current machines are good enough to do that, but that doesn't mean we won't need a lot more hardware than we have now to figure out what the right way to organize the software is.

Do you have an intuitive sense that, if we "knew how to do it," we have adequate hardware to emulate the capability of the human brain?


Now, you know, when the science of artificial intelligence was first created, there was hope that in a summer a few people could "solve" it. And the hope was based on the idea that, like in physics, where with magnetism and electricity a few simple rules were discovered that could explain everything, Newton's three Laws and all that.

Yeah, that is a nice clean hope.

Right, and a reasonable one. And then Minsky later said, no, it's really a hundred different things your brain knows how to do; you're just a bunch of spaghetti code with these hundred hacks, and it's going to be a long slog. And then you get books that come out, called, like, "The Master Algorithm," that posit that there's a simple, well, maybe not simple, but a single key to it. What's your take on that?

So the way I'd have you think about this is as a distribution of the lumpiness of innovation. So, in all areas there's a distribution of lumpiness, that is, there's a few really big lumps, and then lots and lots of little lumps. And probably there's a power law that describes this. In fact, this is true of citations: there is a power law that describes the distribution of citations. So, when people publish academic papers, the ones that are bigger deals attract more citations, and there's a small number of papers that have a lot of citations, and then there's a lot of papers that have hardly any.

And so, the key question is that lumpiness, the shape of that distribution. So, in some areas that might be that most progress is in a few big lumps. And in other areas most progress is in lots of little lumps, and that may well vary from topic to topic. Although it turns out that for citations it doesn’t vary at all, that is physics and biology and medicine and they all seem to have pretty much the same lumpiness of citations, even if perhaps they don't have quite the same lumpiness of progress.

We can look in the past and see the lumpiness of progress in something like computer science or AI, and we can see that that lumpiness has actually been pretty consistent. I mean, the few biggest lumps aren't consistent; the few biggest lumps come along lumpily, right? There's a while where there hasn't been a lump, and then suddenly another big lump, and then another while with no big lumps; they don't come steadily, they come randomly. But if we look at the rest of the distribution, we see a relatively consistent distribution, with lots of small lumps and a few big lumps.

And what we know about innovation in general is that, averaging over all the different fields we've ever studied, most innovation is in the small lumps. The few big lumps are dramatic and they can make for Nobel prizes and great stories, but most progress is in all those little lumps. And that certainly seems to be true in machine learning and AI and computer science as well.
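Whether "most progress is in the small lumps" depends on the shape of that power law. A small sketch with illustrative exponents (not fitted to any real citation or innovation data) shows how the share held by the biggest lumps changes with the tail:

```python
# Rank-size sketch of "lumpy" innovation: lump number k has size k ** -beta.
# The exponents below are illustrative only, not fitted values.
def top_share(n_lumps: int, n_top: int, beta: float) -> float:
    """Fraction of total progress held by the n_top biggest lumps."""
    sizes = [k ** -beta for k in range(1, n_lumps + 1)]
    return sum(sizes[:n_top]) / sum(sizes)

# Shallow tail (beta = 0.5): the 10 biggest lumps hold only a few percent
# of the total, so most progress sits in the many small lumps.
print(f"{top_share(10_000, 10, beta=0.5):.1%}")

# Steep tail (beta = 1.5): the 10 biggest lumps dominate the total instead.
print(f"{top_share(10_000, 10, beta=1.5):.1%}")
```

Hanson's claim is that, averaged across fields, innovation looks more like the first case than the second.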

And I don't actually see any particular evidence that the lumpiness of the distribution has changed recently or that I should expect it to be different in the future. Every once in a while there's a lump, and we can say, “Look there's another big lump,” but that's different than saying that the lumpiness of the distribution has changed.

So recently people have pointed to AlphaGo as their best example of a big lump, and say, "Wow, in one system you made this big dramatic leap past previous systems." I'm happy to call that a lump, but I might say, "Let's just look at how often lumps like that have shown up over the last 70 years," and I'd say, "It seems like the rate of lumps hasn't changed."

But I guess the thesis is that there's a watershed lump, a tipping point, the lump beyond which everything's... well, I guess there doesn't have to be one, right? It's kind of like the freezing point of water. Water doesn't change a lot: it goes down a degree every hour, it didn't really change that much between 35 and 34, or 34 and 33... and then all of a sudden it happens. Or like with solar panels, when solar energy just gets a micro-cent cheaper than generated [electricity], the whole world changes.

The really huge enormous lump.

So you don't think that the next 20 years or 30 years or 40 years is going to bring some... You think it's going to be as lumpy—the effect of automation on employment and all of the other things—going forward as it has been, and unemployment will remain in that [same general range]?

So we understand that there are sometimes threshold effects. That was true of my Age of Em scenario: when Ems are expensive they don't make much difference, and when they get past the threshold of being cheap enough, they make an enormous difference. So we can certainly understand some threshold effects where we see things happen, and honestly that makes sense of a lot of job automation.

So, you can think of automating truck driving or automating a travel agent as something where the technology is expensive, and it gets cheaper and cheaper, and mostly that hardly matters because it's still too expensive, and then at some point it gets cheap enough to replace that kind of job. But that sort of replacement happens across the job, like truck driving or travel agents; it doesn't happen across the whole economy.

If we think about these different thresholds as being distributed, it looks like they're distributed roughly along a normal distribution. That is, we've seen many, many decades, if not centuries, of automation slowly getting better and then replacing particular jobs at the point where the automation passes the threshold for that job, but the position of those thresholds varies enormously across jobs.

Centuries ago, the jobs that were replaced were the ones that were really easy to replace, and there were still a lot of jobs that were really hard to replace, including many current ones. So we don't see any particular trend in the number of jobs that are getting displaced; it's been a pretty steady rate for a while. Any one job has a threshold effect where all the jobs of that type get replaced all at once, but in the whole economy we haven't seen huge sections of the economy replaced.

Well is it fair to say that we're seeing a dramatic change in the number of businesses that are using artificial intelligence to try to solve business problems or not?

Yeah. I don't see anything recently changing relative to the long-term trends. Now that doesn't mean something couldn't change, I mean there's no guarantee that past trends will continue. We should be ready for the possibility of change and think of how to prepare for that. But that's different from saying we're seeing a dramatic change.

We're seeing another burst of interest and attention like the ones we saw before. So I was caught up in a previous burst you see, this is personal for me. I was a graduate student in 1983, and I read about cool things happening in AI, and so I left my graduate program to go out to Silicon Valley to get involved in AI, because I bought the hype at the time.

People were saying, “We're almost there, it's just about to revolutionize everything,” and I bought into that hype. But of course it wasn't, though there was, back then, a burst of attention and interest in AI: it was all over the news, lots of articles, lots of companies getting involved. There was also a big burst of interest and attention in the topic in the early '60s. Every few decades, we've seen this huge burst of interest and concern and attention, going way, way back.

And so you wouldn't be surprised to see another AI winter where everybody's like, “Ah, that was just another false start.”

Not only wouldn’t I be surprised, that would be my most likely prediction.

And based on the rate of progress you think we could be centuries away from…

Right. Along the path of making better software, it looks like we're probably centuries away. One of the ways I've tried to estimate this is by asking AI researchers in various fields, “How far have we come in your field in the last 20 years, as a percentage of the distance we have to go to human-level abilities?” The typical answer I get is 5-10% of the distance, with no noticeable acceleration. And at that rate you're talking two to four centuries before the typical field reaches human-level ability.
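[Editor's note: as a rough illustration of the extrapolation Hanson describes (our sketch, not his), a constant rate of 5-10% of the distance per 20 years implies roughly two to four centuries in total:]

```python
def years_to_human_level(progress, elapsed=20):
    """Linear extrapolation: if a field has covered `progress` (a fraction
    of the distance to human-level ability) in `elapsed` years at a
    constant rate, return the years remaining to cover the rest."""
    rate = progress / elapsed          # fraction of the distance per year
    return (1.0 - progress) / rate     # remaining fraction / rate

# 10% of the distance in 20 years -> 180 more years (~2 centuries total)
print(years_to_human_level(0.10))
# 5% of the distance in 20 years -> 380 more years (~4 centuries total)
print(years_to_human_level(0.05))
```

This assumes purely linear progress with no acceleration, which is exactly the "no noticeable acceleration" premise in the answers Hanson reports.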

And you're saying... Oh for the typical field, so you're saying some fields we already have human level ability, like a calculator, right?

Exactly, and some fields are much farther away; some fields are less than 1% of the way there, or even smaller.

So if you were advising a business that has been hearing all of this stuff about AI, and they're saying, “If you don't act now, your competitors will adopt this. It's the ability to make better decisions.” You would encourage prudence, but what would be your piece of advice?

Honestly, if you're making major business decisions on the basis of these hyped press cycles, you're already in a losing position, I'm sorry. As almost any businessperson anywhere should be doing, you should be noticing various opportunities that might be showing up and looking to see whether you can gain an edge using them, but holding them to the standard: “Can you actually see something that works?” And so far we actually haven't had very much, so you have to admit that at the moment there are relatively few AI applications that have produced substantial revenue from customers.

Well I guess a lot of that would boil down to your definition of AI. To be honest, you would have to say—

Well let's say, techniques developed in the last 10 years have not produced substantial revenue, obviously there's techniques that go way back and then you can argue whether that's just software or just general statistics or whatever.

Well, you could argue that all AI, even in theory—I mean even an AGI—is, I would say, according to you, software and general statistics, right?

Well, that's at least one way to think about it. I mean, again, that becomes the key interesting question about the future: when we eventually have robots that are as smart as humans, what kind of software will they be made out of? A conservative scenario is to say, well, it's going to be made out of the same kind of software we've basically seen for the last 70 years, just more and better of it.

My Em scenario says: “No, it'll be based on brain emulations,” which is something we're not at all using in the economy now, but eventually it'll be cheap enough to use them. And people have other scenarios they imagine, of new kinds of software that might appear.

So, flashing back to your 1983 situation, if someone who was in a graduate program right now came to you and said, "I'm thinking of leaving my program to study machine learning," what would you say?

Well I have been asked that and I have said, “Buy low, sell high.” Look, these fashions come in waves and just like in the stock market, when your cab driver starts to recommend stocks, it's time to get out of the market, not in. When something's really reached a huge burst and crescendo of attention and press, that probably can't be sustained, that's when it's probably going to decline. It's the wrong time to get in. You want to get in on something before everybody else knows about it and finds it to be the next new fashion. Waiting until everybody’s heard about it is exactly when you're near the end of the fashion cycle.

So it sounds like you're saying, "Don't do it, it's a bubble that's about to burst."

These things have consistently gone in waves for a long time, so you've kind of got to expect the same will happen here. If you were getting in early on this wave, then I might say great, good for you. If you're at the end of the wave, then that's like the stock market: we've had a record rise in stocks, and this is exactly the wrong time to suddenly jump into stocks with both feet.

So someone could say, "Well, artificial intelligence goes through, as you say, different phases, and sometimes we call it big data, and sometimes we call it other things, but one consistent attribute of the market for programmers is the perennial shortage, that the number of things that we could do with computers…”

Sure, exactly. The costs of jumping on a fad here are much lower than the cost of jumping on some other fad, of course. If you go into machine learning and the fad disappears, you'll still collect a bunch of skills that will be technically useful, and you'll still be in good demand. You'll know math, you'll know computers, you'll have learned a bunch of statistics, those are all fine skills and you'll do fine in the world.

What were you studying in '83, when you jumped ship?

Well, I started grad school in philosophy of science, after having done an undergraduate degree in physics, and then I switched back to physics. So I was completing a master's in physics, and I could have gone on for a PhD in physics, but I was—

But AI in '83 has served you well to this day, right?

Well, again, I went into software, and I learned a lot in software, and I could do other things in software; I might have kept my career in software, and that wouldn't have been a terrible thing. So yes, even something that makes you jump onto the software bandwagon under false pretenses will serve you fine, because there's a huge demand in the world for software, and math-oriented software, and you'll learn math and statistics. Those are, again, all wonderful skills that you can use in a lot of different ways. Your life will go fine.

So what do you make of things like GDPR, the right to know why a computer made a decision, or all the talk about developing ethics for AI?

Yeah, well, I think, again, people want to jump on the AI bandwagon to talk about whatever else it is they're concerned about. Which is fine, but you have to be realistic about these things.

So, at the moment we have this thing called a credit score. You have a credit score, and it matters a lot in your life, and there is no transparency with your credit score. They don't show you the formula that produces your credit score, even though it matters enormously in your life. There's no explanation of your credit score. There's no tracing of how your recent purchase changed your credit score, and yet credit scores are allowed to exist and continue in our world.

There are a lot of systems in our world that produce numbers and estimates about people, which those people have no transparency into and no ability to correct, and yet these things persist. Given all that, I don't know why it makes sense to hold AI to some new higher standard that we haven't held previous systems to.

And what about the debates around the use of these technologies in war, specifically to make kill decisions?

Again, I think this is indirectly a way to complain about war, and I'm happy to complain about war, but the key question is: are these new technologies substantially different from the old ones? In war, people get hurt, and technology helps you to hurt people. You can say that's a shame, but in a competition between different militaries, you kind of want your military to have the best technology, and part of technology in war is some degree of indirection and delegation in making choices; there's just no way around that. The people creating the war machines will try to create the versions of them that they think will achieve their purposes well, and of course there's always going to be a lot of collateral damage when you're using war machines in war. So I don't see how these new applications of technology to war are substantially different from the old ones.

Would you consider a self-driving car to be AI?

Well, of course, this is the old dispute: people have long noted, and correctly so, that things that were considered AI when we couldn't do them, once we finally could, just became software and technology.

Right, I guess I'm trying to rise to your challenge of naming something that's going to make a bunch of money based on new techniques.

Right, well, self-driving cars are one of the best candidates in the near future. But even if, within the next 20 years, we managed to replace all truck drivers with automated truck drivers, that would still be within the range of the kind of job displacement we've seen over the last century; it wouldn't be a deviation from the previous trends, just part of the usual ones.

Well, I would say that's all very provocative on the centuries timescale, right?

But I still think AI is coming eventually, and we should think about what will happen then.

Well, the Ems might happen sooner. My projection for the Em technology is roughly a century or so, or sooner, and that's why I think Ems are an especially interesting scenario: I think it's likely that we will have Ems before we have ordinary software that's as good as humans.

But it's still not right around the corner. We have time to think about it, and we should. Think about flood insurance: the time to buy flood insurance is before it starts raining really heavily. And because we're not just on the precipice of AI, this is a good time to think about how to insure ourselves against the risks that AI might pose.

Fair enough. So tell us how people can follow you: the names of the two books you currently have, the new one that's coming out and the older one that's available in paperback, and how people can keep up with your view of the world.

Well, I'm Robin Hanson. I have a Twitter account, and I have a blog called “Overcoming Bias,” where I tweet the main posts, and I have a website, www.hanson.gmu.edu, with lots of my essays from many years. And I have these two books that came out recently: the most recent one is The Elephant in the Brain, about hidden motives, which has a website, www.elephantinthebrain.com, and then there's the older book that's just about to come out in paperback, called The Age of Em, with a website, www.ageofem.com, for more detail.

Thank you, it has been an incredibly fascinating near-hour, and you're welcome to come back any time you like.

I'd love to.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.