Episode 2: A Conversation with Oren Etzioni

In this episode, Byron and Oren talk about AGI, Aristo, the future of work, conscious machines, and Alexa.


Guest

Oren Etzioni is a professor of Computer Science and CEO of the Allen Institute for Artificial Intelligence. He is also a venture partner at the Madrona Venture Group.

Transcript

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today, our guest is Oren Etzioni. He’s a professor of computer science who founded and ran the University of Washington’s Turing Center. And since 2013, he’s been the CEO of the Allen Institute for Artificial Intelligence. The Institute investigates problems in data mining, natural language processing, and the semantic web. And if all of that weren’t enough to keep a person busy, he’s also a venture partner at the Madrona Venture Group. Business Insider called him, quote: “The most successful entrepreneur you’ve never heard of.”

Welcome to the show, Oren.          

Oren Etzioni: Thank you, and thanks for the kind introduction. I think the key emphasis there would be, “you’ve never heard of.”

Well, I’ve heard of you, and I’ve followed your work and the Allen Institute’s as well. And let’s start, if that’s okay, right there. You’re doing some fascinating things. So if you would just start off by telling us a bit about the Allen Institute, and then I would love to go through the four projects that you feature prominently on the website. And just talk about each one; they’re all really interesting.

Well, thanks. I’d love to. The Allen Institute for AI is really Paul Allen’s brainchild. He’s had a passion for AI for decades, and he’s founded a series of institutes—scientific institutes—in Seattle, which were modeled after the Allen Institute for Brain Science, which has been very successful running since 2003. We were founded—got started—in 2013. We were launched as a nonprofit on January 1, 2014, and it’s a great honor to serve as CEO. Our mission is AI for the common good, and as you mentioned, we have four projects that I’m really excited about.

Our first project is the Aristo project, and that’s about building a computer program that’s able to answer science questions of the sort that we would ask a fourth grader, and now we’re also working with eighth-grade science. And people sometimes ask me, “Well, gosh, why do you want to do that? Are you trying to put 10-year-olds out of work?” And the answer is, of course not.

We really want to use that test—science test questions—as a benchmark for how well are we doing in intelligence, right? We see tremendous success in computer programs like AlphaGo, beating the world champion in Go. And we say, “Well, how does that translate to language—and particularly to understanding language—and understanding diagrams, understanding science?”

And one way to answer that question is to, kind of, level the playing field with, “Let’s ask machines and people the same questions.” And so we started with these science tests, and we can see that, in fact, people do much better. It turns out, paradoxically, that things that are relatively easy for people are really quite hard for machines, and things that are hard for people—like playing Go at world championship level—those are actually relatively easy for the machine.

Hold on there a minute: I want to take a moment and really dissect this. You know, I have a standard question that—any time there’s a candidate chatbot that can make a go at the Turing test—I always start with, and none of them has ever answered it correctly.

It’s a question a four-year-old could answer, which is, “Which is bigger, a nickel or the sun?” So why is that a hard problem? Would what you’re doing be able to answer that? And why would you start with a fourth grader instead of a four-year-old, like really go back to the most basic, basic questions? So the first part of that is: Would what you’re doing be able to answer the question?

Certainly our goal is to give it the background knowledge and understanding ability to be able to answer those types of questions, which combine basic knowledge, basic reasoning, and enough understanding of language to know that, when you say “a nickel,” you’re not referring to the metal, but to a particular coin, with a particular size, and so on.

The reason that’s so hard for the machine is that it’s part of what’s called ‘common sense’ knowledge, right? Of course, the machine, if you programmed it, could answer that particular question—but that’s a stand-in for literally billions of other questions that you could ask about relative sizes, about animal behavior, about the properties of paper versus feathers versus furniture.

There’s really a seemingly infinite—or certainly very, very large—number of basic questions that people, certainly eight-year-olds or even four-year-olds, can answer, but that machines struggle with. And they struggle with it because, what’s their basis for answering the questions? How would they acquire all that knowledge?

Now, to say, “Well, gosh, why don’t we build a four-year-old, or maybe even a one-year-old?” I’ve actually thought about that. So at the university, we investigated for a summer, trying to follow the developmental ladder, saying: “Let’s start with a six-month-old, and a one-year-old, etc., etc.”

And my interest, in particular, is in language. So I said, “Well, gosh, surely we can build something that can say ‘dada’ or ‘mama’, right?” And then work our way from there. What we found is that, even a very young child, their ability to process language and understand the world around them is so involved with their body—with their gaze, with their understanding of people’s facial expressions—that the net effect was that we could not build a one-year-old.

So, in a funny way, once you’re getting to the level of a fourth grader, who’s reading and answering multiple choice science questions, it gets easier and it gets more focused on language and semantics, and less on having a body, being able to crawl—which, of course, are challenging robotics problems.

So, we chose to start higher up in the ladder, and it was kind of a Goldilocks thing, right? It was more language-focused and, in a funny way, easier than doing a one-year-old, or a four-year-old. And—at the same time—not as hard as, say, college-level biology questions or AP questions, which involve very complicated language and reasoning.

So it’s your thinking that by talking about school science examinations, in particular… that you have a really, really narrow vocabulary that you have to master, a really narrow set of objects whose properties you have to understand. Is that the idea? Like, AI does well at games because they’re constrained worlds with fixed rules. Are you trying to build an analog to that?

It is an analog, right? In the sense that AI has done well with narrow tasks and, you know, limited domains. At the same time, ‘narrow’ is probably not quite the right word, really. So, from my point of view—and this is something that we’ve learned—if you look at these questions, there’s tremendous variety: not only variety in the ways of saying things, but also variety because these tests often require you to take something that you have an understanding of—like gravity or photosynthesis—and then apply it to a particular situation.

“What happens if we take a plant and move it nearer to the window?” So that combination of basic scientific knowledge with an application to a real-world situation means that it’s really quite varied. And it’s really a much harder AI problem to answer fourth-grade science questions than it is to solve Go.

I completely get that, and so I’m going to ask you a question. And it’s going to sound like I’m changing the topic, but it is germane. Do you believe that we’re on a path to building an AGI—a general intelligence? Is this, in fact, the kind of thing where you’re going to learn things doing this—where all we will need to do, maybe, is scale them up more and more, faster and faster, better and better, and you’ll have an AGI? Is this on that trajectory, or is an AGI something completely unrelated to what you’re trying to do here?

That’s a very, very key question. And I would say that we are not on a path to building an AGI in the sense that, if you build Aristo, and then you scale it to twelfth grade, and more complex vocabulary, and more complex reasoning… And “Hey, if we just keep scaling this further, we’ll end up with artificial general intelligence, with an AGI.” I don’t think that’s the case.

I think there are many other problems that we have to solve, and this is a part of a very complex picture. And if it’s a path, it’s a very meandering one. But really, the point is that the word ‘research’, which is obviously what we’re doing here, has the word ‘search’ in it. And that means that we’re iterating, we’re going here, we’re going there, we’re looking, you know.

“Oh, where did I put my keys?” Right? How many times do you retrace your steps and open that drawer, and say, “Oh, but I forgot to look under the socks,” or “I forgot to look under the bed.” It’s this very complex, uncertain process; it’s quite the opposite of, “Oh, I’m going down the path, the goal is clear, and I just have to go uphill for five miles, and I’ll get there.”

I’ve got a book on AI coming out towards the end of this year, and in it, I talk about the Turing test. And I talk about, like, the hardest question I can think of to ask a computer so that I could detect if it’s a computer or a person. And here’s a variant of what I came up with, which is:

“Doctor Smith is eating at his favorite restaurant, which he eats at frequently. He gets a call, an emergency call, and he runs out without paying his bill. Are the owners likely to prosecute?” So, if you think about that… Wow, you’ve got to know he’s a doctor, the call he got is probably a medical emergency, you have to infer that he eats there a lot, that they know who he is, they might even know he’s a doctor. Are they going to prosecute? So, it’s a gazillion kinds of social things that you have to know in order to answer that question.

Now, is that also on the same trajectory as solving twelfth grade science problems? Or is that question that I posed, would that require an AGI to answer?

Well, one of the things that we’ve learned is that, whenever you define a task—say answering story types of questions that involve social nuance, and maybe would involve ethical and practical considerations—that is on the trajectory of our research. You can imagine Aristo, over time, being challenged by these more nuanced questions.

But, again, we’ve gotten so good at identifying those tasks, building training sets, building models and then answering those questions, and that program might get good at answering those questions but still have a hard time crossing the street. Still have a hard time reading a poem or telling a joke.

So, the key to AGI is the “G”; the generality is surprisingly elusive. And that’s the amazing thing, because that four-year-old that we were talking about has generality in spades, even though she’s not necessarily a great chess player or a great Go player. So that’s what we learned.

As our AI technology evolves, we keep learning about what is the most elusive aspect of AI. At first, if you read some of the stuff that was written in the ‘60s and the ‘70s, people were very skeptical that a program could ever play chess, because that was really seen as the mark of intelligence: very intelligent people are very good chess players.

And then, that became solved, and people talked about learning. They said, “Well, gosh, but programs can’t learn.” And as we’ve gotten better, at least at certain kinds of learning, now the emphasis is on generality, right? How do we build a general program, given that all of our successes, whether it’s poker or chess or certain kinds of question answering, have been on very narrow tasks?

So, one sentence I read about Aristo says, “The focus of the project is explained by the guiding philosophy that artificial intelligence is about having a mental model for how things operate, and refining that mental model based on new knowledge.” Can you break that down for us? What do you mean?

Well, I think, again, lots of things. But I think a key thing not to forget—and it goes back to your favorite question about a nickel and the sun—is that so much of what we do makes use of background knowledge, just extensive knowledge of facts, of words, of all kinds of social nuances, etc., etc.

And the hottest thing going is deep learning methods. Deep learning methods are responsible for the success in Go, but the thing to remember is that often, at least by any classical definition, those programs are very knowledge-poor. If you could talk to them and ask them, “What do you know?”, you’d find out that—while they may have stored a lot of implicit information, say, about the game of Go—they don’t know a whole heck of a lot.

I like to say—and that, of course, touches on the topic of consciousness, which I understand is also covered in your book—I asked AlphaGo, “Hey, did you know you won?” AlphaGo can’t answer that question. And it’s not because it doesn’t understand natural language. It’s not conscious.

Kasparov said that about Deep Blue. He said, “Well, at least it can’t gloat. At least it doesn’t know that it beat me.” To that point, Claude Shannon wrote about computers playing chess back in the ‘50s. But it was a lot of work. It was an enormous amount of work. It took the best minds a long time to build something that could beat Kasparov. Do you think that that is the kind of hump, that something like that is generalizable to a lot of other things?

Or is it kind of like, “Well, we know how to play chess, computers can play chess, but really…” Am I hearing you correctly that that is not a step towards anything general? That’s a whole different kind of thing, and therefore Aristo is, kind of, doing something very different than AlphaGo or chess, or Jeopardy?

I do think that we can generalize from that experience. But I think that generalization isn’t always the one that people make. So what we can generalize is that, when we have a very clear—what’s called an ‘objective function’ or ‘performance criteria’—basically it’s very clear who won and who lost.

And when we have a lot of data, then as computer scientists we’re very, very good—and it still, as you mentioned, took decades—at continuing to chip away at that with faster computers, more data, more sophisticated algorithms, and ultimately solving the problem.

However, in the case of natural language: If you and I, let’s say we’re having a conversation here on this podcast—who won that conversation? Let’s say I want to do a better job if you ever invite me for another podcast. How do I do that? And if my method for getting better involves looking at literally millions of training examples, you’re not going to do millions of podcasts. Right?

So you’re right, that a very different thing needs to happen when things are vaguer, or more uncertain, or more nuanced, when there’s less training data, etc., etc.—all these characteristics that make Aristo and some of our other projects very, very different than chess or Go.

So, where is Aristo? Give me a question it can answer and a question it can’t. Or is that even a cogent question? Where are you with it?

First of all, we keep track of our scores. So, I can give you an example in a second. But when we look at what we call ‘non-diagram multiple choice’—questions that are purely in language, because diagrams can be challenging for the machine to interpret—we’ve been able to reach very close to eighty percent correctness. Eighty percent accuracy on non-diagram multiple choice questions for fourth grade.

When you look at all the questions, we’re at sixty percent. Which is great in one sense, because when we started—with all these questions with diagrams and what’s called ‘direct answer questions’, where you have to answer with a phrase or a sentence, you don’t just get to choose between four choices—we were close to twenty percent. We were far lower.

So, we’ve made a lot of progress; that’s on the glass-half-full side. On the glass-half-empty side, we’re still getting a D on a fourth-grade science test. So it’s all a question of how you look at it. Now, when you ask, “What questions can we solve?” We actually have a demo on our website, at AllenAI.org, that illustrates some of these.

If I go to the Aristo project there, and I click on “live demo,” I see questions like, “What is the main source of energy for the water cycle?” Or even, “The diagram below shows a food chain. If the wheat plants died, the population of mice would likely _______?” So, these are fairly complex questions, right?

But they’re not paragraph-long, and the thing that we’re still struggling with is what we call ‘brittleness’. If you take any one of these questions that we can answer, and then change the way you ask the question a bit, all of a sudden we fail. This is, by the way, a characteristic of many AI systems, this notion of brittleness—where a small change that a human would say, “Oh, that’s no different at all,” can make a big difference to the machine.

It’s true. I’ve been playing around with an Amazon Alexa, and I noticed that if I say, “How many countries are there?” it gives me one number. If I say, “How many countries are there in the world?” it gives me a different number. Even though a human would see that as the same question. Is that the sort of thing you’re talking about?

That’s exactly the sort of thing I’m talking about, and it’s very frustrating. And, by the way, Alexa and Siri, for the people who want to take the pulse of AI—I mean, again, we’re one of the largest nonprofit AI research institutes in the world, but we’re still pretty small at 72 people… Alexa and Siri come from for-profit companies; there are thousands of people working on those, and it’s still the case that you can’t carry on a halfway decent dialogue with these programs.

And I’m not talking about the cutesy answers about, you know, “Siri, what are you doing tonight?” Or, “Are you better than Alexa?” I’m talking about, let’s say, the kind of dialogue you’d have with a concierge of a hotel, to help you find a good restaurant downtown. And, again, it’s because how do you score dialogues? Right? Who won the dialogue? All those questions, that are very easy to solve in games, are not even really well-posed in the context of a dialogue.

I penned an article about how—and I have to whisper her name, otherwise it will start talking to me—Alexa and Google Assistant give you different answers to factual questions.

So if you ask, “How many seconds are there in a year?” they give you different answers. And if you say, “Who designed the American flag?” they’ll give you different answers. And when you run it down—seconds in a year, you would think that’s objective, that there’s a right and a wrong—but one gives you a calendar year, and one gives you a solar year, which is a quarter-day different.

And with the American flag, if you think about it, one says Betsy Ross, and the other one says the person who designed the 50-star configuration of the flag, which is our current flag. And in the end, both times those were the questioner’s fault, because the question itself is inherently vague, right? And so, even if the system’s good, [if] the questions are poorly phrased, it still breaks, right? It’s still brittle.

I would say that it’s the computer’s fault. In other words, again, an aspect of intelligence is being able to answer vague questions and being able to explain yourself. But such a system, even if its fact store is enormous—and one day, it will certainly exceed ours—if all it can do when you say, “Well, why did you give me this number?” is say, “Well, I found it here.”

Then really it’s a big lookup table. It’s not able to deal with the vagueness, or to explain itself in a more meaningful way. What if you put the number three in that table? You ask, “How many seconds are there in a year?” The program would happily say, “Three.” And you say, “Does that really make sense?” And it would say, “Oh, I can’t answer that question.” Right? Whereas a person would say, “Wait a minute. It can’t be three seconds in a year. That just doesn’t make sense!” Right? So, we have such a long way to go.
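
A quick illustration of the seconds-in-a-year ambiguity discussed above, as a minimal sketch; the variable names are just for illustration, and the point is that both readings are defensible because the question itself underdetermines which kind of year is meant.

```python
# Two defensible readings of "How many seconds are there in a year?"
SECONDS_PER_DAY = 24 * 60 * 60            # 86,400 seconds in a day

calendar_year = 365 * SECONDS_PER_DAY     # 31,536,000 seconds (365-day calendar year)
solar_year = 365.25 * SECONDS_PER_DAY     # 31,557,600 seconds (365.25-day solar year)

# The quarter-day difference amounts to 21,600 seconds, i.e. 6 hours.
print(calendar_year, solar_year, solar_year - calendar_year)
```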

Right. Well, let’s talk about that. We’ve got three more projects to discuss, but you’re undoubtedly familiar with John Searle’s Chinese Room question, and I’ll set it up briefly—because what I’m going to ask you is: is it possible for a computer to ever understand anything?

The setup, very briefly—I mean, I encourage people to look it up—is that there’s a person in a room and he doesn’t speak any Chinese, and he’s given questions in Chinese, and he’s got all these books he can look things up in, but he just copies characters down and hands them back. And he doesn’t know if he’s talking about cholera or coffee beans or what have you. And the analogy is, obviously, that’s what a computer does. So can a computer actually understand anything?

You know, the Chinese Room thought experiment is really one of the most tantalizing and fun thought experiments in the philosophy of mind, and so many articles have been written about it, arguing this, that, or the other thing. In short, I think it does expose some of the issues, and, you know—the bottom line is, when you kind of look under the hood at this Chinese Room and the system there, you say, “Gosh, it sure seems like it doesn’t understand anything.”

And when you take a computer apart, you say, “Gosh, how could it understand? It’s just a bunch of circuits and wires and chips.” The only problem with that line of reasoning is, it turns out that if you look under the hood in a person’s mind—in other words, if you look at their brain, you see the same thing. You see neurons and ion potentials and chemical processes and neurotransmitters and hormones.

And when you look at it at that level, surely, neurons can’t understand anything either. I think, again, without getting to a whole other podcast on the Chinese Room, I think that it’s a fascinating thing to think about, but it’s a little bit misleading. Understanding is something that emerges from a complex technical system. That technical system could be built on top of neurons, or it could be built on top of circuits and chips. It’s an emergent phenomenon.

Well, that also would be another one, because I would then ask you, is it strong emergence or is it weak emergence? But, as I said, we’ve got three more projects to discuss. Let’s talk about Euclid.

Euclid is, really, a sibling of Aristo, and in Euclid we’re looking at SAT math problems. The Euclid problems are easier in the sense that you don’t need all this background knowledge to answer these pure math questions. You surely need a lot less of that. However, you really need to very fully and comprehensively understand the sentence. So, I’ll give you my favorite example…

This is a question that is based on a story about Ramanujan, the Indian number theorist. He said, “What’s the smallest number that’s the sum of two cubes in two different ways?” And the answer to that question is a particular number, which, again, the listeners can look up on Google. But, to answer that correctly, you really have to fully parse that rather long and complicated sentence and understand ‘the sum of two cubes in two different ways’.

What on earth does that mean?

And so, Euclid is working to have a full understanding of sentences and paragraphs, which are the kind of questions that we have on the SATs, whereas often with Aristo—and certainly, you know, with things like Watson and Jeopardy—you could get away with a much more approximate understanding: “this question is sort of about this.” There’s no ‘sort of’ when you’re dealing with math questions, and you have to give the answer.
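
For the curious, the number in the Ramanujan anecdote can be recovered with a short brute-force search. This is only a sketch; the bound of 20 on the cube roots is an arbitrary assumption, chosen to keep the search small while still covering the answer.

```python
# Find the smallest number expressible as the sum of two positive cubes
# in two different ways (the Ramanujan "taxicab" anecdote).
from collections import defaultdict

LIMIT = 20  # assumed upper bound on the cube roots; enough for this search

sums = defaultdict(list)
for a in range(1, LIMIT + 1):
    for b in range(a, LIMIT + 1):        # a <= b avoids counting (a, b) and (b, a) twice
        sums[a**3 + b**3].append((a, b))

candidates = [n for n, pairs in sums.items() if len(pairs) >= 2]
n = min(candidates)
print(n, sums[n])  # 1729 [(1, 12), (9, 10)]
```

Parsing the sentence correctly is the hard part for Euclid; the search itself, as the sketch shows, is trivial once the meaning has been pinned down.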

And so that is, as you say, a sibling to Aristo; but Plato, the third one we’re going to discuss, is something very different, right?

Right. Maybe if we’re using this family metaphor, Plato is Aristo’s and Euclid’s cousin, and what’s going on there is we don’t have a natural benchmark test, but we’re very, very interested in vision. We’ve realized that a lot of the questions that we want to address, a lot of the knowledge that is present in the world isn’t expressed in text, certainly not in any convenient way.

One great way to learn about the sizes of things, not just the sun and a nickel, but maybe even a giraffe and a butterfly… You’re not going to find the sentence that says, “A giraffe is much bigger than a butterfly.” But if you see pictures of them, you can make that connection. Plato is about extracting knowledge from images, from videos, from diagrams, and being able to reason over that to draw conclusions.

So, Ali Farhadi, who leads that project and who shares his time between us and the Allen School at the University of Washington, has done an amazing job generating result after result, where we’re able to do remarkable things based on images. And my favorite example of this—and you kind of have to visualize it: Just imagine drawing a diagonal line and then a ball on top of that line.

What’s going to happen to that ball?

Well, if you can visualize it, of course the ball’s going to roll down the line, and it’s going to roll downhill. It turns out that most algorithms are actually really challenged to make that kind of prediction, because to make that kind of prediction, you have to actually reason about what’s going on. It’s not just enough to say, “There’s a ball here on a line,” but you have to understand that this is a slope, and [that] gravity is going to come into play, and predict what’s going to happen. So, we really have some of the state-of-the-art capabilities, in terms of reasoning over images and making predictions.

Isn’t video a whole different thing, because you’re really looking at the differences between images, or is it the same basic technology?

At a technical level, there are many differences. But actually, the elegant thing about video is that, as you intimated, a video is just a sequence of images. It’s really our eye, or our mind, that constructs the continuous motion. All it is, is a number of images shown per second. Well, for us, it’s a wonderful source of training data, because I can take the image at Second 1 and make a prediction about what’s going to happen in Second 2.

And then I can look at what happened at Second 2, and see whether the prediction was correct or not. Did the ball roll down the hill? Did the butterfly land on the giraffe? So there’s a lot of commonalities, and video is actually a very rich source of images and training data.
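
One way to picture the “predict Second 2 from Second 1, then check” idea is self-supervised next-frame prediction. The sketch below is a toy illustration: the tiny convolutional model, the frame sizes, and the random stand-in video are assumptions made for the example, not a description of the actual Plato system.

```python
# Toy self-supervised learning loop: predict the next video frame from the current one.
import torch
import torch.nn as nn

class NextFramePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Small convolutional network: frame at time t -> predicted frame at time t+1
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFramePredictor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in "video": a batch of clips with shape (batch, time, channels, height, width).
video = torch.rand(8, 10, 3, 64, 64)

for t in range(video.shape[1] - 1):
    current, upcoming = video[:, t], video[:, t + 1]  # frame now, and the frame that follows
    prediction = model(current)                       # guess what happens next
    loss = loss_fn(prediction, upcoming)              # compare against what actually happened
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key point from the conversation survives the simplification: the next frame itself serves as free supervision for the prediction made on the current frame, so no human labeling is required.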

One of the challenges with images is—well, let me give an example, then we can discuss it… If I lived on a cul-de-sac, and let’s say the couple across the street were expecting, and the woman is nine months pregnant… And one time I get up at three in the morning and I look out the window and their car is gone: I would say, “Aha, they must have gone to the hospital.” In other words, I’m reasoning from what’s not in the image. That would be really hard, wouldn’t it?

Yes. You’re way ahead of Plato. It’s very, very true. But to anticipate that, you’d go to Semantic Scholar; I want to make sure that we get to that. With Semantic Scholar, a number of the capabilities that we see in these other projects come together. Semantic Scholar is a scientific search engine, it’s available 24/7 at semanticscholar.org and it allows people to look for computer science papers, for neuroscience papers. Soon we’re going to be launching the ability to cover all the papers in biomedicine that are available on engines like PubMed.

And what we’re trying to do there, though, is deal with the fact that there are so many, you know, over a hundred million scientific research papers, and more are coming out every day, and it’s virtually impossible for anybody to keep up. Our nickname for Semantic Scholar sometimes is Da Vinci, because we say Da Vinci was the last Renaissance man, right?

The person who, kind of, knew all of science. There are no Renaissance men or women anymore, because we just can’t keep up. And that’s a great place for AI to help us, to make scientists more efficient in their literature searches, more efficient in their abilities to generate hypotheses and design experiments.

That’s what we’re trying to do with Semantic Scholar, and that involves understanding language, and that involves understanding images and diagrams, and it involves a lot more.

Why do you think the semantic web hasn’t taken off more, and what is your prediction about the semantic web?

I think it’s important to distinguish between ‘semantics’, as we use it at Semantic Scholar, and ‘semantic’ in the semantic web. In Semantic Scholar, we try to associate semantic information with text. For example, this paper is about a particular brain region, or this paper uses fMRI methodology, etc. It’s pretty simple semantic distinctions.

The semantic web was a very rich notion of semantics that, frankly, is superhuman and is way, way, way beyond what we can do in a distributed world. So that vision by Tim Berners-Lee really evolved over the years into something called ‘linked open data’, where, again, the semantics is very simple and the emphasis is much more about different players on the web linking their data together.

I think that very, very few people are working on the original notion of the semantic web, because it’s just way too hard.

I’m just curious, and this is a somewhat frivolous question: the names of your projects don’t seem to follow an overarching naming scheme. Is that because they were created and named elsewhere, or what?

Well, it’s because, you know… If you put a computer scientist, which is me, in charge of branding, you’re going to run into problems. So, I think, Aristo and Euclid are what we started with, and those were, kind of, you know, roughly analogous. Then we added Plato, which is an imperfect name, but still roughly in the mythological world. And then Semantic Scholar really is a play off of Google Scholar.

So Semantic Scholar is, if you will, really the odd duck here. And when we had a project, we were considering doing work on dialogue—which we still are—we called that project Socrates. But then I’m also thinking “Right, dude. Do we really want, you know, all the projects to be named after men?” which is definitely not our intent. So, I think the bottom line is it’s an imperfect naming scheme—is what you’re keying into—and it’s all my fault.

So, the mission of the Allen Institute for AI is, quote: “Our mission is to contribute to humanity through high-impact AI research and engineering.” Talk to me about the “contribute to humanity” part of that. What do you envision? What do you hope comes of all of this?

Sure. So, I think that when we started, we realized that so often AI is vilified—particularly in Hollywood films, but also by folks like Stephen Hawking and Elon Musk—and we wanted to emphasize AI for the common good, AI for humanity, where we saw some real benefits to it.

And also, in a lot of for-profit companies, AI is used to target advertising, or to get you to buy more things, you know, in various ways, [or] to violate your privacy, if it’s being used by intelligence agencies or by aggressive marketing. And we really wanted to find places like Semantic Scholar, where AI can help solve some of humanity’s thorniest problems by helping scientists.

And so, that’s where it comes from; it’s a contrast to these other, either more negative uses, or more negative views of AI. And we’ve been really pleased that, since we were founded, organizations like OpenAI or the Partnership on AI, which is an industry consortium, have adopted missions that are very consistent and kind of echo ours, you know: AI to benefit humanity and society and things like that. So it seems like more and more of us in the field are really focused on using AI for good.

So you mentioned, you know, fear of AI—and you can kind of understand Hollywood; I mean, it’s drama, right… But that fear manifests in two different ways. One is what you alluded to, that it’s somehow bad, you know, Terminator or what have you. But the other one that is on everybody’s mind is: what do you think about AI’s effect on employment and jobs?

I think that’s a very serious concern. As you can tell, I’m not a big fan of the doomsday scenarios about AI. I tell people we should not confuse science with science fiction. But another reason why we shouldn’t concern ourselves with Skynet and doomsday scenarios is because we have a lot more realistic and pressing problems to worry about. And that, for example, is AI’s impact on jobs. That’s a very real concern.

We’ll see it particularly soon in the transportation sector, I predict, where truck drivers and Uber drivers and so on are going to be gradually squeezed out of the market, and that’s a very significant number of workers. And it’s a challenge, of course, to help these people: to retrain them, to help them find other jobs in an increasingly digital economy.

But, you know… In the history of the United States, at least over the past couple of hundred years, there have been a number of really disruptive technologies that have come along. I mean, you know, the electrification of industry, the mechanization of industry, the replacement of animal power with steam.

I mean, things really changed quickly. And unemployment never once budged because of that. Because what happens is, people just use the new technology. And isn’t it at least possible that, as we move along with the development of artificial intelligence, it actually is an empowering technology that lets people use it to increase their own productivity? Like, anybody could use it to increase their productivity.

I do think that AI will have that role, and I do think that, as you intimated, these technological forces have some real positives. So, the reason that we have, you know, phones and cars and washing machines and modern medicine and all these things that make our lives better and that are broadly shared through society, and so on, is because of technological advances. So I don’t think of these technological advances, including AI advances, as either a) negative; or b) avoidable.

If we say, “Okay, we’re not going to have AI,” or “We’re not going to have computers,” well, other countries will, and they’ll overtake us. I think that it’s very, very difficult, if not impossible, to stop broad-based technology change. Narrow technologies that are particularly, you know, terrible, like landmines or biological weapons, we’ve been able to stop. But I think AI isn’t stoppable because it’s much broader, and it’s not something that should be stopped; it’s not like that.

So I very much agree with what you said, but with one key caveat. We survived those things and we emerged thriving, but the disruption, over significant periods of time and for millions of people, was very, very difficult. Right as we went from a society that was, whatever, you know, ninety-something percent agricultural to one where only two percent of workers are in agriculture—people suffered and people were unemployed.

And so, I do think that we need to have the programs in place to help people with these transitions. And I don’t think that they’re simple because some people say, “Sure, those old jobs went away, but look at all these great jobs. You know, web developer, computer programmer, somebody who leverages these technologies to make themselves more effective at their jobs.” That’s true, but the reality is a lot more complicated. Are all these truck drivers really going to become web developers?

Well, I don’t think that’s the argument, right? The argument is that everybody moves one small notch up. So somebody who was a math teacher in a college maybe becomes a web developer, and a high school teacher becomes the college teacher, and then a substitute teacher gets the full-time job.

Nobody says, “Oh, no, no, we’re going to take these people, you know, who have less training and we’re going to put them in these highly technical jobs.” That’s not what happened in the past either, right? Like, everybody just has to do something… The question is can everybody do a job a little more complicated than the one they have today? And if the answer to that is yes, then do we have a big disruption coming?

Well, first of all, you’re making a fair point. I was oversimplifying by mapping the truck drivers to the developers. But, at the same time, I think we need to remember that these changes are very disruptive. And so, the easiest example to give, because it’s fresh in my mind and, I think, other people’s minds—let’s look at Detroit. That wasn’t technological change; it was more due to globalization and to the shifting of manufacturing jobs out of the US.

But nevertheless, these people didn’t just each take a little step up or a little step to the right, whatever you wanted to say. These people and their families suffered tremendously. And it’s had very significant ramifications, including Detroit going bankrupt, including many people losing their health care, including the vote for President Trump. So I think if you think on a twenty-year time scale, will the negative changes be offset by positive changes? Yes, to a large extent. But if you think on shorter time scales, and you think about particular populations, I don’t think we can just say “Hey, it’s going to all be alright.” I think we have a lot of work to do.

Well, I’m with you there, and if there’s anything that I think we can take comfort in, it’s that the country did that before. There used to be a debate in the country about whether post-literacy education was worth it. This was back when we were an agricultural… And you can understand the logic, right? “Well once somebody learns to read, why do you need to keep them in school?”

And then, people said, “Well, the jobs of the future are going to need a lot more skills.” And so that’s why the United States became the first country in the world to guarantee a high school education to every single person. And it sounds like you’re saying something like that, where we need to make sure that our education opportunities stay in sync with the requirements of the jobs we’re creating.

Absolutely. And then, it’s just a question of… I think we are agreeing that there’s a tremendous potential for this to be positive, you know? Some people, again, have a doomsday scenario for jobs and society. And I agree with you a hundred percent; I don’t buy into that. And it sounds like we also agree, though, that there are things that we could do to make these transitions smoother and easier on large segments of society.

And it definitely has to do with improving education and finding opportunities, etc., etc. So, I think it’s really a question of how painful this change will be, and how long it will take until we’re at a new equilibrium that, by the way, could be a fantastic one. Because, you know, the interesting thing about the truck jobs, and the toll jobs that went away, and a lot of other jobs that went away, is that some of these jobs are awful.

They’re terrible, right? People aren’t excited about a lot of these jobs. They do them because they don’t have something better. If we can offer them something better, then the world will be a better place.

Absolutely. So we’ve talked about AGI… I’ve referenced it. I assume you think that we’ll eventually build a general intelligence.

I do think so. I think it will easily take more than twenty-five years; it could take as long as a thousand years. But I’m what’s called a materialist, which doesn’t mean that I like to shop on Amazon; it means that I believe that, when you get down to it, we’re constructed out of atoms and molecules, and there’s nothing magical about intelligence.

Sorry—there’s something tremendously magical about it, but there’s nothing ineffable about it. And, so, I think that, ultimately, we will build computer programs that can do and exceed what we can do.

So, by extension, you believe that we’ll build conscious machines as well?

Yes. I think consciousness emerges from it. I don’t think there’s anything uniquely human or biological about consciousness.

And the range of time that people think it will be before we create an AGI, in my personal conversations, ranges from five to five hundred years. Where in that spectrum would you cast your ballot?

Well, I would give anyone a thousand-to-one odds that it won’t happen in the next five years. I’ll bet ten dollars against ten thousand dollars, because I’m in the trenches working on these problems right now and we are just so, so far from anything remotely resembling an AGI. And I don’t know anybody in the field who would say or think otherwise.

I know there are some, you know, so-called futurists or what have you… But people actively working on AI don’t see that. And furthermore, even if somebody says some random thing, then I would ask them, “Back it up with data.” What’s your basis for saying that? Look at our progress rates on specific benchmarks and challenges; they’re very promising but they’re very promising for a very narrow task, like object detection or speech recognition or language understanding etc., etc.

Now, when you go beyond ten, twenty, thirty years, who can predict what will happen? So I’m very comfortable saying it won’t happen in the next twenty-five years, and I think that it is extremely difficult to predict beyond that, whether it’s fifty or a hundred or more, I couldn’t tell you.

So, do you think we have all the parts we need to build an AGI? Are we on a path, or is it going to take some breakthrough we can’t even fathom right now? Or, with enough deep learning and faster processors and better algorithms and more data, could you say we are on a path to it now? Or is your sole reason for believing we’re going to build an AGI that you’re a materialist—you know, we’re made of atoms, so we can build something made of atoms?

I think it’s going to require multiple breakthroughs which are very difficult to imagine today.

And let me give you a pretty concrete example of that.

We want to take the information that’s in text and images and videos and all that, and represent that internally using a representation language that captures the meaning, the gist of it, like a listener to this podcast has kind of a gist of what we’ve talked about. We don’t even know what that language looks like. We have various representational languages, none of them are equal to the task.

Let me give you another way to think about it as a thought experiment. Let’s suppose I was able to give you a computer, a computer that was as fast as I wanted, with as much memory as I wanted. Using that unbelievable computer, would I now be able to construct an artificial intelligence that’s human-level? The answer is, “No.”

And it’s not about me. None of us can.

So, if it were really just about the speed and so on, then I would be a lot more optimistic about doing it in the short term, because we’re so good at making it run two times faster, making it run ten times faster, building a faster computer, storing information. We used to store it on floppy disk, and now we store it here. Next we’re going to be storing it in DNA. This exponential march of technology under Moore’s Law—it keeps getting faster and cheaper—is, in that sense, phenomenal. But that’s not enough to achieve AGI.

Final question… Earlier you said that you tell people not to confuse science with science fiction. But, about science fiction… Is there anything that you’ve seen, read, or watched that you actually think is a realistic scenario of what we may be able to do, what the future may hold? Is there anything that you look at and say, well, it’s fiction, but it’s possible?

You know, one of my favorite pieces of fiction is the book Snow Crash, where it, kind of, sketches this future of Facebook and [the] future of our society and so on. If I were to recommend one book, it would be that. I think a lot of the books about AI are long on science fiction and short on what you call hard science fiction; short on reality.

And if we’re talking about science fiction, I’d love to end on a note where, you know, there’s this famous Arthur C. Clarke saying that “any sufficiently advanced technology is indistinguishable from magic.” So, I think, to a lot of people AI seems like magic, right? We can beat the world champion in Go—and my message to people, again, as somebody who works in the field day in and day out, is that it couldn’t be further from magic.

It’s blood, sweat, and tears—and, by the way, human blood, sweat, and tears—of really talented people, to achieve the limited successes that we’ve had in AI. And AlphaGo, by the way, is the ultimate illustration of that. Because it’s not that AlphaGo defeated Lee Sedol, or the machine defeated the human. It’s this remarkably talented team of engineers and scientists at Google DeepMind, working for years; they’re the ones who defeated Lee Sedol, with some help from technology.

Alright. Well, that’s a great place to leave it, and I want to thank you so much. It’s been fascinating.

It’s a real pleasure for me, and I look forward to listening to this podcast and your other ones, and to reading your book.

Thank you.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.