Episode 72: A Conversation with Irving Wladawsky-Berger

In this episode Byron and Irving discuss the complexity of the human brain, the possibility of AGI and its origins, the implications of AI in weapons, and where AI has taken us and where else it could take us.


Guest

Irving Wladawsky-Berger has a PhD in Physics from the University of Chicago. He is a research affiliate with the MIT Sloan School of Management, a guest columnist for the Wall Street Journal and CIO Journal, an adjunct professor at Imperial College London, and a fellow of the Center for Global Enterprise.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today our guest is Irving Wladawsky-Berger. He is a bunch of things. He is a research affiliate with the MIT Sloan School of Management. He is a guest columnist for the Wall Street Journal and CIO Journal. He is an adjunct professor at Imperial College London. He is a fellow for the Center for Global Enterprise, and I think a whole lot more things. Welcome to the show, Irving.

Irving Wladawsky-Berger: Byron, it's a pleasure to be here with you.

So, that's a lot of things you do. What do you spend most of your time doing?

Well, I spend most of my time these days either in MIT-oriented activities or writing my weekly columns, [which] take quite a bit of time. So, those two are a combination, and then, of course, doing activities like this – talking to you about AI and related topics.

So, you have an M.S. and a Ph.D. in Physics from the University of Chicago. Tell me... how does artificial intelligence play into the stuff you do on a regular basis?

Well, first of all, I got my Ph.D. in Physics in Chicago in 1970. I then joined IBM research in Computer Science. I switched fields from Physics to Computer Science because as I was getting my degree in the ‘60s, I spent most of my time computing.

And then you spent 37 years at IBM, right?

Yeah, then I spent 37 years at IBM working full time, and another three and a half years as a consultant. So, I joined IBM research in 1970, and then about four years later my first management job was to organize an AI group. Now, Byron, AI in 1974 was very very very different from AI in 2018. I'm sure you're familiar with the whole history of AI. If not, I can just briefly tell you about the evolution. I've seen it, having been involved with it in one way or another for all these years.

So, back then did you ever have occasion to meet [John] McCarthy or any of the people at the Dartmouth [Summer Research Project]?

Yeah, yeah.

So, tell me about that. Tell me about the early early days in AI, before we jump into today.

I knew people at the MIT AI lab... Marvin Minsky, McCarthy, and there were a number of other people. You know, what's interesting is at the time the approach to AI was to try to program intelligence, writing it in Lisp, which John McCarthy invented as a special programming language; writing in rules-based languages; writing in Prolog. At the time – remember this was years ago – they all thought that you could get AI done that way and it was just a matter of time before computers got fast enough for this to work. Clearly that approach toward artificial intelligence didn't work at all. You couldn't program something like intelligence when we didn't understand at all how it worked...

Well, to pause right there for just a second... The reason they believed that – and it was a reasonable assumption – is that they looked at things like Isaac Newton coming up with three laws that covered planetary motion, and Maxwell, and other physical systems that were governed by only two or three simple laws, and they hoped intelligence was the same. Do you think there's any aspect of intelligence that's really simple and we just haven't stumbled across it, something you could just iterate over and over again? Any aspect of intelligence that's like that?

I don't think so, and in fact my analogy... and I'm glad you brought up Isaac Newton. This goes back to physics, which is what I got my degrees in. This is like comparing classical mechanics, which is deterministic. You know, you can tell precisely, based on classical mechanics, the motion of planets. If you throw a baseball, where is it going to go, etc. And as we know, classical mechanics does not work at the atomic and subatomic level.

We have something called quantum mechanics, and in quantum mechanics, nothing is deterministic. You can only tell what things are going to do based on something called a wave function, which gives you probability. I really believe that AI is like that, that it is so complicated, so emergent, so chaotic, etc., that the way to deal with AI is in a more probabilistic way. That has worked extremely well, and the previous approach, where we tried to write things down in a deterministic way like classical mechanics, just didn't work.

Byron, imagine if I asked you to write down specifically how you learned to ride a bicycle. I bet you won't be able to do it. I mean, you can write a poem about it. But if I say, “No, no, I want a computer program that tells me precisely...” If I say, "Byron I know you know how to recognize a cat. Tell me how you do it.” I don't think you'll be able to tell me, and that's why that approach didn't work.

And then, lo and behold, in the ‘90s we discovered that there was a whole different approach to AI based on getting lots and lots of data in very fast computers, analyzing the data, and then something like intelligence starts coming out of all that. I don't know if it's intelligence, but it doesn't matter.

I really think that to a lot of people the real point where that hit home is when, in the late ‘90s, IBM's Deep Blue supercomputer beat Garry Kasparov in a very famous [chess] match. I don't know, Byron, if you remember that.

I remember Kasparov said, "Well at least it didn't enjoy beating me." Going back a second to the cat example... So, the way we teach machines to recognize cats... you're right, we feed them a bunch of cats and a bunch of not-cats, and they split the cats up into ever smaller groups of pixels and all of that... but that isn't the way people recognize cats. How do you think people recognize cats?

Wait, wait, wait. That's a very good question. In fact, MIT, which I'm affiliated with, not only has a lot of great work in artificial intelligence, they have a lot of great work in brain science, in cognitive computing... how people recognize cats and things like that. I remember attending a conference at MIT on AI. I think it was November of 2017. One of the top people doing cognitive science gave a talk, and he said that in his opinion a three-month-old is smarter than any AI he has seen. And here is the reason: Whereas in AI you generally start with nothing, and then you have to feed it data, with humans we start with millions of years of evolution, so our brains have evolved to quickly learn certain things. Otherwise we wouldn't have survived. And so the three-month-old infant, let alone a six-month-old or a year-old, has a structure of the brain that sort of makes it much easier to recognize the faces of his or her parents, as well as a cat and other things. In fact, we don't know exactly how that is done.

Part of this initiative at MIT, called the MIT Quest for Intelligence, is to study both AI and brain science and try to learn from each. But I think it's important to note that they are very different, and we know very little about how the brain works.

So, you say that if the machine can tell the difference... You said something a moment ago to the effect that, “The machine can spot a cat, and is that intelligence? I don't know, but it doesn't really matter.” Why do you say it doesn't matter?

Let me tell you. I meant something really simple. When a plane flies, we can argue, “Wait a second, Irving, I don't know if the plane is flying. It's aerodynamic; it does it very differently from a bird.” And I would say, “Yes but it's engineering. The goddamn plane got off the ground and took you from...” You live in San Francisco, right?

Austin, Texas.

Oh, my god, you're in Austin, Texas. Okay, it will take you from Austin to London. But that's engineering. And, when we have an exquisite deep-learning algorithm that you have trained by showing lots and lots of pictures of cats and not-cats as you said, and now that algorithm knows how to do that... that is great engineering.

When you have a machine translation program that you have trained to translate from English to Spanish by showing it lots and lots and lots of documents in both English and Spanish and then it learns how to do that... that is exquisite engineering. If you ask me, "Yes, Irving, but is it also intelligent?" Now, we are better off going out for a really good micro-brewery beer in Austin, because this is a philosophical question. I'm trying to make a distinction between really good engineering to solve problems and the more important philosophical questions about the nature of intelligence.

The way that we do AI right now, the way that you were describing it, the machine learning specifically... what are we able to do with that? How far are we able to take that? Can it be creative? Can it pass the Turing test? Can it develop empathy? How far is that one little trick of, let's just give it a million cats and it'll learn to identify cats, how far can that trick be taken?

That's an excellent question, and there have been a number of very interesting recent papers on the limits of machine and deep learning. The consensus in these papers is that machine learning methods – of which deep learning is one – work best when you have huge amounts of data and a relatively static problem that you're trying to solve. By static, I mean that what a cat looks like doesn't change every day. So, if you take huge amounts of data and you feed it in, then you don't have to keep retraining every day. That would be good enough.

Or if you want to do machine translation from English to Spanish, the structure and grammar of those languages don't change that much. So that works really well. Now, once you start getting into human behaviors, you're in a different place. One of the people I work with at MIT, Professor Sandy Pentland at the Media Lab, has done really good research in this, and he wrote a book called "Social Physics." What happens with human behavior is that it changes often. Humans at some level "are a pain in the ass." Their interactions have a lot of variance, and by variance I mean they change all the time. So the classic approaches of deep learning, where you have a static set of data, don't work quite as well.

So, Sandy has developed some methodologies to try to figure out some principles of human behavior that can be applied to complement machine-learning methods. If you look at his Social Physics book and other papers, they have found that humans learn from each other. That's actually part of our evolutionary selection, and so humans tend to group themselves in clusters of common behaviors. If you find that Irving seems to be part of a group that likes certain things, and that same group also likes other things, the probability is higher that Irving will like those other things too... just a simple example.

Now, empathy... now we're into really murky territory because it's very difficult... Do I think deep learning can get us empathy? I don't know. But my expectation is that we will complement the more brute-force machine learning methods with principles of human behavior like Sandy has developed in Social Physics, or that other people will develop, to try to have something like empathy. I mean, it will be the combination of multiple methods that will get us there. I don't think it will just be deep learning. By the way, that's perfectly fine. Nobody has ever said that AI, even looking at it as engineering, must only work with machine learning. It's perfectly fine to say it uses multiple methods, and that it's the combination of those methods that solves problems. That's my feeling, but remember it's a feeling. Nobody really knows, and there are multiple points of view.

So, earlier you drew the analogy of classical physics to quantum mechanics, quantum physics. Did you mean that literally? Do you believe, like Penrose and others believe, that human intelligence and consciousness specifically are in fact quantum phenomena? Or are you using that purely as an analogy?

I meant it as an analogy.

Perfect, so my question to you...

Wait, and the analogy was deterministic versus probabilistic. That was my analogy.

Right. So my question to you is... Do you believe that the human brain is deterministic or not?

I suspect it's not.

So, what would it be?

Remember, this is the beauty of it all... that you can say with different probabilities, “How did Byron feel this morning? Was he happy? Was he sad? Why was he happy? Why was he sad?” We might be able to look at the kinds of days Byron had in the last week. Maybe we can say with a certain probability, this is how Byron is going to feel. Somebody can say, “Well, Irving, Byron was drinking last night, and he has a bit of a hangover.” That's the human world that affects our whole behavior, especially our brain. But you can say with certain probabilities, this is how Byron is likely to behave. I think that's human. That's what it means to be human.

The analogy with quantum mechanics breaks down because I don't think, at least so far, we don't have anything as elegant as Schrödinger’s equation or the kinds of equations that I was working with on getting my PhD in Physics at Chicago, trying to do computing. But, is it possible that in 40-50 years, we'll discover something like that? It's possible, but I don't know. So it's more of an analogy that human behaviors, especially for groups... it's easier and can best be described in a probabilistic way. And for individuals it's really difficult... really really really difficult.

But I guess the question is, at its core, in your brain there's a lot of biology, and all of biology is simply chemistry, and all chemistry is simply physics. And if it's physics, if it's classical physics, your brain, at its core, is deterministic.

It's not classical physics. I don't know what kind of physics it is, it's not classical physics. Also, remember, once you get to incredibly complex systems, then it's not quantum mechanics, but sort of chaotic behaviors take over. Emergent, dynamic behaviors... emergent behaviors take over. Let me give you an example. If you try to predict the traffic in New York City this afternoon, that's not quantum mechanics. But it's such a complex system that it makes it difficult to predict. You can say with a certain probability these are areas that are likely to be one way, that is likely to be another way, based on history... whatever. But you know, all you need is a car to break down; all you need is Trump to be in town. All you need is all kinds of things to happen, and then you get cascading effects. Once one of these things happens, traffic slows down here and then it starts slowing down someplace else, and you get a very complex system like that. And, I believe the human brain – not just the human brain, actually our whole biology – has that kind of unpredictable complexity, which is not quantum mechanics, but it's more in the world of dynamic systems. Am I making any sense?

Well, I'm still trying to decipher. You could say the same thing about a hurricane. Where is the hurricane going to go if it has a 20 percent chance of going here and a 40 percent chance... but in the end, nobody says that it isn't just the interplay of classical physics. It's just too complex to predict, but nobody says something else is going on...

That's what I'm saying. In fact, a hurricane is perfect, or the weather in general is perfect. It is not quantum mechanics, but it's an interplay of so many things that you can only predict its path with probability. You give people a cone of probability, and the closer you get to the time, the more you know. And if you want to know if it's going to rain in Austin later today, if you look at weather.com, it will give you probabilities. So we are in that probability stage because the systems are so incredibly complex and dynamic, rather than quantum mechanics.

So, do you think we're going to build a general intelligence in the next... well, do you think we're going to build one at all, and if so...

I don't know.

Make the case that it's impossible. What would that...

No, well... I don't know if I can make the case that it's impossible. I mean, you know... the reason I cannot make the case it's impossible... if you asked me right now, let me go back to physics. If you asked me right now, “Do you think we'll find that Einstein's theory of relativity, which says the speed of light is a constant all the time, will hold for eternity?” Well, it's held so far. Nobody's been able to do an experiment that shows something went faster. Every so often people think they've found some example of something that maybe went faster than the speed of light, but so far so good. And everybody in physics agrees that a physics principle is only as good as whether you've been able to find something that violates that principle. And if you find something that violates it, then we are into a new world of physics.

I think that the proof of the pudding here is to build something that somebody will say is AGI, artificial general intelligence. Here is what I think will happen. We'll keep making progress on AI, and we'll keep making increasingly smarter AI systems, and we will... Right now for example one of the differences between a really good AI system and a human is that the really good AI systems tend to be good at one thing, whether it's recognizing human faces or doing machine translation or understanding natural language. You tend to build them with a very focused objective, and we don't have systems today that can do all those things at once.

If you asked me right now, “Gee, Irving, I heard that you're a Yankee fan. Are the Yankees going to win the one-game playoff they'll have against Oakland?” I don't know the answer. But, notice, we can go from AI to baseball very quickly. We don't have a machine that knows how to do that. Could we in 10-20 years learn how to develop machines that can switch between topics? Maybe we can. I don't know by the way whether those will be useful machines because the most useful machines would be great for a specific topic. But it's an interesting research project. And then we get from there incrementally to making them more and more... And, Byron, it could be that achieving human level intelligence, that every time we think we’re there, we find something the machines cannot yet do, and we keep investigating. And whether we're ever there or not, I have no idea.

But is it possible that we haven't even... because the way you say that, it sounds like you think that a general intelligence evolves out of sufficiently sophisticated narrow intelligences... that you teach it how to play the game of Go, and then you teach it how to make coffee, and then you teach it how to pick stocks, and then eventually it can do anything. But is it possible that you can pick stocks and you can play Go and all these other things... but that isn't how your intelligence comes about. It isn't that you learned 100 different things and somehow the sum of that is more than 100. So is it possible that a general intelligence – we haven't even started working on it yet – requires a whole different approach?

It's quite possible. What I really meant to say is, my expectation is, we will get to [an] increasingly intelligent machine that can address multiple problems in this incremental way. I've been around technology for a long time, and I've seen the progress happen incrementally. We had networking; then we had the Internet; then the Internet got better; then it was able to do more and more things. But, you have one thing, and then you want another and then another. And I suspect that’s the same with computing.

We had computers; then we had faster computers; then we had parallel computing; then we had special accelerators for parallel computers... and on and on and on. You may say, “Why are you being so pedantic? You keep coming back to the fact that you look at AI from an engineering point of view.” Because that's my goddamn experience in how we have built machines, in how we have made progress.

And remember... I suspect you will agree with me that we've made tremendous progress with the Internet, with computing and with things like that. So now is it conceivable that something will come out of left field that is absolutely different from anything we've seen before? Now we're back into... Is it possible that we'll find a whole new way of looking at physics, where the speed of light is not the limit? Is it possible? Yes, it is possible. What would it look like and so on? I don't have the faintest idea.

So, I'm being pedantic because otherwise you shouldn't be talking to me. If you said, "Well, Irving, Kurzweil said that by 2045 machines will take over because computers are getting so fast,” I don't know why Kurzweil says that. I would say, "Byron, go talk to Ray Kurzweil, who is a brilliant man, who's made huge accomplishments. Kurzweil is really smart, so go ask him.” I just don't know... his way of thinking that this happens is not something that I know how to think about.

What are your thoughts about the use of artificial intelligence in weapons systems?

You know, I think it will happen... well, not only will it happen, I think it's already happened. I suspect drones and so on already have AI, and I suspect that it will be difficult to know at what point you're using analytical data science, and at what point you're using AI. Now, if you said to me – and I suspect this would be a follow-up question – “Does that mean that we shouldn't worry about AI in warfare?” Remember, we worry a lot about the use of chemicals in warfare, and in fact there are laws that I think we started [following], I believe it was after World War I, about the use of all kinds of chemicals to kill people – [most recently] with [the incident in] Syria. So, I'm hoping that we will be able to come up with similar agreements among nations about the use of...

But, why would you want to do that? Like, if artificial intelligence is used to... So, we've had landmines for 100 years, and they're powered by AI, right? If something weighs more than 50 pounds, blow up. If they made a landmine that had a camera and could make sure it was a soldier that was standing on it before it blew up, why is that not better?

I don't know the answer, but now we're into... If the object of war is to kill your enemy, what's wrong with killing your enemy with mustard gas, which is illegal? And people debated that. Remember, this is not an area I spend any time thinking [about], to be honest. But I do know that people said, shooting is okay, bombing with airplanes is okay, but using mustard gas and other things like that is not okay. My expectation is that some such agreements will be made – that up to a certain point, like you just mentioned, a landmine with a sensor, maybe one that takes a picture or whatever, is okay because we've been using it. But having a war, like in Star Wars, where the two armies are all robotic and you have robotic drones, and there are no humans involved except they're giving directions from, I don't know, Nebraska or wherever the drones are flown from... I suspect that we will come up with agreements on what is okay and what's not okay, like we did in chemical warfare. And there are a number of organizations working on that. I just went to a meeting yesterday in New York organized by some people associated with the UN on things like that... not just warfare, but governmental uses of AI. I'm hoping that will happen.

And to be clear, I'm not advocating for it. I'm trying to understand... and landmines are illegal, I should point that out. I'm more trying to determine on what basis people argue against it.

Let me tell you. I find that the best way to predict the future in a very complex area is to look at history, because at least history gives me some concrete examples of how humans behaved in the past in relatively similar environments. So, my best way of predicting what we will do about AI and warfare is to look at what we did with chemical weapons... and, by the way, what we did with nuclear weapons. You know, we've done a pretty good job in trying to make sure we don't use nuclear weapons. I think AI will have a path similar to that. That's my expectation. I also look at where we are with cybersecurity right now and cyber-attacks. As you know, you can do incredible damage with a hack and a cyber-attack. Will countries be able to get their act together in saying, “This is okay; this is not okay”? I hope so, but we haven't done that yet. But I really hope so.

So, anybody who reads any of my books or reads my writing, knows I'm an optimist. I do believe that technology is this fascinating thing that we use to multiply what we are able to do, and that over the long span of history we've made progress...

By the way, I am also an optimist. I agree with you.

Excellent, you can detect there is about to be a “but” coming up here in a second. I believe we use this technology to feed people and make the world better, and that there are more people who want to build a better world than people who want to destroy it. But, you know, we've only ever had privacy in this world because there's so many of us. You couldn't listen to every phone conversation. You couldn't follow everybody all the time. You couldn't track everybody every minute. With this technology, all of a sudden we can, can't we? Like, you can record every phone conversation, transcribe and data mine them. AI can read lips, so every camera, even without a microphone, can listen. We can track everything, and the same tools we use to figure out what drugs to prescribe for cancer can be used to look for political opponents. My question to you is: Do you worry about the future of privacy, or is privacy kind of...?

It's huge. Of course, I do. It's one of the major activities I'm involved [in] with some of the groups I work with at MIT. Yes, we should worry. With any technology, the way to make progress is [to] anticipate the kinds of problems you are mentioning, whether it is their use in warfare, or whether it is their use to violate people's privacy, or [to] try to influence elections or all kinds of terrible things like that. Then [we] do the research and come up with other technologies that can counterbalance those, and come up with policies to make that illegal. I believe that's the only way we've made progress – to anticipate problems, and then through R&D and good policies, try to get around that. So far, humanity has done a pretty good job in doing that, as you said. You know, we haven't blown ourselves up and haven't done many bad things, because I think we tend to do the right things eventually.

What do you think though about governments around the world that make no secret they want to use these technologies to shape, in a positive way, the behavior of their citizens... to basically reward people who do actions deemed in support of the country and penalize those who do actions that are not in support of the country? So, from their point of view, it's good intentions.

I don't believe it's good intention. I'm a huge believer in democracy, and I'm a huge believer in the free market. A totalitarian government says, “We know better; you should trust us, and you should do what we tell you...” It doesn't even have to be totalitarian. Top-down government... I don't think in the end those countries will be as successful as countries like ours when we are at our best. We have a Constitution, and we don't want kings or super-powerful prime ministers or presidents. I believe that. But what happens is things play out, and you see which countries do better, and you see what has to happen.

I honestly don't think the questions you asked concerning AI would be that much different from things you could have asked about many technologies in the last 100-150 years. I mean, we've been living with major advances in technologies at least for the last 150 years with the advent of the Industrial Revolution.

And in many ways, the digital economy and technologies like AI and blockchain and other major technologies are an evolution of advances in technology. And every time you get a new technology, people said, "My god, that feels magical." We think our age is the most advanced, but every age has felt that way about its new ideas. Well, remember, 100 years ago electricity, cars, airplanes... that must have felt pretty magical to the people living then. They adjusted, and airplanes had a terrible impact on warfare, and some very bad things happened, especially in the ‘30s. But humanity survived. I go back to those experiences to try to anticipate the next 30 years, say. That's my point of view.

So, you've been around in this industry so long, I assume you remember Weizenbaum and ELIZA?

Yes, I do.

So, [Joseph] Weizenbaum had a deep concern about treating these technologies... like giving them human names, or having people confide in them. You could extend that to today. We make devices that speak in human voices and have human names. We interrupt them when they're talking, and we're rude to them. There have been some really interesting things... there was a story in Japan about these kids that would abuse these robots in the shopping mall. They ended up having to program the robot so that if it saw a bunch of short people (i.e. kids), and there was no tall person with them, it would run off and find a tall person, because these kids would torment it. Asked later, the kids said that they thought the robot was really in distress. So, do you worry that having technologies that behave like humans, that we speak to, that we address by name and all of that, may in some way have a corrosive effect on the notion of human rights... that I can beat up this robot that looks like a person, or I can interrupt this system that sounds like a person or what have you, or do you not worry about that?

Yeah, I think that's a minor worry, to be honest. I mean let me give you an example. Yesterday I had to go someplace in the morning. I live in Connecticut, and the traffic in the morning is terrible during rush hour. I used my Google Maps to guide me. It did a good job, but then at one point it recommended a certain route that I was wondering if I should take because it seemed a little too fancy. I said, “Okay, let it do that,” and I kept cursing at the goddamn navigation because the route that it took me – as opposed to the one I was ready to take – turned out to be very frustrating. Now, you know, I never got an email from Google saying, “Stop cursing at Google Maps.” I think that's a minor thing.

Do I think it's good that kids learn etiquette, not to be nasty to each other, let alone other things? Absolutely. I think that's perhaps more important than ever... But it's always been very important that you behave well, and the fact that information is so available is part of the check and balance.

You should have good behavior because it's a little bit harder to keep secrets if you don't behave well, and the consequences can be not so good. So, do your best to behave in a reasonable way. And does it look good to be abusing machines? I mean, it's a minor thing, but it's better if you don't do that. That's what I would say.

So, do you have an opinion on whether machines can someday be conscious? Whether they can experience the world and therefore experience pain?

I honestly think that we will need to... remember, the machine will only do or learn to do what we feed it data about. One way or another, either we discover some principles we use to program it, or we will have it learn through data. So, it depends what we mean by developing consciousness. My expectation is that it will continue to be a philosophical debate like the one we're having right now, whether these much more intelligent robots 30 years from now are really upset because the kids are behaving badly. Is it possible that we'll teach the robot to get upset when something like that happens? It's possible, and it's possible we may do that because we want the kids not to do that. So, when the robot gets upset it will call a human being, will tell the kids, “Please stop, this is not good behavior.” Maybe it will call 911...

Hold on, I don't want to speed past that point. What you said at the beginning of our conversation was, in these complex systems, emergence happens and these things come about. And what I just heard you say now was that the computer will only do what we program it to do and nothing more.

No, I didn't say that. A computer, depending on the circumstances, will behave in different ways. There will be unintended consequences. Maybe the computer will call 911 because it confused somebody trying to be nice to me with being in danger. But, whether we call that developing consciousness or not, that's more on the philosophical side. Whether I think it will be "more aware of its environment" in the same sense my car is more aware of its environment today than the cars I've had in the past because it knows if I'm staying in my lane; it knows how close I'm getting to the other car; it beeps me if I get too close for the speed I'm going and so on...

But to be clear, the car isn't aware of anything. Nobody thinks it is. That's a linguistic convenience – that we say the car sees. But there's a big difference between, "I touch something hot and I feel pain," but in my oven, I can put in a computer that if a temperature sensor gets to 500 degrees it can play a WAV file of a person going, "Ow! Ow! Ow!" Nobody thinks that computer is really feeling that pain. The question is: Will there be a day when the computer will say, "Ouch, that hurt"?

This is my expectation. In 30 years, that will not be any different from my car yelling, "Stop!" if I get too close to a car. That's my expectation. It is a machine that is doing what it's supposed to do. It's a very complex machine; it gets lots of sensory input; it reacts to...

Are people machines?

No, I don't think so. I view it differently, and I suspect that it's because of evolution. I mean, at that point, I would say the millions and millions and millions of years of evolution and natural selection have made us into something that is different from a machine. That's what I think.

Do you worry about the impact of these technologies on jobs and employment, or do you think they're fundamentally...?

Oh, yes. Huge! I do. In fact, if you look at my columns, I write quite a bit about that. Again, one of the major [things] I'm involved with at MIT, called the Initiative on the Digital Economy, does quite a bit of work on the future of work and things like that. The consensus of the economists that I trust the most is that every time we've had a major technology advance, many jobs go away, but many new jobs and new industries get created. There is typically a period that could be several decades where the people who were doing tasks that these new technologies automated will go through great pain. Eventually, the new jobs get created.

Now, often the same people who lost the job don't have the skills to take on the new jobs that got created. That's why, by the way, one of these things that you see becoming more and more important over the last 200 years has been education... because the more education you have, the more flexibility you have to be adaptable to multiple types of jobs. So, if what you used to do goes away, after a while you learn to do new things, but...

Let me jump in there for a moment. I've tried hard to figure out what the half-life of a job is, and I think it's about 45 years. I think every 45 years across the last 250 years of this country's history, half the jobs vanish. That's just how it goes. Furthermore, in the United States in the last 250 years unemployment has never been outside of 5-10 percent – other than the Great Depression, which nobody says was caused by technology. But even when electricity comes out, even when steam power replaces animal power, even when the assembly line comes out, we never had unemployment spike because of those things. And finally, we've had a history of rising wages for 250 years. So somehow, we've maintained full employment...

I agree, and I am an optimist like you. And if you said to me, “Well, does this differ because it's AI?” No, I don't think it's different, and it doesn't matter what I think. But I've been in meetings, in conferences with Nobel Prize-winning economists who will say the same thing, that it's likely to also work out.

However, this is very important, it doesn't mean that millions of people will not go through great pain in the transition. And, of course, we are seeing it in our country, and we're seeing it in a number of other countries that certain jobs went away and the people who used to hold those jobs don't have the skills or maybe the wherewithal to try to adapt to a new job. They are in great pain. And, by the way, when you have that it causes political turmoil. Obviously, it's true in our time, but obviously it was true in the past. There have been such periods of political turmoil. So, I am an optimist, but let's not deny the pain of the people left behind by the new technologies. That's my only point. Byron, I expect you agree with me on that? Correct?

Well, I think the way you set the problem up I don't necessarily agree in the sense that... here's what people say. They say technology creates great new jobs like a geneticist and it destroys jobs like order taker in fast food. And then they say something that you came very close to saying which is, do you really think that order taker is going to become a geneticist? The answer to that is no, that a college biology professor becomes a geneticist, and the high school biology teacher gets the college job, and then a substitute teacher gets hired on full time at the high school all the way down. So, I think the question is not: Can the people who lose their jobs do the new jobs? The question is: Is everybody capable of doing a job a little harder than the job they have today? And I think the answer is yes.

No, I agree that everybody is – in principle – capable, but not everybody does it.

Right.

And the more education you have, the more... I really believe that one of the things you learn, the more education you have, is: How do you learn? And again this has been studied. You know, let's say early in the 20th century, at the turn of the 19th and 20th centuries, a huge number of people were employed in agriculture. You honestly didn't even need to know how to read and write to be able to do your job in agriculture. Then when we saw industrialization, and people started moving to cities, and you have factories and so on... at that point you had to learn how to read and write.

The U.S., the U.K., Germany and other economies made big advances because they instituted public education for at least grammar school. So lots of people became literate. Then things kept advancing, and let's say by the ‘50s and ‘60s more and more people were getting high school degrees. And many of the jobs in the ‘50s and ‘60s, if you had a high school degree, you could have a great job in a manufacturing plant and other things like that.

Now, having a high school degree is not enough. Now, to do better and better in the kinds of jobs that are coming, you need at least a post-high school degree, a trade school degree, whatever... let alone a college degree, let alone a graduate or professional degree.

And notice, and I speak to people all the time about this... In many jobs, people keep training, you know, like the people who might do HVAC [heating and air conditioning], who come and maintain equipment in my house. They're always training because there's always new equipment that is conceived. They get sent for training and they learn something. But if they didn't have that, if they didn't have this continuous training, they would fall behind and not be able to have the jobs. So, we're living in an era where continuous training and continuous learning is more important than ever. And that's why I'm totally agreeing with you. Those people who are comfortable with the continuous training and learning will be able to keep adapting. But what we've seen empirically is that there are many millions of people who for one reason or another are not able to do that.

You know, I agree with all of that, and it's interesting to note that the United States was the first country in the world to guarantee high school education to everybody – right before we enjoyed nearly a century of uninterrupted growth, and I agree that those things are highly related.

Let me make one point. We haven't discussed politics. I don't want to. But given how well guaranteeing high school worked out for all, it really makes you [wonder] why we don’t make it easier for young people to go on to get post-high school education and even college education, because history shows that this really works well for the country. And, as you know, that's become a major issue and I think it's dumb not to do that.

I think politically, the way I would sell it is: Everybody believes in public education; everybody believes in K-12 education. I used to have this 100-year-old woman who was our neighbor. She said when she went to high school there were only 10 years and then she was out. It wasn't 12. So, I think we have to add a 13th and 14th grade, and it can be taught at junior colleges, and it's just part of the public education, and it focuses on job skills of one kind or another. Just call it the 13th grade and the 14th grade. Then you get out of the 14th grade, and you're equipped all of a sudden.

Well we have come upon the end of our time together. It's been... we could go another hour, Irving. It's been a delight chatting with you. You're clearly a very thoughtful person who has spent half a century mulling these issues, and I thank you for taking the time to share your conclusions with us.

Thank you, Byron. It was a pleasure talking to you. And, Byron, I hope that maybe 30 years from now, you could have such a conversation with a robot instead of just Irving. It would be very interesting to see if we can have such a wide-ranging conversation like the one we had, which I found fascinating, with something other than two humans.

Well, tune in in 30 years in 2048 and we will have our first robot guest on Voices in AI. Thank you very much everybody. Bye-bye.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.