Episode 52: A Conversation with Rao Kambhampati

In this episode Byron and Subbarao discuss creativity, military AI, jobs and more.


Guest

Subbarao Kambhampati is a professor at ASU with teaching and research interests in Artificial Intelligence. He serves as the president of AAAI.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Rao Kambhampati. He has spent the last quarter-century at Arizona State University, where he researches AI. In fact, he's been involved in artificial intelligence research for thirty years. He's also the President of the AAAI, the Association for the Advancement of Artificial Intelligence. He holds a Ph.D. in computer science from the University of Maryland, College Park. Welcome to the show, Rao.

Rao Kambhampati: Thank you, thank you for having me.

I always like to start with the same basic question, which is, what is artificial intelligence? And so far, no two people have given me the same answer. So you've been in this for a long time, so what is artificial intelligence?

Well, I guess the textbook definition is, artificial intelligence is the quest to make machines show behavior that, when shown by humans, would be considered a sign of intelligence. So intelligent behavior, of course, right away begs the question, what is intelligence? And you know, one of the reasons we don't agree on the definitions of AI is partly because we all have very different notions of what intelligence is. This much is for sure: intelligence is quite multi-faceted. You know we have perceptual intelligence—the ability to see the world, the ability to manipulate the world physically—and then we have social, emotional intelligence, and of course you have cognitive intelligence. And pretty much any of these aspects of intelligent behavior, when a computer can show those, we would consider that it is showing artificial intelligence. So that's basically the practical definition I use.

But to say, “while there are different kinds of intelligences, therefore, you can't define it,” is akin to saying there are different kinds of cars, therefore we can't define what a car is. I mean, that's very unsatisfying. This word ‘intelligent’ has to mean something, doesn't it?

I guess there are very formal definitions. For example, you can essentially consider an artificial agent working in some sort of environment, and the real question is, how does it improve the long-term reward that it gets from the environment while it's behaving in that environment? And essentially—I mean, the more reward it's able to get in the environment, the more intelligent its behavior is considered to be. I think that is the sort of definition that we use in introductory AI courses, where we talk about these notions of rational agency, and how rational agents try to optimize their long-term reward. But that sort of gets into more technical definitions. So when I talk to people, especially outside of computer science, I appeal to their intuitions of what intelligence is, and to the extent we have disagreements there, that sort of seeps into the definitions of AI.
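
(A minimal sketch, not from the episode itself, of the rational-agent framing Rao describes: an agent acting in an environment, tracking the reward its choices bring, and gradually shifting toward the actions that improve its long-term reward. The three-action environment and the epsilon-greedy strategy below are assumptions chosen purely for illustration.)

```python
import random

class EpsilonGreedyAgent:
    """Tracks the average reward of each action and mostly picks the best one."""
    def __init__(self, k, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = [0] * k
        self.values = [0.0] * k

    def act(self):
        # Occasionally explore a random action; otherwise exploit the best one so far.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Incremental running average of the reward observed for this action.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Hypothetical environment: three actions with different hidden expected payoffs.
true_means = [0.2, 0.5, 0.8]
agent = EpsilonGreedyAgent(k=3)
total = 0.0
for _ in range(10_000):
    a = agent.act()
    r = random.gauss(true_means[a], 0.1)  # noisy reward from the environment
    agent.learn(a, r)
    total += r
print(f"average reward per step: {total / 10_000:.3f}")  # climbs toward the best action's mean
```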

Well, I’ll only ask you one more question about it. So the simplest idea of intelligence is things that respond to their environment, and, to use your litmus test of, does it do something that a person would've done, that would imply that my sprinkler that comes on when my lawn is dry is intelligent. It's responding to a situation, and it's something I would have done. But it sounded like the definition you just offered had a very interesting additional aspect—it learns and gets better over time. And do you think that's kind of the…?

Well I think that, of course, the components that intelligent behavior winds up having are that it's able to perceive the environment—actually get useful information out of it—reason about the state of the environment, and make plans of action to act in the environment to increase its long-term reward. And while it is doing so, it is possibly constructing some internal models of the environment, and that is something that it's learning while it's experiencing the environment. So, the thing is that it's not useful to think of intelligence as a binary quantity. Even lawn sprinklers, with their sensor, could be seen as a tiny bit intelligent—they are sensing their environment and doing something. The real question is how deterministic that behavior is, and how adaptive that behavior can be. So it really is a question of a spectrum.

I guess this is as good a time as any, to share my favorite joke of artificial intelligence, which is: a physicist, a mathematician, an engineer, and an AI guy get together, and they're talking about what's the greatest invention of human civilization. And the mathematician says, the concept of zero, and they agree with each other that it's a great idea. The engineer says the wheel, and again they agree with each other and say that's a good idea. And when it comes to the AI guy’s turn, the AI guy says the thermos flask. And everybody's surprised, “Why?” And he essentially says, well, you know you pour hot liquid into the thermos flask, it keeps it hot. And if you pour cold liquid into the thermos flask, it keeps it cold. And of course the punchline is, “How does it know?”

So, to some extent, what is intelligent behavior is partly in the beholder's eyes. So we do get into conundrums such as, is a sprinkler somewhat intelligent? Is a thermostat intelligent? But I think it's better to set that question aside and say that, really, they are a little intelligent. What you really want to talk about is the continuum, where you are able to construct internal models and produce adaptive behavior to do well in the environment. And that makes you more intelligent, as opposed to intelligence being a binary quantity.

And do you think it's artificial, in the sense that it's not really intelligence?

Actually, this is a very interesting point. Let's first talk about why the field got to be called ‘artificial intelligence.’

Yeah, that was McCarthy in '56, wasn't it?

Yes, but there is a little bit more interesting background to it. So, by the time the Dartmouth conference was held in 1956, Norbert Wiener was already working in very similar directions, and he called it ‘cybernetics.’ And McCarthy couldn't, apparently, stand Norbert Wiener, who had his own interesting personality peculiarities, and so he essentially decided to call whatever he was doing anything other than cybernetics, and so he called it artificial intelligence. So that's actually sort of the background historical tidbit, but I think what he meant by the word artificial is, essentially we already have a sense of what biological intelligence is. Humans and some of the animals have these “patterns” of intelligent behavior. Can we make machines that recreate [and] show those sorts of behavior? So that's where the ‘artificial’ part comes from.

But of course, historically it has also been viewed almost as if artificial means “not real.” So, in fact, I had to endure my colleagues taunting me, saying, "Are you working on artificial intelligence because you don't have the natural one?" The real meaning of the adjective ‘artificial’ there is that we are building machines that show intelligent behavior. As I think somebody, I think Goddard or somebody, said, we already have a pretty foolproof way of developing natural intelligence that you learn about in junior high. And so the real question is, can we actually create machines that show intelligent behavior? And that's where the word artificial comes from.

And it should be noted that he later regretted coining the term because it set the bar way too high compared to what he was originally thinking of. So tell me this, let me take a different tack. Is it possible for a computer to experience the world? Or, put another way, you put a sensor on a computer that can detect heat, and then you write a little program that says, when it gets to five hundred degrees, play this sound file of a person going, "Ouch!" And then you hold a match to it, and the computer says "Ouch!" Well, we know that's something different than when we burn our finger. Will a computer ever be able to experience the world? Because couldn't one argue that it's the experiencing of life that actually gives us our intelligence? And that unless you can experience life, you don't really have a way to grow and develop?

Yes! That again is a really interesting, loaded question that people talk about, especially people outside of technical AI, and it sort of presupposes that human experience is somehow “one size fits all,” and universal. It's not at all clear that that is the case. We probably fall within some reasonable range of experiences when we're exposed to heat, for example. We folks in Arizona basically don't particularly think 105 degrees Fahrenheit is a bad temperature, but I'm sure people in other areas on the east coast might be thinking 105 is the end of the world.

There are also people with synesthesia, where, when they see a color, they experience a taste, and so on and so forth. So it's not even clear exactly what it means to say “experience.” Since people themselves differ in how they “experience” these phenomena, it's not too surprising if, in fact, machines have a very different way of [computing] experience. So going back to your example of the machine sensing that it's five hundred degrees and switching on an audio file that says, "Oh my god, it's too hot," this sort of goes back to the whole Chinese Room puzzle. Which is: when you have done this thing, can we tell whether, in fact, you have had the same exact experience as I had? And my argument is that even two random humans might experience the world in very different ways.

Well, let me interject right there for a second. To say humans experience the world differently is a very different thing than saying, “There is a thing called experience. Can a computer do it?”

Yeah, but it's a question of, do you want to focus on measurable things, versus do you want to focus on what's unmeasurable? That's very subjective. And you know, to me, I think the issue of AI has always been, not necessarily to make entities that will have exactly the human experience—although of course, that's the stuff that Hollywood movies and TV series have been made of—but really to get behavior out of them that is similar to what an intelligent human being will show in those scenarios.

So, I think these are orthogonal questions, to some extent, because if you are giving the right response for the right environment or conditions, then, as far as AI is concerned, that's good enough, because we are actually able to show interesting, intelligent behavior.

I do want to actually mention that one interesting issue, and maybe we'll get into this later on, is that humans, since we have lived with each other a long time, we have what I call notions of “example closure.” So if you do something right in one particular circumstance, I assume that you have very similar capabilities to other humans doing that task. That is based, essentially, on our experience with other humans, and that can come back and haunt us when we are dealing with machines showing intelligent behavior. For example in perception, neural networks—which can very clearly differentiate between thousands of categories of pictures—can still make errors which just do not make sense to us whatsoever. If you add a little noise to the picture, which is imperceptible to our eyes, they can just completely change the classification, with near one hundred percent confidence, to a completely different category. For example, [a] school bus becoming an ostrich. [But] we essentially assume that, if I see one particular behavior, then there's closure: any human showing that behavior will also show all these other behaviors, and I have a sense of your failure modes. But we don't have that for machines showing intelligent behavior.
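
(A minimal sketch, not from the episode itself, of the kind of adversarial perturbation Rao alludes to: noise that barely changes an input yet flips a learned classifier's decision. A toy logistic-regression model on synthetic data stands in here for the image networks he mentions; the data, training loop, and perturbation budget are all illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: the label is decided by a hidden weight vector.
w_true = rng.normal(size=50)
X = rng.normal(size=(2000, 50))
y = (X @ w_true > 0).astype(float)

# Fit a logistic regression with plain batch gradient descent.
w = np.zeros(50)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Pick an example the trained model classifies correctly.
i = int(np.argmax((X @ w > 0).astype(float) == y))
x = X[i]
print("original prediction:", int(x @ w > 0), " true label:", int(y[i]))

# Gradient-sign-style perturbation: nudge every feature in the direction that
# pushes the score across the decision boundary, using just enough budget.
eps = 1.1 * abs(x @ w) / np.abs(w).sum()       # tiny compared to the feature scale (~1.0)
x_adv = x - np.sign(x @ w) * eps * np.sign(w)
print("per-feature perturbation:", round(float(eps), 4))
print("adversarial prediction:", int(x_adv @ w > 0))  # flips, though x_adv barely differs from x
```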

Well, I would say though, what it sounded like you were saying is [that] the question of, “Can a computer experience something?” is not even really measurable, and what we're really interested in is the behavior the system exhibits. But there are two ways I would challenge that. So the first is, you mentioned that we have a way that we acquire intelligence, in junior high. You know the roboticist Rodney Brooks has this concept called “the juice,” where he says, you put an animal in a box, trap it, and it struggles to get out, it wants to be free. You can see it's frantic, and it's trying to figure it out, yet you cannot capture that for some reason in a robot. You can't find that in a robot; the robot lacks the juice. Now to be clear, he says “the juice” is something purely biological, and we'll figure it out and all that. There's nothing magical about it. But, isn't the idea that a learning system needs “the juice” to be able to…?

I think that's an additional layer of the hypothesis. Again, the goal is to be able to show intelligent behavior. We have, of course, this very compelling example of intelligent behavior that we understand, which is humans showing it. So there is this natural tendency to assume that the only way to get to intelligent behavior is to do it the way humans seem to be acquiring it. From the very dawn of the field, essentially, there have been these questions as to whether, in fact, AI is only possible by understanding how human intelligence works, or whether there is a parallel way of getting to intelligent behavior without having to completely understand, for example, neuroscience, and how human intelligence works. And the jury is basically still out on this. There's sort of a pendulum swing.

In the beginning, AI and neuroscience used to be somewhat closer. In the middle, they sort of completely separated out with the sort of ideas like, "Planes don't flap their wings, so why should machines have to experience the world the same way humans have to before they can show intelligent behavior?” More recently, of course, there's a lot more of an interest in the connections to neuroscience, especially with the work that's happened in neural networks. Even though it's not completely based in neuroscience, at least there's a lot more of an interest in looking at neuroscience approaches to intelligent behavior.

So, what I'm trying to differentiate is the goal versus the path to the goal. The goal is to show intelligent behavior. It's not to make pseudo-human beings. So to me, for example, HAL in 2001: A Space Odyssey, which is, essentially, a disembodied, or multi-bodied, entity, is an intelligent entity. It doesn't necessarily have a single body, and it may not necessarily feel the same kind of experiences that a single-bodied creature like a human will feel. But there was no question that at least what was being imagined there [was] an entity capable of intelligent behavior. So I think I'm differentiating between the goal and the paths to the goal. We don't yet know whether there is only a single path that works to get to artificially intelligent behavior, or whether there are multiple paths. We just don't know that.

So you made the statement that the goal is not to make pseudo-humans, but there are [a] number of companies that believe that they can make companions for the elderly. Even the desktop devices like Amazon Alexa, and whatnot, use human voices and they even give them human names. There's a whole lot of people that want to build pseudo-humans, aren't there?

I think this is a very fascinating discussion. I think we're getting into interesting distinctions. When I said the goal is not pseudo-humans, [I mean] the goal is not to reach intelligent behavior by making only entities that learn about the world exactly the way humans do. In fact, [there is] a very, very relevant question on that aspect. For example, humans are emotional beings, and we cannot switch [emotions] off. We basically experience the world with emotional responses, and we also make critiques of each other saying, "Oh my god, you're too emotional, you're not being rational," as if that's a bad thing. We don't quite understand whether emotional responses are computationally a good thing or a bad thing in reaching intelligent behavior. But, having said that, if you are making an Alexa or any system that has to interact with humans, it [had] better at least fake emotions. It better, at least, fake social intelligence, because we anthropomorphize everything that we interact with, and we expect them to have the same sort of responses, especially the emotional responses, that we expect from other humans. So, again, this is a deeper distinction.

Does an artificial entity need emotions just to survive and do well in the environment that, let's say, doesn't have any other humans? So suppose if you have a rover working on Mars, do they need emotions? Even if we go to Mars, we'll still have emotions. Do these machines need them when they're on Mars? That's not obvious. But do they need emotional responses when they are interacting with us? That's completely obvious, yes. In fact, the history of AI is littered with grand projects including, for example, the sort of notorious Microsoft paper clip, in Microsoft Office that I'm sure you're familiar with—which was this intelligent assistant that [tried] to help you deal with the problems you encountered when you were working with Office software, Word, and PowerPoint and so on. And there was pretty impressive AI technology behind it, and yet, when Bill Gates finally retired it, saying “We're going to retire paper clip,” there was spontaneous applause; everybody hated the paper clip.

And there are people who say part of the reason for that is that the paperclip assistant didn't show appropriate emotional responses. It was probably doing a good job in solving your problem, but it wasn't showing proper emotional responses. It was perpetually chirpy, it was perpetually smug-looking, and so it made people feel that they didn't want to interact with it. Now, this is a very interesting secondary issue. There's nothing that says the paperclip assistant was not intelligent. It was just not effective in interacting with humans, and, to interact with humans, you wind up needing social intelligence and emotional intelligence. At that point, you will at least have to fake these, even if they're not computationally required for intelligent behavior itself. I don't know if that makes sense, but to me, that's the second difference that winds up being important.

So let's go down that path just a little bit. Just to set something up for the listeners, there was a program people have probably heard [about] from the ‘60s, by a man named Weizenbaum, called ELIZA. ELIZA was a Q&A agent where you would say, "I'm having a bad day," and it would say, "Why are you having a bad day?" Then you'd say, "I'm having a bad day because of my brother." It's a lot like a fourth grader or something. And what happened is, Weizenbaum got really disturbed, because even people who knew this was a program were developing an emotional attachment to it, and he ended up, kind of, unplugging it, erasing it, and turning kind of against artificial intelligence. He made a really interesting distinction between deciding and choosing. He said that deciding something was computational, and a computer can do that, but choice is a product of human judgment, and that you never should actually have computers faking emotions, because when a computer says "I understand," it's a lie, because there's no “I” and there's no “understanding.” So, Weizenbaum would push back really hard on what you were just saying, because you're saying, "Oh, we have to get them to fake emotions." Where do you think Weizenbaum [is] wrong, because he says it's morally challenging for a robot to fake emotions?

So I think this is a fascinating question. I mean, with the ELIZA program and the consequences that Weizenbaum saw—I think he saw his secretaries pouring their hearts out to this very simplistic pattern-matching program that he knew was not doing any psychoanalysis. Yet people were pouring their hearts out, and so, first of all, I can imagine why he shut that system off, and secondly, why he took this moral stance that the computer should not show behaviors that people will read as emotional responses. The problem is [that] it's us humans who anthropomorphize everything. In fact, a couple of my colleagues actually say that we should ban humanoid robots, that is, robots that are in human shape, because people are likely to believe that they have other human qualities. Remember I talked about this “example closure”: when an entity has either shapes or behaviors that remind us of some human behaviors, which happens all the time, we assume it has all those other properties that humans tend to have. So, my colleagues suggest we should ban humanoid robots. But the problem is, if you see something like the movie Her, you realize that we don't actually need a shape. All we need is a voice, and then we will be able to let our imaginations go wild and imagine there's a person behind it. So the first thing is that it's really the human tendency to anthropomorphize, rather than the machines trying to inject some sort of unwanted influence on you, that is the issue.

Second, Joe Weizenbaum notwithstanding, if you want interaction with machines, there is no other way than them having to show proper emotional responses, because eventually we get tired of dealing with machines that don't have proper emotional responses and proper social intelligence. There is really no way out of that quandary, essentially. So if you take a hard moral stance that you should never show something that you “don't feel,” [well] we already talked about [the] fact that it's impossible to quantify and measure this notion of “feel” and “experience” across humans versus machines. That's just impossible already. Secondly, without these sorts of emotional and social intelligence responses, we won't interact with machines. Then we will essentially come to a situation where machines are on their own, probably on Mars, and we will hang around here, and if that's the only kind of application we're allowing, I can imagine that we can get by. But, clearly, there [are] a lot more useful technological advances we can make if we can have these machines interact with humans in useful ways, and when you get there, there's just no way around disagreeing with the dictum that Joe Weizenbaum put out there, as you mentioned.

Let me engage that for just a minute.[You said] we'll get tired of engaging with these machines if they don't react with simulated emotions. Is that what I heard you say?

Again, I think the real question is, when I am talking to you, I can't tell whether you're simulating your emotions or not, I just expect certain emotional responses. Humans themselves can fake emotional responses when they're talking to each other, but still that works. If someone is telling you a heart-wrenching story, and you start laughing in the middle of that story, that will be the end of the conversation. And even if you feel like laughing in the back of your head as this story is going on, if you show them the proper emotional response, the story will continue. So, again, we do this faking of emotional responses sometimes ourselves, and I think without the right social responses, basic interaction becomes impossible.

So, let's go through some different kinds of intelligences that we interact with. I have a calculator on my phone, and when I type "what is sixty-four times one hundred twenty-eight," it doesn't go, "Whoa, that's a hard one!" I have a thermostat that learns. I have the Nest Thermostat, and when I turn it up [it doesn’t say], "Oh, you must be cold, is that why you're turning the heat up?" An assembly line is a kind of artificial intelligence. It replaces a way we used to build things; it's a much more intelligent way to do that. In fact, we don't use any machinery today that gives us emotional feedback, none of it, so the idea somehow that we won't use it if it does not give us emotional feedback—

I think the fact that we currently don't use [such machinery] can either be seen as saying we don't need machines to have emotional responses, or it can be seen as a failure to have properly effective human-machine interfaces. If you look, for example, at the whole area of human-aware artificial intelligence, human-robot interaction within the AI community and so on, you will see that the prevailing wisdom is that this matters when you have longitudinal interactions. When you're asking your calculator, "What is seven times nine?" or something, it's just a one-off interaction. But human-robot interaction, human-AI interaction over a longer period of time, where there's collaborative problem-solving involved—those don't really exist right now. Part of the reason they don't exist is [that] we haven't actually figured out how to get machines to show the sorts of side behaviors that make fluid teaming possible, and enjoyable for the humans in the loop. So, again, I will still say that these are things that we expect when we are working with other humans, and we are not going to be able to switch them off when we are working with machines. That sort of pattern has been the reason [that] there have been some pretty high-profile cases of AI agents not being good intelligent personal assistants, for example. So if you consider a calculator to be an intelligent personal assistant, then yes, it doesn't have to say, "Oh my god, that's a hard one," or something, but on the other hand, if you are talking about the more elaborate forms of personal assistant technology that people are looking for, where essentially you can have continued conversations with your Alexa, you would definitely wind up expecting those responses.

Well, let me ask just one more question along those lines. So, right now I call my airline, and it says, "We have an automatic system. Where would you like to travel to?" and I say "Paris," and it says, "When would you like to go?" Now, let's say, in the future they get a much better one, and I think it's a person—an operator “X,” Jill, to give her a name—who says "Where would you like to go?" and I say "Paris," and she's like, "Oh, Paris, why are you going to Paris?" I'd say, "Oh, it's this vacation, I've always wanted to go to Paris. I've always wanted to see the Eiffel Tower," and she's like "The Eiffel Tower, oh my gosh, that is the coolest thing isn't it?" At some point, if I ever realized that this is not a real person, I'm going to be really annoyed, because it's like, I just wasted my time having to deal with these simulated emotions. I have those interactions because it's a human. But to say that I want to have those with this automatic system when all I want is to know how much the ticket is going to cost me, I think it's one-hundred-eighty degrees off actually. I don't think we necessarily want the burden...

Again, I respect that position, but I would argue that that's pretty much exactly the kind of thing that we do with each other. Earlier I was saying, suppose your friend is telling you a heartrending story and you didn't actually feel that that particular story was particularly sad, but you sort of fake your emotion and you show a sad expression, etcetera, etcetera. If you don't show it, the conversation stops right there. But if you did show it, and over a period of time your friend eventually figures out that you didn't particularly feel the pain that they were describing, and [for] whatever reason you were just faking it, they will lose trust in you. So this issue of long-term trust is a very complex thing; people are talking a lot about it right now. We don't completely understand how trust develops in human societies, and we now have to understand that in the context of human-machine society, this hybrid society of humans and machines.

So, I completely understand your point that, in [the] long term, having shown [appropriate] emotional responses, if at some point [in] time you make a false move, people might suddenly think, “oh my god, all this time we spent was actually completely lost,” and then lose trust in interacting with that machine. But at the same time, if it doesn't show proper emotional responses, there is also enough data out there showing that people actually would not interact with it. So you lose trust from the very beginning. I mean, there is a reason why we have all these websites on the web on how to get a human on an 800 number, right? Every possible company has these automated voice assistants, and clearly we are not happy with them, and that's why we now have all these ways of getting out of them and trying to talk to humans. I think it's partly because the effectiveness of the interaction depends, to some extent, on having these sorts of proper emotional responses, even if they are fake.

Well, I'll just ask one more question, because you've obviously put a lot of thought into this, but I still don't really understand it, because it sounds like you're actually making my point. You say the following: “When you're talking to a person, and they're telling you a sad story, you must show the appropriate emotional reaction. If, however, you let a chuckle out by mistake and they realize it, it shuts the thing down.” So what people want are honest emotions, and what they don't want are fake emotions. A machine, by definition, can only give you fake emotion. So I know the minute that woman says, “Oh my god, I wish I could go to Paris,” I know that's a lie. I know it's fake, and I know she doesn't even exist, so the whole system is predicated on dishonesty.

No, but again, I think we're using all these loaded terms. So I would also argue that you are making my point. The very fact that people, very intelligent people at MIT, were pouring their hearts out to this pretty primitive program called ELIZA shows that we are not particularly good at deciding what are real versus fake emotions. We do have the ability to see whether there's an improper emotional response in the short term, and that will shut that conversation [down] right there. So non-sequiturs, laughing when you're telling a sad story, crying when you're telling a joke, these are all things that would just shut the conversation down right then and there. So the question then becomes, am I going to support effective interaction at all, or do I never want to take the chance that, on a long, long-term basis, we may come to a point where, essentially, there would be some false move on the other agent's part, and then the human in the loop completely shuts off there?

I would say that a practical argument can be made that at least you had this much interaction before, whereas you would've stopped talking from the very beginning, in fact, if you had non-sequiturs and improper responses in the very beginning. Again, this comes back to the very first point we were talking about, which is, are we talking about the endpoint—the performance of the behavior of the agent, in terms of either its own performance in the environment or the effectiveness of teaming and collaborative problem-solving between it and the human? Or are we talking about [how] we must do it only in the right way, in the sense of the way humans do it, and if it's not human then it shouldn't even be tried? If we get into that second part, then the entire direction of AI, essentially, will be sort of religiously forbidden by people who assume that only humans feel human emotion, and only humans have human experience, and so, eventually, nothing else should even be designed that can show pieces of this behavior.

To change the topic to something slightly different, there's this movie I, Robot with Will Smith, and he doesn't like the robot Sonny, or he doesn't like the robots in general, especially Sonny. There's a part where he says, “Can a robot write a symphony? Can a robot paint a great painting?” Sonny actually has a pretty good zinger, he says, "Can you?" But, aside from that, do you think an artificial intelligence is going to master human creativity?

I think there is no argument I can put [forth] saying it's not possible. Let me put it that way. So, machines are already showing things that are a little bit creative. There are things that are a lot more creative that they're still not capable of, such as, for example, building an entire symphony, but to me, I see that as the goal not yet being reached, [instead of] the goal being impossible. We tend to romanticize things that we don't completely understand, and creativity is something that we don't completely understand. So we assume that it must be hard, and that, by definition, nobody other than humans can do it. But we are getting, day in and day out, little by little, examples of machines showing behaviors that will at least be seen as partly creative. For example, I don't have the link right now, but there have been studies where they showed poems written by a computer versus poems written by humans, and then showed them to humans and [asked], "Can you tell the difference?" And there have been continuing studies, for example in the vision community, where they show synthesized examples versus natural ones and [ask], can you tell the difference? Often, you can't actually tell the difference. And so, ultimately, I can't argue that creativity is something that we can't automate. I can only say that it's probably harder than some of the other things, and it might take a longer time. So if I, for example, have a job which doesn't involve too much creativity, I would be very worried. I would basically expect that the robots will take [that job] away very soon. Whereas, if I have [a] job that require[s] a lot more creativity—at least the way we define creativity right now—I think I'm somewhat safer for quite a bit longer.

So, I distinguish between this business of “not possible at all” versus “harder,” and again, I see this from the point of view, [as] you mentioned in the beginning, of someone who has been in the community for quite a long time. As a grad student, all I read about AI from people outside of AI was how it's not even possible in some sense. Parts of that discussion continue up until today. So there are all these physicists like Roger Penrose, who would write all these books saying, "Stop doing what you're doing, because here is my proof that it's not possible"—that [the] strong AI hypothesis, that machines can actually show intelligent behavior that would be seen as human-level AI, is just not possible. The pendulum has swung, right now, to the other extreme, where essentially not only do we apparently believe that it's possible, but we believe that they can go superintelligent and that they will take over the world. And so, within my own career, I saw the pendulum swinging wildly from one extreme to another. I still think that the reasonable state of affairs is somewhere in between. For example, with creativity, there is nothing I know of that says that what we consider creative is impossible to automate. It would be hard to automate, but it's not impossible. So, I wouldn't be surprised if one day a computer writes a symphony that we enjoy. I think that there are already excellent examples of that going on right now. Typically, after people enjoyed it, if you tell them a computer wrote it, then they change their mind and say, "Oh no, it obviously was not a great symphony." But the real question is, could you tell beforehand? Little pieces of the Turing test of creativity are being won by machines right now, so that's just the reality.

Well, let's explore that, this question of AGI, Penrose and his hypotheses. Let's talk about human intelligence for a minute. Why do you think we don't understand how our own brains work?

Why is an interesting question. I don't know how [to] answer why we don't understand how our brains work, but I do know that it [has] had a pretty profound impact on the way AI has progressed. So, in fact, I bring this up in my conversations with people outside of AI: if you look at the way human babies acquire different aspects of intelligent behavior, you'll see that they first understand how to see the world—little kids are able to see the world, identify objects, and so on. They have this beautiful ability to actually manipulate little, little objects, to put them in their mouth. They get emotional intelligence, they get social intelligence, even by the time they're going to grade school, and only much later do they get into what we consider cognitive intelligence, the kind of things like chess, [the] game of Go, and so on. And that's the way that human intelligence seems to be “acquired,” if you look at the stages. If you look at the history of AI, AI progressed exactly in the opposite direction. We were essentially beating the world chess champions and trying to automate expert reasoning way before we could actually recognize what a chess piece is visually or, even now, manipulate a chess piece with the dexterity that humans have. Part of the reason that happened is because we automated the parts where we have a sense of the theory of how we do it. For cognitive tasks like chess, or Go, or expert reasoning in law and so on, we have some theories, with exceptions, of how to do them, and we have zero theories about how we do vision. Whatever theories we had of how we do vision were not strong enough for us to get effective behavior out of AI machines, and so, eventually, the way we made progress there was completely by learning, which is just loading some data into the machine and trying to see if it can learn the patterns. I think that deep learning has been most effective in capturing that.

But that sort of explains it: there are many things that we don't understand. There's this famous Polanyi's paradox, which talks about the fact that there's a lot more that we understand tacitly—a lot more knowledge that we have that's tacit—than we can verbalize, and if we can't verbalize it, we can't program the computer. So in the beginning, essentially, we mostly went with non-learning-based approaches. Now the pendulum has swung completely to the other end: we basically just show examples and expect the machine to learn—which works extremely well for vision, because that's pretty much the way we acquire vision, and [how] we also acquired manipulation capabilities. I can't answer the question as to why we don't understand how our brain works, but it did have a profound impact on the way that progress in AI happened, and it also carries some cautionary tales. One of the things that I keep thinking of is this whole data-versus-doctrine tradeoff. Right now the pendulum has swung to the point where people who look at AI, who are sort of newcomers more or less, think that all of AI is just learning from massive amounts of data, but that's not at all the reality. If you look at, let's say, human intelligent behavior, we bring to bear tremendous amounts of background knowledge to learn from very few examples. [For] machines, currently, we don't yet have that technology, and clearly we need to have a way of connecting doctrine, which is our background knowledge, with data to get to the next level.

I'm thinking about your phrase that you have “no argument that we can't build an AGI,” and I guess the corollary of it would go like this: So, we don't know how our brains work, and it isn't just because our brains have a hundred billion neurons, because there's a nematode worm which has 302 neurons. We've spent twenty years trying to model that into a machine, and we don't even know if that's possible. So, even [with] 302 neurons [of] intelligence, we don't even know how a neuron works well enough to duplicate that. Then, that just gets you a working brain. Then, you have the mind, and the mind, of course, is all of the stuff the brain does that doesn't seem to be derivable from the attributes of a neuron, some kind of emergent thing. Then, once you have a brain and a mind, we have consciousness, which is something that everybody agrees exists, but nobody even knows how to answer the question, how is it that matter can have an experience of itself? So why is your statement, “I have absolutely no reason to believe we can't ever build an AGI,” given that the only intelligent thing we know of in the universe, we have no idea how it's intelligent?

Okay, so let me again say that we are getting back to this point that I made earlier, that we have to distinguish between goals and paths to the goals. I think much of what you're saying presupposes that the only way to reach a certain goal is to understand how the entities that already show that intelligent behavior—humans, in our case—do it. That is not obvious. I think neuroscience is not a prerequisite for AI in any theoretical sense. It might actually inspire a lot of interesting approaches, but it's not a necessary prerequisite for progress in AI. The second thing I want to clarify pretty strongly is, all I said was I don't have a strong argument as to why computers can't show behaviors, or design artifacts, that you would say are signs of creativity. Again, AGI is a much bigger concept. I'm not even sure it is actually well understood what AGI is. Within the technical AI field, honestly, people more or less assume that anything labeled AGI comes from a community full of crackpots, and so we essentially avoided using that terminology most of the time.

So, all I was saying is, when you were talking about, “do you think that computers will ever be able to show creative behavior, such as in symphonies, or in stories, or in poems,” that's a lot more specific to me than whatever AGI is, and that creativity—I don't see any reason why it won't be automatable. As to how it will get automated? Will it be only after we understand a neural basis for human intelligence and are able to put it into machines, versus will there be other approaches that are almost parallel, versus could it be a hybrid approach? That's not obvious to me. All I'm saying is that I don't see any theoretical reason which shows me that the entire endeavor is guaranteed to fail. I don't know whether that answers the question. So, the path is a different issue from whether a goal is even reachable. And I believe the goal of showing creative behavior—I have no reason to believe it is unreachable, because we already are making progress towards [it].

You know, it's interesting, because Elon Musk famously is worried about an AGI, and he's worried about it very soon, just a matter of a few years. I think Mark Cuban said he kind of was in that same camp. Wozniak was worried about it and thinks it would be “soon-ish,” and then you have people on the other extreme who think it's centuries away. Andrew Ng famously said, "it's like worrying about overpopulation on Mars," and Zuckerberg says, "It's nothing you should worry about, and it's far off." Why do you think these people have such different opinions about its character and, more interestingly, the timing of it?

Yeah, so first of all, as I say, predicting the future is a hard business, and, as impressive as Elon Musk and his abilities have been in all sorts of different fields, I have no reason to believe that he has technical expertise and future-reading capabilities that other people are lacking. There is no reason to believe that superintelligence is anything other than a farfetched worry for AI technologies. There are other things that we do have to think about with AI technologies, such as safety, criticality, and so on, and also things like the impacts on society of the currently effective AI technologies. But when somebody like Elon Musk brings up superintelligence and brings up regulation of AI research, this, to me, is unfortunately quite annoying as a technical AI researcher, because it's not even clear what the basis for his worries is.

As you mentioned earlier, I'm the President of the Association for the Advancement of Artificial Intelligence, which is the scientific society in AI, and we have polled the elected fellows of AAAI as to what they believe the different thresholds for human-level intelligence are, and when they expect we might be able to achieve them. Clearly, guessing the future is hard, but presumably people in the area might have some sense as to whether these are closer versus farther, and, you know, we found the same sort of spread, but much larger, [and] much, much farther away in time. We found, for example, the median was something like fifty to seventy-five years, with some people thinking it's much, much longer. But this is for human-level intelligence aspects.

I think, clearly, nobody can completely read the future, but I think there are important impacts of AI technologies and their use currently in society that we should be thinking about, and to some extent, the fascination with superintelligence, and AGI, and the robots taking over the world, tends to be a huge distraction. I respect the fact that a few people should be thinking about it, but what really winds up happening is that anytime Elon Musk or Stephen Hawking speaks, you'll see that the entire news feed is full of robots taking over. [A] couple of days back, I think [on] Facebook, there was this whole story, which turned out to be completely false, about Facebook having to shut off its chatbots because they developed, supposedly, some secret language. So it sort of puts people on edge, and I don't see any technical basis for that kind of fearmongering, to some extent. I'm not sure what motivations Mr. Musk has, he may very well believe it himself, for his own reasons, but as a researcher in AI, and generally as a member of the technical research community, I find that mostly these are a distraction.

What do you think about the European efforts to require a “right to know”? If an AI declines your mortgage application, you're entitled to an explanation as to why. One, is that possible? Isn't that akin to saying, “Google, why did this come in at number four for this search?” Two, is that a good idea?

So actually, first of all, this is the GDPR I guess you're talking about, the General Data Protection Regulation. First of all, that's a way more relevant and important problem to think about right now, given that AI technology is being used everywhere. There are obviously some interesting technical questions about what it means for the machine to give an explanation, and when people feel that they're being given an appropriate explanation, because, partly, there's this whole psychological literature about the fact that explanations are sometimes reconstructed. So, you get to do what you want to do, and then you provide an explanation. The famous example of this is, at the end of the day, every day after Wall Street closes, experts provide an explanation as to what happened, as if they knew it was going to happen all along. But clearly it was a reconstructed explanation, because if they really knew what was going to happen, they could have become billionaires beforehand. This is, again, just like the emotional responses, a need that humans have for explanations. We never just trust an authority saying, “Just assume that what I'm doing is right, and just trust me.” That's not something that liberal philosophies are particularly comfortable with, so we have always based our jurisprudence on these sorts of explanations.

So I think it's very reasonable for governments and societies to [hold] decisions made by automated entities more accountable, and one way [to do so] is to start thinking about what it means for them to provide explanations. It's a very reasonable worry because, sometimes, as many people have pointed out, [what] you learn from the data of the current society may not necessarily be the right basis to make judgments about how the future society should be. And this becomes a big issue because, for example, my classic example is, if I ask Google, “Show me a picture of a professor,” in the first hundred pictures there are maybe two, maybe a few handfuls, of non-white, non-male pictures. Now, if I just learn from the data, then I assume that professors must be just white males, or at least male. That may well be true of the way society currently is, but I think there is little disagreement that that's not a place society wants to stagnate in.

When this kind of data is being used by machines to learn patterns and make decisions, that can get into all sorts of counterproductive directions. ProPublica, last year, did some interesting journalistic studies on, for example, how using data to predict recidivism rates turned out to be inherently biased for a variety of reasons. So in fact what ProPublica did was a good contribution, because they were essentially making us realize that you can't just assume that, just because you use loads of data to reach a decision, it's always the right decision to get to—especially where the decisions affect individual liberties and individual happiness and so on. So, I'm overall in favor of directions like GDPR, a lot more in favor of them than of these worries like, “let's regulate AI research because, God knows, superintelligence is going to happen tomorrow.” So, I think they are overall in the right direction, and I think we still have to figure out some things there—as to what it means to provide understandable explanations—but those are technical challenges, and I think they're worth looking into.

And what do you think about the debate whether or not to use artificial intelligence to make kill decisions in the military application?

That's a very interesting point. I mean, this has been a widely divisive issue within the AI research community itself. My personal opinion, and I'm just speaking for myself, Rao Kambhampati, and not for anything that I represent, is that we already have large amounts of autonomy in these systems right now. If you saw the movie Eye in the Sky, for example, [where] drones [are] getting people into situations where they supposedly are making split-second decisions, we are very quickly coming to situations where there is already enough autonomy. If you assume that all the intelligence has to reside outside of the machine [and] that the humans have to make the decisions, [then] the speed at which some of these decisions have to be made will make the human presence in the loop mostly illusory, because we don't operate at those speeds. So I basically think that it's stupid autonomy that we should be worried about, and I'm much less worried about adding intelligence to already autonomous systems that [the] military uses. Look, I'm a pacifist, I hope that there are no wars in the world, but I cannot argue that, if we are going to have wars, we should make some specific artificial distinction about autonomous weapons versus non-autonomous weapons, because I find it hard to understand where the line is, as to what is not autonomous in the weapons technology that we already currently have. So, I think it's an important discussion, a civil society discussion that I would like to see happen, but, personally, I don't see it as a binary distinction, and, you know, I think it makes a lot of sense to add intelligence to the autonomy that already exists in many of these systems.

And then the final question I'd like to ask you is, as you know, there's a lot of fear wrapped up in the effect of automation on future jobs, and there are kind of three camps: One says that the machines are going to take all the jobs, yours, mine, every single one on the planet. Some people think they're only going to take some of the jobs, and we're going to have kind of a permanent great depression. Then, there's a third camp that says, on net, they're not going to take any, because history has shown us that people use technologies, even ones like electricity and machine power, just to increase productivity and to grow their own income. So I'm curious which of those three camps, or a fourth one, you find yourself in?

I'm firmly in the second camp, the middle ground. I do believe that certain jobs are going to be automated. You know, my tagline is that “mothers don't let their children grow up to be radiologists,” because as great a job as radiology currently is, I just don't see much hope of it remaining available, given the rapid advances in computer vision technology that is already better than humans at reading, for example, X-rays in multiple narrow tasks—

But to say they're going to eliminate some jobs, why do you say they're going to eliminate jobs "net?" They're not going to create as many as they—

What I was trying to say is there will be job elimination. The studies on this have shown, for example, that anything routine, whether it is cognitive or non-cognitive, is more [in] danger of being eliminated than anything that requires multiple competencies and capabilities in a day's work. So very low-paid jobs currently, such as taking care of [the] sick and elderly during the day, are actually harder to automate, even though they are not very well-paying jobs. And, of course, there's the usual creativity spectrum, writing symphonies etcetera, as we discussed earlier. At least right now, those jobs, [or] anything that requires high levels of creativity, are not in danger of being eliminated. My point about this whole direction is we do have to realize that jobs will be eliminated, and society has to think about what sorts of re-education or re-training we will provide.

For example, trucking, as an industry, is at a very serious juncture right now because of self-driving car technology, and so we should be thinking about what will happen to the huge number of people who are dependent on trucking for their livelihood. You can't just take the view that, well, it'll all work out and basically there'll be new jobs created, because I'm not actually fully in agreement with the rosy view that for every job that is removed, there is a job that is created. This has been the case in the past, but it's not very clear that that's definitely going to happen right now. So the jobs that may be left over, or the jobs that may be coming in, might require different kinds of competencies, and society should be worried about how to do appropriate kinds of re-training.

I actually found that the Obama administration's OSTP, the Office of Science and Technology Policy, did a very nice study on living in a future with AI. I forget the exact title they used, but they looked into a variety of impacts of artificial intelligence technologies, both in terms of how they can make our lives great, as well as some of the worrisome impacts, such as unemployment. And they put in place at least some beginnings of what policymakers should be thinking about in terms of retraining opportunities and so on. I do think that that's the right direction to think about. I am not in either of the two extreme camps—that everything will be gone, or, like Steve Mnuchin, for example, saying nothing will be gone and everything will be just as fine as before. I think there will be job losses, and they might be different from what we've been used to with general technological unemployment, and so it behooves us to think about how we deal with that. I think that, and GDPR sorts of impacts of AI, are a lot more immediate and a lot more important things we should be focusing our time on than superintelligence takeover and how to regulate AI research so that superintelligence won't happen. That's my view.

Alright well, it looks like we're out of time, but I want to thank you so much for a fascinating hour.

Thank you.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.