Episode 67: A Conversation with Amir Khosrowshahi

In this episode Byron and Amir talk about the explainability, privacy, and other implications of using AI for business.


Guest

Amir Khosrowshahi is VP and CTO at Intel. He holds a Bachelor's Degree from Harvard in Physics and Math, a Master's Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm. I'm Byron Reese. Today I'm so excited that my guest is Amir Khosrowshahi. He is a VP and the CTO of AI products over at Intel. He holds a Bachelor's Degree from Harvard in Physics and Math, a Master's Degree from Harvard in Physics, and a PhD in Computational Neuroscience from UC Berkeley. Welcome to the show, Amir.

Amir Khosrowshahi: Thank you, thanks for having me.

I can't imagine someone better suited to talking about the kinds of things we talk about on this show, because you've got a PhD in Computational Neuroscience. So start off by just telling us: what is computational neuroscience?

So neuroscience is a field, the study of the brain, and it is mostly a biologically minded field. Of course there are aspects of the brain that are computational, and there are aspects of studying the brain that involve opening up the skull and peering inside and sticking needles into areas and doing all sorts of different kinds of experiments. Computational neuroscience is a combination of these two threads: the thread that there are computer science, statistics, machine learning, and mathematical aspects to intelligence, and then there's biology, where you are making an attempt to map equations from machine learning to what is actually going on in the brain.

I have a theory which I may not be qualified to have and you certainly are, and I would love to know your thoughts on it. I think it's very interesting that people are really good at getting trained with a sample size of one. Like, I could draw a made-up alien you've never seen before, and then I could show you a series of photographs, and even if that alien's upside down, underwater, behind a tree, whatever, you can spot it.

Further, I think it's very interesting that people are so good at transfer learning. I could give you two objects, like a trout swimming in a river and that same trout in a jar of formaldehyde in a laboratory, and I could ask you a series of questions: Do they weigh the same, are they the same color, do they smell the same, are they the same temperature? And you would instantly know. Likewise, if you were to ask me if hitting your thumb with a hammer hurts, I would say "yes," and then somebody would say, "Well, have you ever done it?" And I'm like, "yeah," and they would say, "when?" And it's like, I don't really remember, I know I have. Somehow we take data and throw it out, and remember metadata, and yet the fact that a hammer hurts your thumb is stored in some little part of your brain that you could cut out and somehow forget. And so when I think of all of those things that seem so different from computers to me, I kind of have a sense that human intelligence doesn't really tell us anything about how to build artificial intelligence. What do you say?

Okay, those are very deep questions, and actually each one of those items is a separate thread in the field of machine learning and artificial intelligence. There are lots of people working on these things. The first thing you mentioned, I think, was one-shot learning, where you see something that's novel. From the first time you see it, you recognize it as something that's singular, and you retain that knowledge to then identify it if it occurs again—such as, for a child it would be a chair, for you it's potentially an alien. So, how do you learn from single examples?

That's an open problem in machine learning and is very actively studied, because you want to be able to have a parsimonious strategy for learning. It's a good problem to have, but the current ways that we're doing learning in, for example, online services that sort photos and recognize objects in images, are very computationally wasteful, and actually wasteful in their usage of data. You have to see many examples of chairs to have an understanding of a chair, and it's actually not clear that you have an understanding of a chair, because the models that we have today for chairs do make mistakes. When you peer into where the mistakes were made, it seems like the machine learning model doesn't actually have an understanding of a chair. It doesn't have a semantic understanding of a scene, or of grammar, or of the languages being translated. We're noticing these inefficiencies and we're trying to address them.

You mentioned some other things, such as how do you transfer knowledge from one domain to the next. Humans are very good at generalizing. We see an example of something in one context, and it's amazing that you can extrapolate or transfer it to a completely different context. That's also something that we're working on quite actively, and we have some initial success, in that we can take a statistical model that was trained on one set of data and then apply it to another set of data, using that previous experience as a warm start and then moving away from the old domain to the new domain. This is also possible to do in continuous time.
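To make the warm-start idea concrete, here is a minimal sketch in PyTorch; the framework, layer sizes, and data are assumptions for illustration, not anything specified in the conversation. A small network trained on one domain is reused on a new domain by freezing its early layers and refitting only a replaced final layer.

```python
# Minimal warm-start / transfer-learning sketch (illustrative shapes and data).
import torch
import torch.nn as nn

# Stand-in for a model already trained on domain A (10 classes).
pretrained = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Freeze the feature-extraction layers learned on domain A...
for param in pretrained[:-1].parameters():
    param.requires_grad = False

# ...and replace the head for domain B, which here has 3 classes.
pretrained[-1] = nn.Linear(64, 3)

optimizer = torch.optim.Adam(
    [p for p in pretrained.parameters() if p.requires_grad], lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random data standing in for the new domain.
x_new = torch.randn(16, 32)
y_new = torch.randint(0, 3, (16,))
optimizer.zero_grad()
loss = loss_fn(pretrained(x_new), y_new)
loss.backward()
optimizer.step()
```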

Many of the things we experience in the real world are not stationary: their statistics change with time, so we need to have models that can also change. For a human it's easy to do that; a human is very good at handling non-stationary statistics, so we need to build that into our models and be cognizant of it, and we're working on it. And then [for] other things you mentioned—that intuition is very difficult. It's potentially one of the most difficult things for us to translate from human intelligence to machines. And remembering things, having kind of a hazy idea of having done something bad to yourself with a hammer, I'm not actually sure where that falls into the various subdomains of machine learning.

So, your contention is no, those are hot topics in machine learning and they will likely inform it. But do we actually have evidence of that? Like, do we think that those problems are inherently solvable in computers?

So, this is a contentious topic. There are of course naysayers who say that we're not going to be able to do those things for quite a long time, and there are the people who say we are addressing them, and it's a continuum, it's not a binary event—we'll eventually have intuition in artificial intelligence systems, and it will get better and better over time. We have people working on these problems, and they're making incremental improvements: doing one-shot learning, having intuition, having more human-like forms of intelligence embedded in them. The progress is slow and it's a very challenging problem, and it may take decades, it may take years. So I'm definitely on the optimistic side, that we are on a path towards understanding these human abilities better, and how to translate them to machines. It's happening and people are working on them, and these are empirical questions: we have to try things, they fail, and it's okay that it's a contentious thing, because it keeps this goal in mind for people, that we're actually trying to move some part of the field towards this goal.

Well, one more example: you're familiar with the nematode worm and the OpenWorm project?

I am vaguely familiar with it as a neuroscientist.

To set it up: sometimes I think people have the sense that we don't understand our brains because, well, there are 100 billion neurons, but it turns out there's this worm, the nematode worm, a little bitty tiny fella, [that] only has 302 neurons in his brain. There have been people, for like 20 years, who take this reductionist view that I think most people do, which is: let's figure out how a neuron works, and then… we can in theory model 302 of them, and we can make something that behaves in computer memory like a nematode worm. And yet 20 years in, there's still not even consensus among the people working on the project as to whether it is even possible. Does that suggest anything to you?

Yes. This is actually a useful model. The nematode—there are actually many examples, and I work in the domain of vision, so maybe I could give you another example just to emphasize what you've pointed out. The nematode worm has on the order of 300 neurons. It is actually a little bit different from mammalian brains, in that the connections are mostly analog, they're not spiking, but that's just a detail.

So we have exhaustively studied the nematode worm, much as we've studied Drosophila in the context of genetics, and there are a lot of things about the nematode worm that we don't understand: maybe at the behavioral level, maybe at the biology level, maybe at the microbiology level, gene expression. How can this organism perform its various wiggling behaviors with such a small amount of computational machinery? How are the synapses participating in activating certain kinds of behaviors? How is memory stored?

That's right, we're still working on this. So, when you take a reductionist approach with any aspect of biology, almost invariably, anywhere you look, the complexity is staggering and overwhelming, and then you're left wondering whether it's tractable at all. So, one approach to making it tractable is to abstract the worm and think about it as an organism that's trying to achieve certain goals: it's trying to procreate, it's trying to eat, it's trying to keep from getting eaten by birds, it's trying to conserve energy, and so forth and so on. You can actually state these things mathematically, and then you can abstract these ideas of survival and behavior into things that are tractable. That's one approach that we've taken in vision, for example, understanding mammalian vision. If you peer into a mammalian retina, it is incredibly intricate and complicated, it has lots of different kinds of neurons, and new neurons are being discovered every month, and it's very vascularized. If you take a picture of your head at night, the retina and the eyeballs stand out, because the retina is very physiologically active, because of the many blood vessels of the choriocapillaris at the back of your eye.

It's a really intricate system, and when you look at it at a very fine grain, it seems almost intractable to figure out how it works, why it works, why it was designed that way, why the blood vessels are on top of the photo-sensing elements. I'm not so familiar with what a worm does during its day, but vision is a little bit more tangible for me, because we have cameras, and you take a picture and there's a bunch of stuff in the image. Then you can do things like segment the image, recognize objects, understand the gist of the image. Are you at a beach, are you in the mountains? Potentially there's some action or goal implied in an image.

At that level you can start thinking: what is the organism trying to do with this image? It's trying to find objects, it's trying to navigate, and then I think it becomes tractable. You can separate yourself from all the intricate detail of the biology, and you can solve the high-level problem, and that has been successful for us. I can give you examples where you can state the following: that the brain is, in some way, optimized to the statistics of the world, and that there are manifestations of that in the biology. I mean, you record from these neurons, and that's actually a great way to understand all the complexity. It's a very difficult concept to convey. We can talk about it at more length.

Well, it sounds like you're saying, "I need a cooked steak and the only microwave I know of is broken, and I can't figure out how it works, but I know how to build a fire, so I'll just build a fire and cook this steak." Like, we won't understand how the brain works, but we will just try to duplicate its functionality. Did I hear you right?

Yes. We want to make progress in science, and you don't want to just state problems, you want to break problems down, and you don't want to figure out how the brain works all at once, but you can break it down in certain ways. One way to do that is to take as given—and it's true—that the brain is optimized to understand the statistics of the world, and that you can define these things mathematically, machine-learning-wise, semi-rigorously and in a tractable fashion, and then you can start peering into the brain to see: is this theory right? Is this hypothesis right? Is it actually tuned to the statistics of the world or is it not? And in what way, and how does it help the organism? That's how we've been teasing out how vision works in mammals, for example. We're in early stages.

So there's a question I often ask people on the show, and my vague memory is that people come down about half and half on it, which is: narrow AI, which is what we know how to build right now (we can build a computer to play chess, we can build a computer to spot spam), and then the idea of a general intelligence, an AI that's as versatile and creative as a human—are those the same type of technology? Or, put another way, does narrow AI just progressively get better and better and better until one day it's 'general,' or have we not even started building a general AI—like that's something completely 'other'?

Wow, these are tough questions. My working definition of narrow AI—and I'm working very actively in this domain, so it is quite narrow—is that we're making artificial intelligence that can be used in products you use on your phone, in your house, at your place of work, on a website. How I define artificial intelligence in this setting is: any system that can learn and infer.

And these are rigorous terms. They're in the context of statistical models. So, there is stuff you observe in the world, or in social media graphs, or in web page clicks. Some of it is tangible and humanly understandable, like vision, some of it is not. Twitter feeds and connections, or communications over a telecommunications network, these have certain statistics and they're complicated, and you can build statistical models of this data. That's the learning part. And then the inference is: given that you have a model (hopefully it's good enough; it's never going to be exactly faithful to the underlying [data], potentially), you can start doing inferences. So if you have a population of people, what's the average height of the population? What's the maximum height? What are the sub-inferences among the genders: what's the height of the males, what's the height of the females, ages and so forth? Given a statistical model that's faithful enough in representing the data, you do inference, and this is a useful thing. So, that's narrow AI. General AI is actually kind of hard to define. That's maybe part of the difficulty in attacking it. There are people who are working on this very actively. One entity is Google DeepMind. Their stated goal is to solve this problem first, and then use the solution to do great things for humanity.
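As a toy illustration of that learn-then-infer loop (the numbers and the Gaussian model are invented for the example, not anything from Intel), here is a minimal sketch that fits a simple statistical model to observed heights and then answers queries from it.

```python
# Learn a simple statistical model of a population, then run inferences on it.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(seed=0)
heights_cm = rng.normal(loc=170, scale=9, size=500)   # simulated observations

# "Learning": estimate the parameters of a Gaussian model from the data.
mu, sigma = heights_cm.mean(), heights_cm.std()

# "Inference": answer questions using the fitted model.
print(f"average height:     {mu:.1f} cm")
print(f"maximum observed:   {heights_cm.max():.1f} cm")
print(f"P(height > 190 cm): {norm.sf(190, loc=mu, scale=sigma):.3f}")
```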

To solve every other problem…

To solve every other problem. I don't know, honestly. Maybe we should spend some time defining the difference you asked about. That was your question, but I told you the narrow one, and I guess the other thing is not that, so maybe that would be my working definition of general AI. But I'm a little bit worried that that's kind of a distraction, that what we're trying to go toward is this very lofty, high-minded goal, whether or not we're going to use it to solve every other problem. Just the pursuit itself is perhaps a distraction. Narrow AI can be extended quite a lot, and we can have incremental improvements. You can have a better voice recognition system, and this is very worthwhile, and I'm not sure when it does phase-transition to general-AI-like things, but it's not my focus, and I feel there are a lot of great things to come out of just pushing narrow AI as far as we can go. We're nowhere near general intelligence, so I'm not sure where that boundary is going to be, again, decades or years, and that's what I'm spending my time on.

But in all fairness, 99 cents of every dollar spent on AI is spent on narrow AI. You have to look long and hard to find people who are actually working [on general AI]. I mean, all the venture money that gets raised goes to solve real-world problems… so when you say it's a distraction, do you mean things like this show, or bombastic books about Terminator scenarios and all of that, or do you think it's actually a distraction of funds and engineers and all of that, that's really...

Oh, it's the former. There's a lot of discussion about general intelligence and when are we going to get there, when are computers going to be sentient, and when are they going to start eradicating humanity, and these things are a distraction. But when you're immersed in this domain, and you're solving problems on a daily basis, whether they're for an academic research setting, or in a business setting, it is quite a humbling experience.

It's quite difficult to do anything, and there are many examples, but one that I commonly give is my experience building robots. If you buy a bunch of stuff off of Amazon and you assemble it into a relatively cheap robot with a Raspberry Pi, one of the first things you want to do with this agent is to get it to go somewhere, and you'll quickly find out that it's actually really hard to get a robot to go straight. I experienced this myself: like, wait a minute, how can this be so hard? The different wheels are cheap commodity wheels, they have different motors, they slip, they potentially have other kinds of non-precise properties, and it's actually quite hard to get a robot to go straight. So this was a very good lesson for me, that it's quite hard in practice to make systems work, and there's quite a lot of learning in trying to get these things to work, and these are things that are very helpful to people.

You can build robots that work in a factory and can do pick-and-place, but they can't walk around and interact with humans like in science fiction. We're so far away from that that I feel it's a distraction in that sense. Let's try to figure out these problems first, and then we can discuss larger, bigger things, though there are elements of public policy in this. I've also experienced this in neuroscience, and this is actually my history, my path to where I am today: partly struggling with this question of, shall we try to understand the brain first? Then you can go from there to anywhere. I decided that was actually a very difficult task, so I went to build intelligence in computers, trying to build computers that are intelligent, and then potentially going back to neuroscience and the brain in the future, and really understanding the nature of intelligence and how to replicate it in a fully intelligent machine.

So you and I have talked about what is intelligence and what is artificial intelligence, and it's the most common question I ask on the show, but I want to ask you a variant of it, which is, why do we have so much trouble defining intelligence?

This is something I've asked myself. I guess I would hazard a really crazy answer: potentially it's fundamentally undecidable. I mean this in the sense of the book by Douglas Hofstadter, Gödel, Escher, Bach. He wrote this kind of singular book about the nature of intelligence and computation, and what it has to do with math and music and other domains, and the central theme of this book is that there is this kind of circularity to intelligence that is not a flaw but an inherent limitation of any formal system. Gödel's theorem states that in any sufficiently powerful and expressive logical system, a logical system with axioms, there are undecidable statements. You can state things in these logical systems that you cannot prove true or false within the system, and there's been conjecture that some mathematical hypotheses and conjectures potentially fall into these undecidable propositions. So that's a little bit far afield. But if you abstract out this thesis, that in any kind of complicated system, if the system tries to peer inside itself, there is potentially going to be an inconsistency, it's undecidable, and maybe a human trying to understand itself is also of this paradoxical nature. I'm not suggesting that it's impossible, but I think that's part of the reason why it's hard: there is this circular nature to a human trying to understand its own brain, and that is potentially a large part of the intractability. That's a pretty crazy idea, but I hope that made sense.

No that does, and I love the circular nature of that book, because the last sentence of that book is identical to the first sentence so...

Oh, I didn't know that, I have it here in my office.

Oh no, I hope I'm right. It's what I remember from reading it. I was like, "wait a minute... it starts over again..."

Right, it's a pretty miraculous book, and it's interesting, just bringing this up, that Hofstadter has had an interesting relationship with the history of AI. He wrote this book, it was an incredible achievement, and it got me really interested in artificial intelligence; that's where I started. I didn't quite understand the book. I had to read it several times to get a better grasp, and I'm still not sure if I fully understand it. But then he kind of disappeared for a long time, and he has changed his message. He talks about analogies and so forth and so on, but he's still kind of on the side, and the dominant form of AI today is statistical.

He is much more on the propositional logic, grammar and so forth side. I guess a question I have for myself is: this has been going on for a while, that we swing from one extreme to another, and I'm wondering when we're going to start going back. As a researcher in this domain and someone in the business, I have to be mindful of this, because this shift can have large implications. You push statistics to a certain limit, and it seems like we're kind of there, and now we're going to swing back to the other domain, where we have to build expert systems and grammars and so forth and so on. You didn't ask this question, but it's something that's on my mind, and it has repeated: Skinner, Chomsky. Peter Norvig at Google wrote a really nice essay about this, and he has it on his website. It keeps recurring, and I'm just wondering when it's going to happen again.

Well, I'm going to ask you one more artificial general intelligence question, which is: Is it possible that a general intelligence is literally impossible to build? Not that it's going to happen in 500 years, but that it cannot be built, for any reason that doesn't resort to something supernatural. Is it possible that our intelligence is strong emergence, or something that simply is not reducible to a series of components? Is it possible?

I think it's highly unlikely. Is it possible that we will never...? That would be quite an amazing thing if it's that deep a problem. It would be kind of interesting that we cannot build an intelligence because it's a property of the universe, maybe it's some grand impossibility... but I think it's unlikely.

I think the argument would be: we don't understand our brains—fair enough, okay, that might just be biology—but then we have these minds, we have these abilities that we don't really know how to derive from cells, like a sense of humor, and then we have consciousness. We experience the world, we feel temperature, whereas a computer merely measures it; we can feel warmth. And you have these three phenomena, and the last one, consciousness—we don't even know how to pose that question scientifically, let alone answer it scientifically. So if the one thing in the universe we know has general intelligence has these properties that cannot even be asked about scientifically right now, then isn't it hubris to say it's definite that we can build one?

Yeah, my intuition on this question is that it's actually a tractable problem that we're going to solve, and it's good to feel very uneasy at any point in time about whether you can or cannot solve this problem. That means that you're kind of hitting the boundary of our current knowledge and you're going to push beyond that boundary. If you don't have that uneasy feeling, then you're probably not working hard enough.

We understand some things about biology, but as I said before, it's a very humbling experience. If you look at a jumping spider, a type of spider that has on the order of hundreds of thousands of neurons, it has different visual components, different kinds of eyes. That little spider can perform a really remarkable set of behaviors that are intelligent. It can identify objects, it can navigate a 3D environment, it can stalk prey—that already, for me, is an intelligent entity. It's quite hard to understand and characterize, much like the nematode worm. I think we are going to understand things like that, and then we can start understanding things like mammals and rats and monkeys and humans with time, and I think this kind of inexorable progress will eventually reach consciousness. I actually don't know when that's going to happen; as you said, it's even hard to state what consciousness is, but it's actually a subfield of neuroscience. The person who got me into neuroscience, Christof Koch, has written a book about this, and it is progressing and people are thinking about it.

Well, you're right, and he in particular, I think, is fantastically interesting; I interviewed him earlier this year and got to ask him all my questions. You're right, he is treating it like it is a scientific question, one that is quantifiable through IIT (Integrated Information Theory), that can be measured and so forth. He would certainly say it's a computational kind of problem.

Yeah, I would agree with him, but he definitely knows more about it than I do.

He is brilliant. Anyway, now let's talk about narrow AI, and I want to ask you about things we should or shouldn't do with it, and I want to go back to the 1960s and Weizenbaum and Eliza. The setup is this: Weizenbaum, an early computer science guy, wrote a chatbot in the '60s called Eliza, and it was a simple simulated psychologist. You would type, "I'm having a bad day." It would say, "Why are you having a bad day?" "I'm having a bad day because of my mother." "Why are you having a bad day because of your mother?" Blah, blah, blah.

Yeah, I'm familiar with it. It's actually still in Emacs, the text editor. If you type in ‘meta ex psychologist,’ I think then Eliza activates.

Wow, I did not know that. That's cool. Weizenbaum, though, saw that people were kind of pouring their hearts out to it, and they knew it was a machine, they knew it, but they still did... and this deeply bothered him, and he unplugged it and turned against AI. He said, "When the machine says, 'I understand,' it's just lying to you; there's no 'I,' and there's nothing that understands anything." So when it comes to those sorts of uses of AI, to provide emotional connections, maybe to lonely people or anything like that, the sorts of uses that really would have deeply bothered Weizenbaum, do you worry about those, or are those a relic of another way of thinking?

So let me just ask, what is the question—whether or not, when you're building a chatbot, it can be very simplistic, with simple grammar...?

Are there certain emotionally charged uses of AI that we shouldn't do, because they're corrosive to the concept of humanity, and therefore human rights and all of the rest?

Well okay, this is a great question. Actually, I've not thought about this at all, and I just came back from a conference where there were people on a panel discussing building chatbots that people would even... So when you get up in the morning, one of the participants said, you're not very eager to talk to a chatbot, because it's just not human and it's very easily discernible that it's not human. In the case of Eliza, maybe it did fool some people who were under a lot of psychological duress and anthropomorphized the system.

I haven't actually thought about this, but yeah, it is a pretty strong ethical issue. For example, Google recently announced a product that to me was just pretty remarkable, in that you can ask it to call and make an appointment for you. I don't know the details, but it was discussed in the press… shouldn't this agent disclose, when it's calling to make an appointment, that it's not a human? And I think this is a really important question. I think it should disclose that it's not a human, that there should be no confusion for the human interacting with a chatbot that he or she is talking to an artificial agent. Yeah, I think that is an absolutely important question. I feel fairly strongly that it should be transparent that you're not talking to another human.

But even taking it one step further: you know the desktop devices made by Amazon and Google. I have them both next to me, and they'll wake up if I say their names. They speak in human voices, and I interrupt them all the time: "Stop." I see my kids do the same, and it's really rude. You would never do that to a person, but of course it doesn't know. But the fact that it speaks in a human voice—likewise, if you really kind of pushed it and made a narrow AI helper that sits and talks to people and tries to carry on a conversation, it's rudimentary but it's built to look like a person, and when it breaks, you just throw it in the dumpster. With those sorts of things we build that look and sound like people, could the way we treat them have a corrosive effect? Even if I know that thing on my desk over there is just a thing, still I interrupt it, and then do I subconsciously start transferring that to the rest of my life?

Okay, so this one I haven't thought about very much either. I'm so immersed in this domain that I do use these devices, and I buy them just so I can be familiar with them, but for someone who is not well versed in computer science and AI, and doesn't know about the technology, I do feel like it could have a societal implication: these products are pervasive, they're not human, and they communicate in certain ways, and that could change the nature of how we communicate in general, and it could actually be corrosive.

I can imagine that, and it would have to be addressed through education, by the companies or just generally, as this technology evolves and becomes even more sophisticated. I'm not sure if this problem will go away or, more likely, will become worse. So I would like to just make a general statement about these kinds of questions, these ethical questions, and you asked very difficult ones. There are actually lots of low-hanging-fruit types of problems, like transparency, bias, robustness, and so forth.

What I've seen as encouraging is, I don't have the answers to these questions, but if you go now to a machine learning conference—like NIPS, the biggest machine learning conference, in December—many of the keynote speakers are from outside the domain. They're invited to come and talk to the machine learning researchers, and it's around topics like the one you suggested, or about the future of work, or about diversity, or about other large societal questions about the impact of machine learning and AI on society. What's really encouraging is that this is happening, and that the various smart people who run these conferences, who are professors or academics or work at companies, are being good stewards of the technology and are being very forward-looking. These questions are being addressed in these settings, whereas three or four years ago, that was not the case.

I completely agree with that, by the way. I find that people who are in this industry have, almost to a person, a real sense of the moment. There have been other times in the past, like [during] the Manhattan Project in the United States, and you read their contemporary letters and discussions, and they talked [about] "how will this be used, and how can this be misused?" And I find that people in the AI world are thinking about all these questions that I'm asking, like you and like me.

It's not surprising we don't have answers to them. I mean, I think a lot of the challenge… because I was about to ask you: when I listen to your language, you talk about computers learning, inferring, understanding, and all of these... seeing, knowing. And if you think about it, we know the computer doesn't understand anything. It doesn't see anything, it doesn't infer, it doesn't learn, but we have to use those words, because we don't even have the vocabulary for what it's doing, or it would be so cumbersome, because it would be caught up in statistical language about what learning actually is: patterns.

But what happens is… our very vocabulary, in essence, gives humanity to these devices, far beyond what they merit on their own. So it's like our very language… if I were to ask you, "Does a computer understand something?" we would just have to debate what that word 'understand' means. Our entire debate is constrained by a language made of words that were only ever intended to apply to people or higher mammals, and we're now having to figure out what to call these machines. And so I think it's normal that it's all confusing at this moment.

Yeah, I mean, since we talked about Douglas Hofstadter, who does think about this quite a bit: it's a system that's trying to understand itself from within, and that circularity is a challenge. We're communicating in a language that was intended for one thing, and we're co-opting it to describe things that computers really do not do. Computers do not see: there's a photodiode connected to a bunch of transistors, and there are these currents and so forth and so on. You're right, so I think if you're mindful of it, then that's great. Another way that I deal with it is, you actually make it into a mathematical or statistical construct, and then it's on paper and it's out of this domain of using natural language to understand itself. That's an inherently circular...

That was the original hope of AI. This goes all the way back to Descartes, the hope that everything could be expressed mathematically, and it sounds like you try to express mathematically as much as you can at least.

Yes, so these are actually questions I haven't considered since I was a kid exploring these ideas.

Wow, that makes me feel awful—I'm asking you fourth grade questions, okay.

No, no, I mean I've been so immersed in transistors and building chips and parsing large amounts of data. I think maybe it's a good time for me to step back and reflect as well. I've been mostly focused on the human side of computers and the consequences of technology, dual-use technology especially. These are things that everyone is mindful of in my domain, but not so much these more fundamental questions going back to Descartes. I kind of stopped in the 1930s, with Turing and von Neumann; that's where my field of artificial intelligence really started. But you're right that it does go deeper, and there are philosophical questions that have been discussed since before Aristotle.

You know I'm known to be an optimist, and I wear it proudly, yet I'm not naive, and I do know, like you, that these technologies can be misused. When people ask me, "What's the most perverse use of AI?" it seems to me that it relates to privacy. Specifically, we make AI that can read lips, and that's a great thing, unless of course a totalitarian government uses it to record all conversation. We make facial recognition. That's a good thing, unless that same government uses it to match those words to those people. We have cellphones. That's a good thing, and we can do speech-to-text, and now... In the past, we could all have some measure of privacy, because there were just so many of us, and so many phone calls, and so many of everything; now the very tools we're building for sifting through a gazillion medical journals can build profiles on people. In some countries we can address that through legislation, but in other countries, doesn't this technology allow totalitarian governments to really crank up their grip?

Okay, so, it's great you asked this question about privacy. Actually, privacy is the top focus in my group at Intel: how do you do machine learning with some sort of guarantee of privacy? Of course it's also in the news and it's of great concern to everyone, and I think my co-worker Casimir Wierzynski had a really good analogy for it: we're kind of poised, we're making progress, in the direction of going from HTTP to HTTPS. You're going from unsecure websites to secure websites.

That happened over time. Now, to make the analogy more concrete, there are ways to do private machine learning. There are, of course, systems-level things you can do to make your data secure. And if I have health records, or personal financial information that I don't want anyone to know—whether it's a totalitarian government or a teenage hacker—there are different ways to do machine learning with guarantees of privacy. One of the terms is differential privacy: it's a statistical way of guaranteeing that you can't reverse engineer a query to a database through a statistical model. There is homomorphic encryption, and there is federated learning, which is very pragmatic but potentially not as secure.
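For the differential-privacy idea specifically, here is a minimal sketch of the Laplace mechanism, one standard way to answer a numeric query with a formal privacy guarantee. The records, the query, and the epsilon value are made up for illustration and are not Intel's implementation.

```python
# Laplace mechanism: add noise scaled to (sensitivity / epsilon) so that any
# single record changes the output distribution by at most a factor of e**epsilon.
import numpy as np

rng = np.random.default_rng()

def laplace_release(true_value, sensitivity, epsilon):
    """Return a differentially private version of a numeric query result."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# Toy example: privately release how many records show an age over 65.
ages = [72, 45, 67, 81, 33, 59, 70]            # hypothetical health records
true_count = sum(age > 65 for age in ages)     # counting queries have sensitivity 1
private_count = laplace_release(true_count, sensitivity=1.0, epsilon=0.5)
print(f"true count: {true_count}, private release: {private_count:.1f}")
```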

There are certain ways for us to build models that are more private, so that's one thing. The other thing is that privacy of course has societal implications and public policy implications, and what's really reassuring is that I, a neuroscientist and not a public policy person, just came back from a conference that was entirely about public policy, where I was immersed in a pool of people that included congressional staffers and people from various regulatory bodies and NGOs and so forth, discussing exactly this topic. So that is happening, and that's reassuring, as are the researchers producing this technology. If a researcher produces a technology that's able to mimic Obama's image and speech, they do point out that this technology can be used nefariously, and they're ahead of the game in terms of saying, "Hey, I did this thing and I know it could potentially be used for bad things, just be cognizant of it." I will be mindful of it too, as this technology develops. You're an optimist; I'm also an optimist in this regard.

What I heard you just say, though, is, "If we want to, we can build privacy safeguards in place." My question is about all the state actors that don't want to. Every technology that comes along either empowers people or empowers oppressors, and you can see how the tide kind of shifts. I think the internet is a great source of new information; it makes it very hard to control information sources, and so I think it's an empowering technology. But with technologies like artificial intelligence, speech recognition, and all of that, in an adversarial country where the government and the people are not necessarily in sync, do those actually empower the government more?

So I think this is something that applies to technology generally: almost any technology is dual use, and this problem has also manifested itself recently in my domain. I'm not sure how to exactly answer your question, but as I mentioned, it is a topic of our research. We want to build more private ways of doing machine learning, and I think it's a really great thing that we're investing so much in it, because we will reap societal benefits from this. I cannot stop an authoritarian regime from doing nefarious things, but if I create systems that can read lips only in certain contexts… I don't know how that would actually occur, if you can obfuscate your lips. I'm sure there are ways, but if we can build systems that are better machine learning systems, but also have privacy and some ethical properties, some safeguards, built into them, then that's as good as we can do.

I had a guest on the show put forth an interesting theory. He said that privacy is this temporary fad that we went through. We used to live in these villages of 300 people, and you lived with two generations of your family and everybody knew everything about everybody. The mailman would say, "you got a letter from... and then..." and everything about you was known, and this goes back, say, 10,000 years. Then you briefly had this little period where the industrial revolution came along, and people left their homes and went to the city and got an apartment and had money, and they had this thing called privacy, and it was really unusual. He believes that we're going back to the old ways, where you're going to know everything about everybody, you assume you have no privacy, and that the Victorian reality will reemerge, where you know all the dirt about somebody else, but it would be impolite to mention it.

So what is your question?

So my question is, is privacy a temporary fad that we're not going to care about in the future?

Okay, so I'll attempt a couple of answers, and then I'll have to check myself on whether I made any sense. There is a strong incentive for businesses to ensure privacy for their customers, so there is a business use case: we want people to use our service, and if you don't guarantee that they're going to be protected, then they're not going to use you. If you're a bank and you share information (banks do share information, as has been pointed out) and you do it too much, your customers are going to move to a different bank, or to a credit union or whatever.

There are business incentives for doing the right thing, and then there's government regulation. GDPR is a set of privacy regulations that was recently enacted in Europe and is now spreading around the world. It encodes some of our moral values and ethics into regulation: consumers have to be protected in terms of privacy. Those are rules now, and businesses are adopting these rules and modifying their behavior to make sure they comply. I think this will be a virtuous thing, not a negative thing; they will see the benefits of protecting their customers. The customers will want to use your services more and trust them more, given that there are these guarantees. So I think that we're going to move not in the direction you were suggesting—dystopian future, Victorian future—but in the other direction, I hope. It's going to be enabled by regulations and business incentives and potentially other things that will drive us to be more private, and also by algorithmic improvements, more secure systems, and more well-thought-out systems that will keep us more secure and private.

So one last question along these lines: let's say you killed somebody and you need help from somebody you trust implicitly, a best friend, a brother, a mother, what have you, to move the body. Now clearly—I stress this is hypothetical—if you're in the room with them, and then you whisper in their ear, "I need you to help me move a body," you're not worried about it, right? Would you ever (a) say that on a cell phone today, or (b) put it in an email?

Hmm, okay. Let's see. I mean, if I do something really bad, like eat someone's chocolate, I would still be a little bit nervous; I don't have to kill someone to be panicked. So, let's remove the homicide from the equation. I think I would be nervous about either form of communication, because I know how they can both be hacked, of course, email versus phone... So the cell phone, I would naively think, is more secure, because there's no history of it.

So you've given up having an expectation of privacy then? Whereas with the whisper you're not worried about it at all?

You're right, at some level these systems can be hacked. I mean, my whisper can go out into space and some alien civilization could reverse engineer it...

But you're not really worried about that; you really would whisper it to somebody: "I just ate that chocolate, I need to go out and get more," right? So let me ask you one more question along these lines. I will put forth a view that I actually hold, and I would like you to rebut it.

You mentioned recent changes in Europe and the legislation that was passed, and one of the things in it is explainability. The AI makes a decision about you, like denying you credit, and you have a right to know why. From my way of looking [at it], it seems to me that if I sold pool cleaning supplies, and I called Google one day and said, "When you google 'Pool Cleaning Supplies, Austin,' I'm number four and my competitor is number three... why are they three and I'm four?" Google would have to say, "I have no idea, there are so many things that go... there are 50 billion webpages and you want us to explain the difference between three and four?" It cannot be done, no human can do it, and if in fact it were ever legislated that everything has to be explainable, or everything that affects you [does], that is unquestionably a limit on the advancement of AI. Your response?

Okay, so on the issue of explainability, or transparency of models, and the issues raised, there is a little bit of a historical progression in our domain, and I just want to state it. One of the things we've been trying to do with AI models to make them relevant is to improve their performance. We want to have vision models that can actually identify objects with high certainty. We have chatbots and chat devices in our kitchens and around the house that can recognize our voices. We've been very focused on getting these things to be relevant: I talk to it and it understands me, and it does so with high probability.

There's been a heavy emphasis on the performance of machine learning models, and that's been at the cost of other things, one of which is transparency and explainability. I think what's happening now is that, in the process of building machine learning systems, the machine learning researcher has to understand what they're doing so that they can make better models. The model has to be explainable to the researcher. They have tools to do that, ways of peering inside the model, looking at the weights, much like a neuroscientist peers into a brain, and those things are underdeveloped relative to performance. They are rapidly getting better and better, but as for ways to investigate machine learning models to understand why you're number four and this other person is number three, we're nowhere near [that].
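As a small illustration of what "peering inside the model" can look like in the simplest case, here is a sketch with a linear model; the scikit-learn usage, feature names, and data are assumptions invented for the example, not anything Intel described.

```python
# Inspecting a simple model's weights: a toy, made-up credit-decision example.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_k", "debt_ratio", "years_employed"]   # hypothetical features
X = np.array([[55, 0.30, 4], [28, 0.65, 1], [90, 0.20, 10],
              [40, 0.50, 2], [75, 0.25, 8], [33, 0.70, 1]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])                                # 1 = approved

model = LogisticRegression(max_iter=1000).fit(X, y)

# For a linear model, the coefficients are directly inspectable: sign and
# magnitude say how each feature pushes the decision. Deep models need more
# indirect tools, which is part of the explainability gap discussed above.
for name, weight in zip(feature_names, model.coef_[0]):
    print(f"{name:>15}: {weight:+.3f}")
```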

There's a lot of promise in this direction of building tools to understand models, and also of building more explainable and transparent models. This is a really difficult challenge, because if you ask yourself whether you want an MRI machine and a diagnostic set of software that identifies a potential tumor to be explainable or to be high-performance, I think you would choose the latter. You don't care how it's doing it, you just want it to be really good. So there is a tradeoff there, and we've sacrificed for performance, and I think we're going to start catching up on explainability and transparency.

I can tell you these are systems with numbers and matrices and operations, and they're deterministic, mostly, and we can figure out how these things work. But if you look at humans and human behavior, we're building AIs of various forms, and humans are extremely non-transparent and unexplainable; they have these inherent biases, they're quite mysterious. I think if we're going to build AI systems, the expectation should be that they should be better than humans. But if we make them too transparent, then we're going to lose performance. I think that's an attempt to answer your question...

No that's fair. You just used a phrase, you said "they're deterministic, mostly." Which ones aren't deterministic?

Well, there are statistical models that emit probability distributions; that's what I mean. What's in this scene? It's not "a dog" or "a cat," it's 30% dog, 70% cat.

Right, but every time you run that photo through, it's always the same, the model is deterministic.

Yes, the software/machinery is deterministic, for the most part. There are models that emit statistical ...
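A tiny sketch of the point in this exchange (the numbers are invented): the model's output is a probability distribution, for example roughly 30% dog and 70% cat, but the computation that produces it is deterministic, so the same input yields the same distribution every time.

```python
# A model's output can be a probability distribution while the forward pass
# itself is deterministic: the same logits always give the same probabilities.
import numpy as np

def softmax(logits):
    z = np.exp(logits - logits.max())   # subtract max for numerical stability
    return z / z.sum()

logits = np.array([0.50, 1.35])         # fixed toy scores for [dog, cat]
probs = softmax(logits)
print(dict(zip(["dog", "cat"], probs.round(2))))   # ~30% dog, ~70% cat, same every run
```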

Fair enough, well I have taken all of your time. It's been a fantastically interesting hour, I wish we had another one, and you're clearly a very reflective person about all of these [issues] and I thank you for sharing your reflections with us.

I appreciate your time as well… really interesting questions, great discussion.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.