Episode 69: A Conversation with Raj Minhas

In this episode Byron and Dr. Minhas talk about AI, AGI, and machine learning. They also delve into explainability and other quandaries AI is presenting.



Raj Minhas has a PhD and MS in Electrical and Computer Engineering from the University of Toronto, and his BE from Delhi University. Raj is also Vice President and Director of the Interactive and Analytics Laboratory at PARC.


Byron Reese: This is Voices in AI, brought to you by GigaOm, I'm Byron Reese. Today I'm excited that our guest is Raj Minhas, who is Vice President and Director of the Interactive and Analytics Laboratory at PARC, which we used to call Xerox PARC. Raj earned his PhD and MS in Electrical and Computer Engineering from the University of Toronto, and his BE from Delhi University. He has eight patents and six patent-pending applications. Welcome to the show, Raj!

Raj Minhas: Thank you for having me.

I like to start off just by asking a really simple question, or what seems like a very simple question: what is artificial intelligence?

Okay, I'll try to give you two answers. One is a flip response, which is if you tell me what is intelligence, I'll tell you what is artificial intelligence, but that's not very useful, so I'll try to give you my functional definition. I think of artificial intelligence as the ability to automate cognitive tasks that we humans do, so that includes the ability to process information, make decisions based on that, learn from that information, at a high level. That functional definition is useful enough for me.

Well I'll engage on each of those, if you'll just permit me. I think even given a definition of intelligence which everyone agreed on, which doesn't exist, artificial is still ambiguous. Do you think of it as artificial in the sense that artificial turf really isn't grass, so it's not really intelligence, it just looks like intelligence? Or, is it simply artificial because we made it, but it really is intelligent?

It's the latter. So if we can agree on what intelligence is, then artificial intelligence to me would be the classical definition: re-creating that outside the human body, re-creating it by ourselves. It may not be re-created in the way it is created in our minds, in the way humans or other animals do it, but it's re-created in that it achieves the same purpose: it's able to reason in the same way, it's able to perceive the world, it's able to do problem solving in that way. So without necessarily getting bogged down by what the mechanism is by which we have intelligence, and whether that mechanism needs to be the same, artificial intelligence to me would be re-creating that ability.

Fair enough, so I'll just ask you one more question along these lines. So, using your ability to automate cognitive tasks, let me give you four or five things, and you tell me if they're AI. AlphaGo?


And then a step down from that, a calculator?

Sure, a primitive form of AI.

A step down from that: an abacus?

An abacus, sure, though it involves humans in its operation. Maybe it's on that boundary where it's only partially automated, but yes.

What about an assembly line?

Sure, so I think...

And then my last one: a cat food dish that refills itself when it's empty? And if you say yes to that...

All of those things to me are intelligent, but some of them are very rudimentary. For example, look at animals. On one end of the scale are humans, who can do a variety of tasks that other animals cannot, and on the other end of the spectrum you may have very simple organisms, single-celled ones. They may do things that I would find intelligent, but they may be simply responding to stimuli, and that intelligence may be very much encoded. They may not have the ability to learn, so they may not have all aspects of intelligence, but I think this is where it gets really hard to say what intelligence is. Which is why I gave my flip response.

If you tell me what intelligence is, I can say I'm trying to automate that with artificial intelligence. So if you were to include in your definition of intelligence, which I do, that the ability to do math implies intelligence, then automating that with an abacus is a way of artificially doing it, right? You had been doing it in your head using whatever mechanism is in there, and you're trying to do that artificially. So it is a very hard question that seems so simple, but at some point, in order to be logically consistent, you have to say yes, if that's what intelligence means, then that's what I mean, even though the examples can get very trivial.

Well I guess then, and this really is the last question along those lines: if everything falls under your definition, then what's different now? What's changed? I mean, a word that means everything means nothing, right?

That is part of the problem, but I think what is becoming more and more different is the kinds of things you're able to do, right? We are able to reason now artificially in ways that we were not able to before. Even if you take the narrower definition that people tend to use, which is around machine learning, we're able to use that to perceive the world in ways we were not able to before. So what is changing is that ability to do more and more of those things without relying on a person at the point of doing them. We still rely on people to build those systems and to teach them how to do those things, but we are able to automate a lot of that.

Obviously, artificial intelligence to me is more than machine learning, where you show something a lot of data and it learns just one function, because it includes the ability to reason about things, to be able to say, "I want to create a system that does X, and how do I do it?" So can you reason about models, and come to some way of putting them together and composing them to achieve that task?

Well you've used the word 'reason' three times in that last bit, do you believe computers can reason?

Again, I think this will also come back to the whole point we were talking about before. Yes, I believe computers can reason. That reasoning may be very trivial; it may involve simply searching over some state space to evaluate some properties and say which states of the world do not satisfy a property that I care about, but to me that's reasoning.
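[The minimal sense of reasoning Raj describes, searching a state space for states that violate a property, can be sketched in a few lines of Python. The three boolean "sensors" and the safety property here are invented for illustration.]

```python
from itertools import product

# A toy "reasoner": enumerate a small state space and report which
# states violate a property we care about. The (hypothetical) world
# state is three booleans: gps_ok, battery_ok, flying.

def violates_property(state):
    """Property: the system may fly only if GPS is up and battery is ok."""
    gps_ok, battery_ok, flying = state
    return flying and not (gps_ok and battery_ok)

states = list(product([False, True], repeat=3))
bad_states = [s for s in states if violates_property(s)]

print(f"{len(bad_states)} of {len(states)} states violate the property")
for s in bad_states:
    print(dict(zip(["gps_ok", "battery_ok", "flying"], s)))
```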

So, specifically about machine learning, which you just touched on as a technique to implement some kinds of AI: the core assumption behind machine learning is that the future is like the past, right? I take a bunch of data about the past, I study it, and I make predictions about the future. That's a good way to think about what a cat looks like: it probably looks the same tomorrow as it did yesterday. But are there problems, in your mind, where that approach just does not work, because the future really has no precedent in the past?

Yeah, I mean so we can look at this in, well I tend to look at this through how we view the world, right? There are a number of things where we rely on the world being the same tomorrow as it was yesterday, and they allow us to operate our daily lives. They allow us to go about doing the daily things, and then we are surprised, we learn new things, we discredit old theories and discover new ways to interpret the world, or we see new things. So we see the black swans for the first time, and it changes the mental model and we adapt and we learn, and so, while we rely heavily on the fact that the world today is more or less the same as it was yesterday, our view into that world may be bigger today than it was yesterday.

And we have the capacity to take the knowledge and information, to do something with it, to expand our view of the world, our model of the world to make new decisions based on that, to make that into the new mundane, right? So then the things that we have learned today become the new mundane and we expect them to be the same the day after tomorrow.

So, in terms of AI, it's no different. There will be a lot of things where our assumption is that things tomorrow are the same as today. If our aim is to recognize cats in images, that's not a bad way to go, right? Cats tomorrow are likely very similar to what cats are today. There might be a new species that comes out that is physiologically similar but has very different features or something we don't yet know about, but for the most part that assumption isn't bad. It's not complete, obviously: things change, we discover new things, we need to adapt to those things, we need to reason about those things. But for 95%, or I don't know how to quantify it, for a large fraction of what we do in our daily lives, we heavily rely on the assumption that the world today is the same...

I guess I'm thinking of different types of problems, like: could you study everything I've ever said, and predict what I'm going to say next? Can human creativity be seen as studying the past and projecting it into the future? Can you use ML to help make creative machines? Those sorts of questions. Where a whole different kind of thinking is involved, and you can't necessarily train it with data about the past? Or, is what I'm about to say actually derivable from everything I've said in the past?

So, I know you're sort of posing that as a way of making a point, but I think there's more truth to what you said about humans than one may initially be willing to believe, right? We do this all the time, we say, "okay, I spent time with this person, I know how he behaves, therefore I trust him." And that's not to say that the person will not tomorrow do new things or come up with new ways of thinking about the world, or that person can’t surprise us, and our view of that person may be wrong.

Even though everybody agrees that humans are creative and they are able to respond to new things, we do this all the time, where we say "I trust X, because of these past behaviors," or "I don't trust X because of these past behaviors." Where we're saying, what we have seen in the past is an indicator of how they will do things in the future, we say that humans are creatures of habit, and that essentially is saying the same thing. Being creatures of habit does not preclude us doing things in new ways that others haven't seen before, we surprise ourselves sometimes by being in situations and doing things that we didn't think we would do, right? So we update our model of the world, but there is more truth to that than what we may be willing to admit at first.

Well I don't have any problems with that. My email program, or my phone when I'm texting somebody, is constantly trying to guess the next word I'm going to say based on all the words in the past. I'm just wondering, are there logical limits to that? I guess it's a different way for me to ask the question: do you believe we're going to build a general intelligence? Let me ask it a different way: do you believe we can build a general intelligence using techniques that we already know? Can you build Commander Data from Star Trek, C-3PO from Star Wars, Sonny from I, Robot, just by studying data about the past? Or would a general intelligence be something different, using completely different methodologies and ways of understanding the world?
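[The next-word guessing Byron mentions is, in its simplest form, an n-gram model: count which word most often followed each word in the past, and predict that. A toy bigram sketch, with a made-up training sentence:]

```python
from collections import Counter, defaultdict

# Train a bigram model: for each word, count what came next.
def train_bigrams(text):
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

# Predict the most frequent follower of a word, or None if unseen.
def predict_next(model, word):
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

history = "the cat sat on the mat and the cat slept"
model = train_bigrams(history)
print(predict_next(model, "the"))  # -> cat ("cat" follows "the" twice)
print(predict_next(model, "on"))   # -> the
```

Real autocomplete uses far richer models, but the principle is the same: the past distribution of words is the only evidence for the next one.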

So that's a very interesting question. No I do not believe that we will achieve general intelligence by the techniques and things that we already have, but I don't know whether that is achievable using other means or not. In technology, I think it's a fool's errand to say something cannot be done just based on hunch. Either there are physical limits that prevent that, and there might be that we don't know about, or it's always possible. So, I don't know the answer to the broader question of whether it's achievable using things that develop in the future, but I don't think it's doable using ideas and techniques that we have now.

Building a machine learning system that looks at data from the past and classifies something or predicts something about the future based on the techniques that we use now, while very, very useful for the tasks it does, is not my notion of broader intelligence, which includes the ability to respond to unseen things, the ability to have imagination about things that we haven't seen.

I don't know how crucial those things are, because that's still assuming that the way our minds work is how an artificial intelligence would also work, and the mechanism for that may be completely different. It may be able to reason about new things without having the same notion of imagination that we do, but the current techniques, I don't think, are anywhere close to giving us a path to artificial general intelligence.

So somehow what the human brain, or the human mind if you prefer, does is different from: 'I've lived a certain life and had a certain set of experiences, which is my training data, and somehow I'm processing that data in a certain way and making guesses about the future.' It seems implicit in what you just said that the human brain is doing something more, or different, than that; that construct of the brain would just be ML, if that's all one thinks we are.

But again I don't think that's all we are...

What could that be then? If we're not just taking data about the past and somehow making inferences about the future, what would be, not even a theory on what it is, but an example of what else could be going on in the human brain?

Sure, so I think in our brain we have a notion of the world, and that notion of the world is there even though we have not seen a lot of that world. I think that's something that we are born with. All animals are born with it, so a lot of mammals, more than mammals, I think a lot of animals are born and they have a notion of the world and as soon as they're born they start operating with that notion of the world and behaving, responding to stimuli and going about their lives.

As humans we do too. I believe that we are born with some notion of the world that's passed down to us in our genes, that led to the development of our brain in a specific way that is then open to learning more instances of that world, more nuance, more information, more everything. But the brain that we are born with is not a blank slate. It does have some model of the world: a model of what I need to do to get attention, what I need to do in order to get fed, what I need to do in order to be picked up. I mean, even as kids, before we have seen anything, we respond in those ways.

A machine learning system doesn't yet have a model of the world in it. My background is in control systems, and control systems, machine learning, and I think artificial intelligence have very common ancestry. In control systems, one of the ways we talk about adding robustness to control loops is by what is called the internal model principle, where you have a model of the world, a notion of how a system responds if you do X, and that allows us to build in some kind of extrapolation, some kind of robustness to things we haven't seen. So while I don't know what model of the world we have in our brains, I do not believe it's a blank slate. I believe there is a model of the world that is biased [in favor of] specific ways of learning. It comes with some information about what exists and what the ways to survive in it are, and on top of that we add color and nuance and all kinds of things through our lifetime of learning.
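[The internal model principle Raj mentions can be illustrated with a toy discrete-time control loop; the plant dynamics and gains below are invented for illustration. Under a constant disturbance, a proportional-only controller settles with a persistent error, while adding an integrator, which is an internal model of a constant signal, drives the error to zero.]

```python
# Toy illustration of the internal model principle: to perfectly reject
# a constant disturbance, the controller must contain a model of that
# disturbance class. Here the integrator plays that role.

def simulate(kp, ki, steps=500, disturbance=2.0, setpoint=1.0):
    y, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error                 # internal model of a constant
        u = kp * error + ki * integral    # PI control law
        # first-order plant: output moves toward input plus disturbance
        y += 0.1 * (u + disturbance - y)
    return setpoint - y                   # final tracking error

print(f"P only: error = {simulate(kp=1.0, ki=0.0):+.3f}")   # stuck near -0.5
print(f"P + I:  error = {simulate(kp=1.0, ki=0.05):+.3f}")  # near zero
```

The proportional controller has no model of "a constant push exists," so it can only balance the disturbance at the cost of a standing error; the integrator encodes that assumption and removes it.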

It is true, we're born with a bunch of abilities. A baby doesn't necessarily have to be taught to nurse. You can hold up paper plates with angry and happy faces on them and a baby can tell the difference between them. There seems to be some innate knowledge of gravity, like, don't crawl off the cliff. Babies seem to have an innate fear of snakes at a very young age, even when they haven't been taught [that]... And it's really fascinating to think that somehow all of that is embedded in that 30Mb or whatever amount of DNA we have, that it all comes from one cell multiplying over and over and over again. It's really inexplicably interesting, isn't it?

Yeah it's amazing, it's incredible.

So let's talk a little bit about what you do at PARC. So broadly talk about PARC's mission and history briefly and then what is your role there?

Okay so let's talk about the history. PARC was established in 1970 by Xerox to build the office of the future. That was the mission of the charter, to build the office of the future, and that led to all kinds of amazing things, [including] the personal computer, the ethernet, the laser printer. Things that enabled the office of the future led to all kinds of innovations in the IT industry that we see today.

Those innovations came out of that initial kernel of people who came together with that kind of well-defined mission, but no definition of what the technology should be. That's what they built. And so, in 2002, Xerox Corporation, the parent company of the Palo Alto Research Center, decided that a lot of innovation had come out of PARC, but not all of it had made it into the world through Xerox. Even just thinking about it, the biggest value that Xerox itself got out of it was through the laser printer; in terms of computing, languages, and the graphical user interface, a lot of those innovations came to fruition through other routes.

So Xerox decided that it was time to maybe try that as a model, and so PARC was spun out and became a fully-owned subsidiary of Xerox, with a charter of doing research and innovation with an open innovation mindset. What that meant was that, over time, we still do a lot of work for Xerox, roughly half the work we do starts with them, but the other half is broader than that. We do a lot of early-stage work funded by government grant agencies like DARPA, ARPA-E, the National Institutes of Health, the Department of Transportation, NASA, the Air Force and Naval Research Labs, and things like that. That allows us to build core capabilities, and then we also try to bring those to market through partners. We work with Fortune 500 companies and startups, and so it allows us to bridge the whole spectrum from early-stage research to bringing things to market through whoever the right partners are.

As an example, currently we are doing a project funded by DARPA on building explainable AI models, and in that sense AI is construed more narrowly as machine learning models. So: explainable machine learning models, especially targeted at some of the deep neural nets, which, because of their inherent complexity, are more like black boxes. DARPA is interested in creating explanations for scenarios where autonomous systems are being driven by these machine learning systems.

We're doing that early-stage research. Right now it's very focused on defense applications, autonomous drones carrying out missions, but as you might imagine, it's very applicable to a whole bunch of other scenarios. We'll generalize those results and try to bring them to market through our parent company Xerox, or through other commercial partners, carrying them to other domains and other civilian kinds of applications.

So, and within that, what does your day-to-day look like?

Are you talking about me personally or a day at PARC?

No, you personally.

Yeah, so I lead the AI Research Lab at PARC. We are a small place, so we need to focus on where we think we are going to do things differently than others. There is a lot of good work going on at a lot of other companies and universities, and so we try to focus on where we think we will add value. My day-to-day is working with our team to figure some of those things out, contributing some ideas, making sure that we have the funding to do those things, making sure that we are publishing, capturing the IP where it makes sense, and generally building an environment where people can do interesting and crazy things.

Let's talk a little bit about explainability. I know you said it was DARPA work, but I assume you can talk about...

Yes we can talk about it. We don't do any confidential top secret work. Everything we're doing here will be published and I'm happy to talk about it.

So I have two questions about explainability. The first one: I think it was Groucho Marx who said he didn't want to join any club that would have him as a member. Isn't any AI that's explainable, at some point, not such a big deal? If we can understand it, then we probably could have figured it out ourselves. Granted, a computer did it faster, but it's not giving us insights that are so far above us that we cannot understand them. I guess I have several questions about it, so I'll just start with that one. Is it putting shackles on AI to say it needs to be explainable?

I think those are two different questions, and I'll try to answer both. The first is: if you are able to explain what the AI does, you could sort of have figured it out yourself, so it's not useful anymore. I think that relies on the assumption that what we are trying to explain is the mechanism. In fact, we're explicitly not trying to do that, because there is no easy way to say how millions of these parameters interact together to produce the result. Similarly, it's very difficult to say how our brains function, but I'm still able to explain to people what I mean.

Explainability is sort of like intelligence: it is one of those things that is simple to understand but hard to explain, and so we had to be clear about what we mean by it. Explainability for us is a way to explain not how an answer was generated, but why. You should be able to ask, "How would I change things for something to be different?" In the context of drones, for example, the task might have been to fly autonomously and provision a lost hiker, and it didn't do that: it searched and it came back, and you ask the question, "Well, why didn't you go this way, why didn't you go that way?" It doesn't need to explain in terms of its parameters and underlying components, but it needs to be able to answer in terms of concepts at a higher level: there was a fire there, and because of the smoke my sensors weren't able to pick up anything, so I didn't know what to do, and therefore the decision was to wait until the smoke subsided. That is explaining an action in terms of its consequences for the task, rather than explaining how the various parameters came together.

And the other question you had was, "Does it put shackles on AI?" It really depends on the application. In some applications you may care about getting an explanation because either at the time of training you need to build trust, or afterwards, in a forensic manner, you need to be able to reason about it. You may not need to generate an explanation all the time, but when you need to, the system should be able to provide one. In a lot of scenarios that may not be the case: if the application doesn't call for an explanation, that may not be a constraint you put on it. I don't know if that answers your questions.

I'm having just a little trouble understanding the distinction you're drawing. Going back to your drone example, the 'why' question. Let's take a question from history: why did the Roman Empire fall? There's no more contentious question than that. There are so many factors at play, and every historian has their own theory, down to lead poisoning and the Barbarians and the devaluation of the currency. I mean, you can come up with a hundred reasons why different people believe the Roman Empire fell, and that was a big event. So how would an explainable AI construct an answer if you asked, "Why did the Roman Empire fall?"

So the AIs we have are nowhere close to being able to reason about those kinds of events. The explanations we're generating for AIs are about the kinds of tasks they are able to perform. The AI in this scenario isn't trying to ingest information about the world and create explanations about what happened in the world. The explanations that we're trying to create are about why it took a specific action, right?

Let me try a different case then. If I called Google and said, "When you search for ice-cream makers, I come up number five and my competitor comes up number four. Why am I five and they're four?" I could see people at Google saying, "Who knows? There's no way to know that... 50 billion pages..." So that is a computer system that by law may be required to explain why this ranked higher than that.

And that's a good example, so let's discuss it. Again, at least at the level of AI we have and the level of explainability we have right now, our approach is to narrow the scope of what we're trying to do. One explanation that may be sufficient for these purposes, and that we may be able to generate using some of the ideas we're coming up with, is a counterfactual. We may not be able to explain exactly why this came about, but we should be able to talk about some decision boundaries, and so be able to say, "What would have to change for you to go to number four? What does that decision boundary look like? What is the smallest change that you would have to effect for the ranking to change?"

That still doesn't give you a broad notion of explainability, but it gives you explainability in the sense that you can act upon that information to make a change in the world, right? You can say, "We don't know exactly how this came about, but these are the changes you can make." For example, in your case, if three more influential websites like CNN were pointing to you, our system's ranking would put you at number four. That doesn't give you much insight into how the ranking was generated, but it gives you some insight into where the boundaries are, where decisions change, and what you would need to do to effect those changes. So when you think about laws that may require explainability, that may be the kind of explainability you get in the beginning.

As AI improves and our ability to build these systems improves, maybe the explanations improve. The reason for the explanation at this point is for you to effect some change in the world, and for that, giving you a notion of how far you are from a decision boundary and what kinds of input changes will push you towards it may be a good enough explanation. Maybe not, but that is a hypothesis we have.
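[The decision-boundary counterfactuals Raj describes can be sketched against a toy linear scoring model. The features, weights, and scores below are entirely invented for illustration; they are not Google's ranking signals.]

```python
# Toy counterfactual explanation: given a linear score and a target
# (the score needed to take the next rank), compute, for each feature
# alone, the smallest increase that crosses the decision boundary.

weights = {"inbound_links": 0.5, "keyword_hits": 0.2, "page_speed": 0.3}
you = {"inbound_links": 4, "keyword_hits": 10, "page_speed": 6}
competitor_score = 6.1  # hypothetical score of the site ranked above you

def score(features):
    return sum(weights[k] * v for k, v in features.items())

def counterfactuals(features, target):
    """How much must each feature increase, on its own, to reach target?"""
    gap = target - score(features)
    return {k: gap / w for k, w in weights.items()}

print(f"your score: {score(you):.2f}")
for feature, delta in counterfactuals(you, competitor_score).items():
    print(f"increase {feature} by {delta:.1f} to overtake the competitor")
```

This answers "what is the smallest change that flips the decision?" without ever explaining the model's internals, which is exactly the distinction drawn above.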

Fair enough, and that's sort of measuring the sensitivity of the model, right?

Maybe. Sensitivity is part of it, right? It tells you what the biggest lever is. But this is maybe also giving you much more actionable information that says, "Here are three things you could change to bring about a different decision."

But in reality, just to go at this example, which is fictitious, the answer would be: well, if three more high-profile sites linked to you; or if you had two more occurrences of the word; or if you had one more occurrence of the word and one more site linking to you; or if you had 327 more words; or 14 more occurrences of this; or 19 percent more visitors from English-speaking countries; or 14 more people who bought ice-cream cones in the last year; or… A thousand different factors went into that, any one of which would adjust the rankings.

So picking one and saying, "Well, if you just had three more links, it would bump you," appears to be explainability, but it's really shirking the question, right? It would be like an employee saying, "Why did I get fired?" and you say, "You were late this morning." Well, that was true, but you also had 14 bad performance reviews, and your peers rated you as this, and you're suspected of stealing stuff... So if you just answer with the simple "Oh, three more links would do it for you," that's really a clever way to dodge the question, isn't it?

I'm not disagreeing with you. I'm not saying that this is a full notion of explainability; what I'm trying to say is that we need to make some of these things happen first, and then maybe you expand out from those. Take your other example, where somebody was fired and the explanation is that you were fired because you were late this morning.

I guess from our point of view, the counterfactual would be: what would you need to do to not get fired? And maybe the answer to that is "be on time consistently," and maybe this is your 15th instance. In some sense you're absolutely right that at some point you're sort of pushing that can down the road, but to me the question is: in the meantime, is it providing you something useful, more useful than before? Because whenever you do comparisons, you compare to what exists today. Is it going to be better compared to what exists today?

[On the other hand,] what is the desired end state? Maybe the desired end state is still a way off, but it isn't that either you are here today with nothing or you are at the other state with everything, right? Maybe there are incremental steps, and if each incremental step provides you some ability to act, some ability to change outcomes, is that useful or not? That's a question, and maybe it's not useful in every scenario...

I wonder, and again, I'm not chucking rocks at what you're doing; this is a big problem everybody has. But explainability assumes understandability, right? If an answer is not understandable by people, explainability can't exist. So if I asked, "Why did the hurricane hit Raleigh rather than Tampa?" there is an answer to that. It's just physics. But it may be beyond our ability to answer; it may be butterfly wings in South America; it may be beyond our cognitive ability to understand, and therefore you can't get explainability.

Can I pull on that thread a little bit? It might be that it says, "Oh, there was a low-pressure area here that caused the change in direction and caused the weather pattern to move this way rather than that way." That explanation may be sufficient for a lot of purposes, for making plans, for doing things, but it's not sufficient in the sense of why the low pressure existed there and not somewhere else, or whether it was because some butterfly flapped its wings in Argentina; it doesn't answer all the way down. It is still useful, in that it says: if there is a low-pressure formation there and the pressure is like this, the likely impact will be this, and so we should plan to be out of its way.

It's a partial explanation, but it's a useful explanation that you can use to plan to avoid injuries, loss of life, whatever the case might be. To me, a reason for explanation is not simply understanding, it's also ability to do something, and maybe you get partial understanding and you get the ability to do something, and we do that all the time. We have heuristics, we have rules of thumb where we don't understand what's going on, but we can use those to make decisions about the world, and achieve a better outcome than would be achieved without those heuristics that we don't completely understand.

Fair enough. Anybody who listens to this show knows I'm an optimist and I believe in this technology. I wrote a book about it, and I think that AI gives us the power to make better decisions, and that is universally a good thing; to argue against it is to argue for ignorance. And yet any technology can be used for ill as well as good, so I want to give you a few of the 'ill' use cases, and you tell me how you think they might be solvable, or if they're not.

The first one is individual privacy. Back in the day, all of our privacy was ultimately guaranteed because there were just so many of us writing so many things and having so many conversations. But when a machine can understand all speech, monitor phone calls, read lips (and we have AIs that can do that almost as well as a human), when every CCTV camera can also listen and every email can be read, you can, without a lot of difficulty, imagine a dystopian state. And it won't even require special tools, just normal data mining: "I have all this data about all these people; surface the people that are problematic, based on this definition of problematic." You don't have to think too far ahead to see how a government could misuse that. Is that an intractable problem, or what's the solution to that?

So that's an interesting question. It's one that I actually think about quite a bit, because we can see some of these things around the world. To me, one way to solve this, and many other problems that come from technology, is social norms about what is acceptable or not, right? What are we willing to give up in exchange for the advantages that come from having cameras at every intersection? England has made the choice to have CCTV cameras everywhere, where people are observed, and when a crime happens they can play the footage back and track things, and that led very quickly to finding the people who did the train bombings.

In China there was a recent experiment where a journalist tried to hide from the surveillance state and was found within ten minutes. But in many other countries the social norms are different. In Canada and the US we have more of a cultural norm towards privacy, towards keeping in check the power we give to others, especially the state.

In terms of technology, a lot of these things could happen today, and the places where they don't happen are places where there are social norms about how we want to behave and how we want to be governed. To me, there may be technological solutions that could help with these things, but those technological solutions are only going to be useful as long as we have strong social norms around them.

I grew up in India, [where] the idea of privacy was very different: in [my] small town, everybody knew everybody else's business. That was a cultural norm, and it was there before technology existed that could monitor people at a large scale. Then I moved to Canada for graduate study, where the social norms were very different.

If you take those two scenarios, in one case technology that monitors things may find a more receptive audience, until people start to value privacy and the individual's ability to opt out of these things. It may be that social norms lead to legislative norms. Technology will play a role in that, but it cannot in [the] absence of those norms.

Fair enough. Let me throw another one at you. We see a lot of high-profile attacks on corporations, banks and so on, where staggering amounts of, let's say, credit card information or identification information are stolen. And we're kind of used to that: the hackers do one thing, then the white hat people learn a new trick, then the hackers work around it. It's an old dance.

But with IoT devices, you don't have that, right? Any device that you plug in, like an oven that connects to the internet, is probably not upgradeable. Does it concern you that we kind of hardcode these devices, and we add, whatever it is, a million devices a minute to the internet, that are not upgradeable and have security vulnerabilities? Does it worry you that you can imagine scenarios where things can't be patched or fixed, and you have big problems, not just minor annoyances, as a result, because you can see it in larger systems too?

I mean, it is a big problem, and one where we are actually trying to do some research to figure out how you can look for intrusion and other problems in cyber-physical systems by incorporating, again, this sort of model of the world. But it is a problem that so many things are coming online without security and privacy being thought about from the get-go. And it is not unprecedented. Most technologies start out this way, with ‘let's see what can be done,’ and as they become big and useful, then we layer these other things on. Because it's not unprecedented, I'm hopeful that we'll be able to solve these issues, but it is definitely a big problem, and it will not be solved without a lot of steps along the way.

And what about infrastructure? Do you believe that in the West, things like the electrical grid, and water treatment and all that are fundamentally brittle systems that can be severely damaged, relatively easily?

Yeah, as I said, we are doing some DARPA-funded research in this area, because DARPA believes, and we believe, that we have some sort of handle on how to do that better. Those are definitely issues. Even in places where you don't think about it, or where you may think some things are air-gapped and cut off from the world, that may not be the only way to get to them. There are ways, using social engineering and other means, to get agents onto those systems that can disrupt the infrastructure. So a combination of two things, a lot of things being connected, and infrastructure being so crucial to the way life goes on, those two things together, I think, point to a number of problems that could happen, for sure.

So I'll give you just one more, which is a little off of AI. With CRISPR/Cas9 and genetic engineering, a relatively unsophisticated actor can build a pathogen, and you know all the scenarios. Does that worry you? Biotech, biohazard, biowar?

All of these things can be misused. For the longest time we've been talking about the dirty nuclear bomb, because those technologies have gotten small enough that they can cause impact. You can engineer pathogens, you can engineer them to target very specific kinds of genes, so you could imagine them targeting very specific populations. There are issues, of course, with all of these things, and there always have been with any kind of technology we have developed. I'm optimistic in the sense that throughout the years we have found technological, but also societal, ways of dealing with these problems.

Well, I agree with all of that, but that being said, it seems like the asymmetry is going up. You're right, [with] metallurgy you can make swords or ploughshares. Some people make swords, some people make ploughshares, but no person with a sword is going to kill a billion people. The idea that with less and less effort you can create ever more destructive forces, that does feel somehow different. But comment on that.

But then I'm going to push on this a little bit. You know this from history: people in Europe, who were immune to certain diseases because they grew up exposed to them, used smallpox to wipe out large populations. These are not new things. It's a question of what we accept as a society. The fact that those kinds of things are not acceptable anymore, the fact that you have countries and actors with chemical weapons, and as a world we have decided we're going to punish people who use them, I think those are good signs, right?

What I think you're referring to is the fact that that ability has moved from state actors to individual actors and is therefore harder to regulate, and I agree with that problem. But I am optimistic that as a world, as a society, we are not going to stand for these kinds of things, and much more than technology, it's our norms, our social contracts, that will handle these problems.

So now put on your rose-colored glasses, your optimistic glasses, and tell me all the ways this technology can go right. In your mind, what's the future going to be like in a reasonably likely scenario with ever more AI, faster machines, more data collection, and all the rest? Entice us with possibility.

So to me, this idea that machines are able to do a lot of the things we do today, one of the biggest advantages I see in that is that it opens us up for more leisure time and more creativity, which are, today, very human endeavors. The amount of time that we as a society have had to spend on just providing the basic necessities, food, shelter, energy, has been going down tremendously. If AI pushes that frontier forward and takes over lots of the things that we spend time on today out of necessity, not because we want to, so we don't have to drive from point A to point B because we can be driven more easily, or because we get better access to healthcare and therefore don't have to spend a lot of time being infirm, it opens us up to be much more creative, to have more leisure, to reflect, to do lots of things that we can't do today, or that only a small fraction of us can do today, because we have to spend so much of our time just providing for the necessities of life.

If everybody, or a larger fraction of the world, can have the luxuries I have and you have, that opens them up to doing different kinds of things. It changes the way we think about human rights, the way we think about equality in the world, the way we think about human existence. The possibilities for our advancement as a society, as a people, are tremendous if we can use technology to let people spend less time on tedious things that need to be done, but not necessarily by them. As the saying goes, the future is already here, it's just unevenly distributed.

There are a lot of people who are able to spend their time reflecting, thinking about philosophy, writing books, because they have the luxury of doing that, because they have either through circumstance or their own endeavors, achieved this kind of life which allows them that flexibility. Now if AI can enable a large chunk of the population to do that, I think that's tremendous.

And then what about the so-called 'bottom billion,' those people on the planet (and granted, the number is shrinking) who are desperately poor, who are completely cut off from these technologies? How, in your mind, do the benefits of these technologies help them, or allow them to help themselves?

To me, that's where the biggest potential is, because as I said, if you are in the top 10% of the world, even if you don't have AI, maybe you have other means providing you the luxuries of life, so maybe you don't have to struggle with a lot of those things. If AI can enable those capabilities to come to that bottom billion, if it enables somebody else to… and it's here and now.

Imagine you are in a small town in India and you're able to get on the internet and take a course taught by a top professor at MIT or Stanford. Imagine how suddenly your life could change, how your outlook could change, and those are just simple technologies that we have today. If AI enables lower child mortality, better transportation infrastructure, better ways of getting medications out into the field, faster access to medical care even if you're not close to a hospital, I think the potential impact is biggest there.

I live in Palo Alto, so I'm surrounded by good hospitals, it's a good area, and maybe the advances in AI will have some impact on me, but my life today is pretty good. For me, the advances in AI enable more luxury, more things. For somebody else, if I'd been born in a different part of India, my life could have been very different. So the potential lies in access to these technologies as they become more pervasive, cheaper, more scalable. The bottom billion, in my mind, is where the biggest potential is. I could be wrong, but that's my view.

Well let's hope so, and that's a great message of hope on which to end this. I want to thank you Raj, it's been a far-ranging and fascinating discussion, I want to thank you for engaging on a wide range of topics.

My pleasure.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.