Episode 86: A Conversation with Amir Husain

In this episode Byron talks to fellow author Amir Husain about the nature of Artificial Intelligence and Amir's book, 'The Sentient Machine'.


Guest

Amir Husain is the Founder and CEO of SparkCognition, a member of the Board of Advisors at the University of Texas at Austin's Department of Computer Science, a member of the AI Task Force, and a member of the Council on Foreign Relations. He is also the author of The Sentient Machine: The Coming Age of Artificial Intelligence.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. Today my guest is Amir Husain. He is the founder and CEO of SparkCognition Inc., and he's the author of The Sentient Machine, a fine book about artificial intelligence. In addition to that, he is a member of the AI task force with the Center for a New American Security. He is a member of the board of advisors at UT Austin's Department of Computer Science. He's a member of the Council on Foreign Relations. In short, he is a very busy guy, but has found 30 minutes to join us today. Welcome to the show, Amir.

Amir Husain: Thank you very much for having me, Byron. It's my pleasure.

You and I had a cup of coffee a while ago and you gave me a copy of your book, and I've read it and really enjoyed it. Why don't we start with the book? Talk about that a little bit and then we'll talk about SparkCognition Inc. Why did you write The Sentient Machine: The Coming Age of Artificial Intelligence?

Byron, I wrote this book because I thought that there was a lot of writing on artificial intelligence and what it could be. There's a lot of sci-fi that has visions of artificial intelligence, and there's a lot of very technical material around where artificial intelligence is as a science and as a practice today. So there's a lot of that literature out there. But what I also saw was that there was a lot of angst back in 2014 and 2015. I actually had a personal experience in that realm, where outside of one of my South by Southwest talks there was an anti-AI protest.

So just watching those protesters and seeing what their concerns were, I felt that a lot of the philosophical and existential questions around the advent of AI were worth addressing: if AI indeed ends up being like Commander Data, if it has sentience and becomes artificial general intelligence, then it will be able to do our jobs better than we can, and it will be more capable in, let's say, the 'art of war' than we are. Does this mean that we will lose our jobs, that our lives will be lacking in meaning, and that maybe the AI will kill us?

These are the kinds of concerns that people have had around AI, and I wanted to reflect on notions of man's ability to create: the aspects of that which are embedded in our historical and religious traditions, our conception of Man versus He who can create, our creator, and how all of that influences how we see this age of AI, where man might be empowered to create something which can in turn create, which can in turn think.

There are a lot of folks who also feel that this is far away, and I am an AI practitioner and I agree: I don't think that artificial general intelligence is around the corner. It's not going to happen next May, even though I suppose some group could surprise us, but the likely outcome is that we are going to wait a few decades. I think waiting a few decades isn't a big deal, because in the grand scheme of things, in the history of the human race, what is a few decades? So ultimately the questions are still valid, and this book was written to address some of those existential questions, drawing on elements of philosophy, as well as science, as well as the reality of where AI stands at the moment.

So talk about those philosophical questions just broadly. What are those kinds of questions that will affect what happens with artificial intelligence?

Well, I mean, one question is a very simple one of self-worth. We tend to define ourselves by our capabilities and the jobs that we do. Many of our last names, in many cultures, are literally indicative of our profession: Goldsmith as an example, Farmer as an example. And this is not just a European thing. Across the world you see this phenomenon of last names reflecting the profession of a woman or a man. And it is to this extent that we internalize the jobs that we do as essentially being our identity, literally to the point where we take it on as a name.

So now when you de-link a man or a woman's ability to produce, or to engage in that particular labor that is a part of their identity, then what's left? Are you still the human that you were with that skill? Are you less of a human being? Is humanity in any way linked to your ability to conduct this kind of economic labor? And this is one question that I explored in the book, because I don't know whether people really contemplate this issue so directly and think about it in philosophical terms, but I do know that subjectively people get depressed when they're confronted with the idea that they might not be able to do the job that they are comfortable doing, or have been comfortable doing for decades. So at some level it's obviously having an impact.

And the question then is: is our ability to perform a certain class of economic labor in any way intrinsically connected to identity? Is it part of humanity? I explore this concept and say, "OK, well, let's take this away, let's cut this away, let's take away all of the extra frills, let's take away all of what is not absolutely, fundamentally, uniquely human." And that was an interesting exercise for me. The conclusion that I came to, and I don't know whether I should spoil the book by sharing it here, but in a nutshell, and this is no surprise, is that our cognitive function, our higher-order thinking, our creativity, these are the things which make us absolutely unique amongst the known creation. And it is that which makes us unique and different. So this is one question of self-worth in the age of AI, and another one is...

Just to put a pin in that for a moment: in the United States the workforce participation rate is only about 50% to begin with, so only about half of adults work, because you've got adults that are retired, you have people who are unable to work, you have people that are independently wealthy... I mean, we already have about half of adults not working. Does it really rise to the level of a philosophical question when it's already something we have thousands of years of history with? What are the really meaty things that AI gets at? For instance, do you think a machine can be creative?

Absolutely I think the machine can be creative.

You think people are machines?

I do think people are machines.

So then if that's the case, how do you explain things like the mind? How do you think about consciousness? We don't just measure temperature, we feel warmth; we have a first-person experience of the universe. How can a machine experience the world?

Well, you know, look, there's this age-old discussion about qualia, and there's this discussion about subjective experience, and obviously that's linked to consciousness, because that kind of subjective experience requires you to first know of your own existence and then apply the feeling of that experience to yourself in your mind. Essentially you are simulating not only the world, but you also have a model of yourself. And ultimately, in my view, consciousness is an emergent phenomenon.

You know the very famous Marvin Minsky hypothesis of The Society of Mind. I don't know that I agree with every last bit of it in all of its details, but the basic concept is that there are a large number of processes, specialized in different things, running in the mind (the software being the mind, the hardware being the brain), and that the complex interactions of a lot of these things result in something that looks very different from any one of these processes independently. This, in general, is a phenomenon called emergence. It exists in nature and it also exists in computers.

One of the first few graphical programs that I wrote as a child, in BASIC, drew straight lines, and yet on a CRT display what I actually saw were curves. I'd never drawn curves, but it turns out that when you light a large number of pixels with a certain gap in the middle, on a CRT display, there are all sorts of effects and interactions, like the moiré effect and so on and so forth, where what you thought you were drawing was lines, and it shows up, if you look at it from an angle, as curves.

So I mean the process of drawing a line is nothing like drawing a curve; there was no active intent or design to produce a curve, the curve just shows up. It's a very simple example, one that a kid writing a few lines of BASIC can reproduce and look at, but there are obviously more complex examples of emergence as well. And so consciousness to me is an emergent property, an emergent phenomenon. It's not about any one thing.
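[Editor's note: here is a minimal sketch, in Python rather than BASIC, of the kind of emergence Amir is describing. It is not the CRT moiré effect he mentions, but it makes the same point: every primitive drawn is a straight segment, yet a smooth curved envelope appears that no single instruction asked for. All parameter values below are arbitrary illustration choices.]

```python
# Illustrative sketch: a curve emerging from nothing but straight lines.
# Each segment joins a point on the y-axis to a point on the x-axis; no curve
# is ever drawn, yet the family of segments traces out a smooth parabolic
# envelope. Requires matplotlib and numpy-free standard Python only.
import matplotlib.pyplot as plt

n = 40  # number of straight segments (arbitrary choice)
fig, ax = plt.subplots(figsize=(6, 6))

for i in range(n + 1):
    t = i / n
    # Straight segment from (0, 1 - t) on the y-axis to (t, 0) on the x-axis.
    ax.plot([0.0, t], [1.0 - t, 0.0], color="black", linewidth=0.6)

ax.set_aspect("equal")
ax.set_title("Only straight lines are drawn; a curved envelope emerges")
plt.show()
```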

I don't think there is a consciousness gland. I think that there are a large number of processes that interact to produce this consciousness. And what does that require? It requires, for example, a complex simulation capability, which the human brain has: the ability to think about time, to think about objects, to model them, and also to apply your knowledge of physical forces and other phenomena within your brain to try and figure out where things are going.

So that simulation capability is very important, and then the other capability that's important is the ability to model yourself. When you model yourself and you put yourself in a simulator and you see all these different things happening, there is not the real pain that you would experience when you simulate, for example, being struck by an arrow, but there might be some fear. And why is that fear emanating? It's because you watch your own model, in your imagination, in your simulation, suffer some sort of a problem. And that is entirely internal, right? None of this has happened in the external world, but you're conscious of it happening. So to me, at the end of the day, consciousness has some fundamental requirements. I believe simulation and self-modeling are two of those requirements, but ultimately it's an emergent property.

You said a minute ago you think it'll be a matter of decades before we have a general intelligence. Have you ever given thought to the question of whether a general intelligence would need to be conscious? Would it need to experience the world to actually be intelligent? Are you envisioning a kind of computer general intelligence that's creative and all the rest, but actually doesn't have any experience at all?

This is a very tricky one, because, you know, it's very hard to say whether I have an experience. You can project onto me the fact that I have experiences, because we both are human beings and you have experiences, and therefore I might have experiences as well; and we may use the same words, but they may mean very different things. It's the age-old issue of when you say 'blue' and I say 'blue,' are we talking about the same thing, not just the same color but the same feeling? And we don't really know. And in many cases obviously not, because certain colors have a certain effect on some people, and the same colors have a completely different effect on other people.

So I don't know that there is such a thing as being able to judge whether a machine has a subjective experience. At a certain level of complexity, it boils down to a conversational interaction with a human being where the machine is attempting to convince the human that it does indeed have a subjective feeling. If it's able to convince a human of this fact, then that's the best that any human has been able to do with another human.

Again, I don't know of a scientific test, absent interaction with that entity, whether it's a future AI entity that claims to have subjective experience or another human being. Outside of conversation, outside of just getting a sense for whether this thing has subjective experience, there is no very clear-cut test. And so in the absence of that test, that's really what you have. If the machine can convince you, then I guess it does.

But you know, we assume dogs feel pain, although no dog has ever told us so; we infer it. There's a similarity in their biology to ours, in the response that they have to it. But we don't actually know. Still, we're confident enough that we establish laws against animal abuse. So you're setting a really high bar. It's as if the AI could experience pain, but if it isn't smart enough to convince us that it's experiencing it, then ethically it's fine for us to keep abusing it. Is that true?

Actually, you raise two points. First, on the animals: we absolutely know that animals experience pain. The analogy between animals experiencing pain and a machine being conscious and experiencing subjective feeling involves two totally different things. When an animal experiences pain, first of all, you can observe that it changes its behavior in a way that's similar to other animals, i.e. us, when we experience pain. So there's that one bit of evidence.

The other thing is that now that we have medicine, even veterinary medicine, quite advanced, we can see exactly what's going on in the animal. We see that the animal has nerves. We see that these same nerves transmit pain in human beings. So the existence of these nerves would indicate that the animals can experience pain, and on and on and on. And then we provide the animal with a painkiller and it stops exhibiting that painful sort of wincing and those crying sounds, and therefore you know that the same chemical that has an effect on the human is having an effect on the animal as well.

Now at that point you could, I suppose, still say, 'But do you really know?' And the answer might be, 'No, you don't really know,' but at the same time there's such a thing as Occam's razor. Any explanation other than the one that I'm giving you—which is that animals probably experience pain—would, given all of this, be far more complex, and therefore from a scientific standpoint wouldn't really be the primary explanation you would default to in the presence of all of this evidence. So I don't think it's the same thing.

Okay, let me jump in there, because in the United States veterinarians were taught until the '90s that animals didn't feel pain. And we had all of the evidence that you just listed: observation, we understood the neurology and all of that. And yet still the standard of care in veterinary science was that animals don't feel pain. But I'm going to give you all of that.

But in return you have to admit we still don't know how far down that goes. We don't know if cockroaches experience pain. We don't know if a fish experiences pain; there is a border at which we don't know. And I guess what I worry about is: is it ever the case that machines would exist at this border where we don't really know? For instance, could a tree feel pain? We wouldn't know. I mean, I don't think it could, but I wouldn't know if it could, would I? And so I worry about that.

If you're right and consciousness is an emergent property and someday we're going to have enough complexity that it just kind of emerges, how will we [know]? And you're right, we won't know that the creature [experiences things] until it can convince us, and I worry about the gap. But maybe I'm just thinking too far out—that there's nothing we really need to worry about at this point.

Oh, well, I mean, I don't think that's something we need to worry about at this point, but at the same time it still makes for an interesting conversation and I'm happy to engage. The point that I was making earlier was not that, if an AI system was getting into that realm of consciousness, with an emergent phenomenon like consciousness suddenly showing up, we would then ask it to convince us that it has subjective feeling, and if it can't, we would somehow abuse it. I don't think that was my implication. My implication was simply: does it have subjective feeling or not?

If it is able to convince you that it has subjective feeling, then you have to assume that it does. If it doesn't convince you that it has subjective feeling, that doesn't mean you start abusing it. So maybe trees don't feel pain; there's nothing about a tree that winces or cries. Most normal people would look at a tree and say it probably doesn't experience pain, but at the same time that doesn't mean you go about wantonly chopping trees for no reason. At the end of the day, animals experience pain, [yet] we still kill animals and eat them, right? So that's just humanity for you.

But even once you realize that trees may not experience pain, that still doesn't make a good argument for going around and wantonly causing them potential pain. Most normal people wouldn't do that.

Let's talk a little bit about SparkCognition. I guess I have two questions: one just for the listener who's not familiar with your company, explain what SparkCognition does and then maybe talk for just a minute about what problem you created it to solve.

SparkCognition is the most ambitious endeavor of my life. I've been into artificial intelligence for many, many, many years and I've done previous companies, but SparkCognition is something very special. We're applying artificial intelligence to industrial scale problems. There are three areas in particular where we're very focused: industrials, which includes manufacturing and power generation and so on and so forth; aviation, and this includes civil aviation of all types including new autonomous aviation systems; and then finally defense. So these are three key areas that we've been focused on.

And what I'm interested in are AI systems that make a difference in the real world, meaning AI systems that actually control something in the real world, or get data from the real world and make a decision about it. But you know, I'm not the variety of AI practitioner that can spend a lifetime trying to optimize 'click-through' rates on ads. That's not me.

I like marrying this technology with big physical problems, [like] 'how can we invent a future aircraft, and how can we reinvent that idea with AI at its center?' How does the future of autonomous combat look, and what are the ethical challenges, the infrastructure challenges, the technology challenges? SparkCognition is focused on these three areas.

I think the application of AI to large-scale industrial assets is going to be a major undertaking in this century. I call it essentially 'the AI century,' and it's where the company is completely committed. We have a major partnership with Boeing; we've actually done a joint venture with them for AI-based aviation. We're working with one of the largest turbine companies in energy, and obviously working very closely with the DoD on the defense side. So we're trying to solve big problems in the real world and actually apply AI.

Do you have trouble hiring talent right now?

You know, to be very honest with you, we have not yet had trouble hiring talent. I think the reality is that UT Austin is one of the most amazing assets to have in this town. I'm lucky in that I'm a Longhorn and my wife is a Longhorn, and we've got many Longhorns in the family. Our ties with UT run deep. Our chief science officer at SparkCognition is the two-time chairman of UT's computer science department. We've got very deep connections with many researchers at the university, [and] we have worked closely with the students there.

We've really gone out of our way to develop a very tight connection with UT Austin, and now also with a couple of other universities outside of Texas. And that has been, I think, a very big plus. We've got access to great talent—these are folks, obviously, that have just done their PhDs or whatever—and then on the more senior front, we've brought in some absolutely amazing people from outside Austin, with the kinds of experience, for example in government and the military, that didn't exist in Austin five years ago at all. And those people we've been able to convince with just the breadth and depth of our mission.

Ultimately we aren't just talking about the dream of AI and how nice it could be. We're working with the largest aviation company in the world to build the next generation technology that will redefine aviation for the next many decades. And there's a lot of smart people that, given a cause like that, and given a challenge like that, will absolutely jump at the opportunity.

Ultimately in life you're looking to write your own story and hopefully the last chapter will leave you some nice sentences and some nice reflections to recite when you think about all the great things you've done. You know there needs to be something meaningful in there, and I think SparkCognition is a genuine opportunity for big thinkers to come and do meaningful things and that has been a big boon for us.

You know, the whole application of artificial intelligence to warfare is such a hot-button thing. One side says, "Look, if you can make weapons systems more effective, have less collateral damage, and achieve their mission better, why wouldn't you?" And then other people have a lot of concerns about machines making autonomous kill decisions. As somebody who's involved very deeply in it, how do you think it through? Maybe share some of your thoughts about it, about using AI in warfare.

Absolutely. So first of all, one can say these things about oneself and it's up to people that know you, and don't know you, to judge whether they're true. But I'm no warmonger. I mean, I'm not of the classic mold of some sort of crazed barbarian. The idea, of course, is that AI is going to be applied to defense, much like every advanced technology is applied to defense. Every technology in your iPhone came from the DoD. It's being implemented in the DoD, and so on and so forth. There's a very, very tight connection between technology and the military—always has been, literally since the beginning of technology. So that's just a given. Technology in defense is inevitable.

Now the question is: if AI is the new technology and I am responsible for creating new types of AI and I'm building new systems that do things in the physical world, I can either turn my gaze away and I can be oblivious to the applications of it or I can engage with those who may employ this AI in ways that would be troubling to my ethics and morals. I am all for engagement.

Now, once you engage what are you trying to do? Are you just trying to sell them AI? Actually in my case that's not just what I'm trying to do. What I'm trying to do is to develop systems that exhibit and comply with the laws of armed conflict. In other words, the system that is complying with the laws of armed conflict will be an ethical system per the current definition of the ethical way of waging war. You have to comply with the law of armed conflict, and in these systems the idea is to actually minimize the amount of damage these systems do.

Just very briefly: in the Gulf War we saw the second offset, which was essentially the move away from dumb bombs delivered by dozens and dozens of bombers in a single strike, where you could drop hundreds of bombs and still not take out a single bridge. Those bombs would fall all around and would maybe destroy homes and villages and so on and so forth. It was so inaccurate that it was inhumane. In the Gulf War we took two-thousand-pound bombs and mated them with GPS devices and were able to avoid collateral damage, at least of the kind that took place in World War I and World War II.

The next step is the third offset and the leveraging of AI technologies. Ultimately, if you could end a war by declaring war and taking out the 17 people who are the cause of the war, then the war would be over; your objectives have been achieved. There is no need for the kind of massive collateral and other systemic infrastructure damage that was once necessary. And intelligent weapons, particularly with AI, with the level of discernment that they have, do present that possibility. But that is the kinetic use of AI, and there are many uses of AI in the DoD which have nothing to do with kinetics. For example: predictive maintenance—simply making the equipment run properly; cutting down on the amount of fuel that's used in mobile generators by keeping them running in an optimal state; being able to automatically interpret surveillance footage so that you don't make mistakes and you do proper target identification, even when a human is doing that identification.

People suddenly jump to this assumption of killer robots. It's not that simple. There is a lot more going on, and only with engagement do you learn, and only with engagement do you get to influence and shape. And in my particular case, I am not a fan of war; no great general is a fan of war. Anybody that's been in war would have to be very strange to continue to be a fan of war. So it's not about that. It's about the fact that war is a fundamentally human endeavor.

We have not yet gotten to the level of human development to where war would become an impossibility. And while it remains a possibility, it is better to engage and it's better to control these technologies than to simply turn a blind eye and say “I'm just going to blindly create things, never engage with anybody that might use them in a bad way, and good luck to you all.” So to me I've made the choice of being actively involved.

All right. Well we're actually at the bottom of the hour so how do people keep up with you personally? Are you writing still? Do you have a blog, and then how do they keep up with SparkCognition and the interesting things you're doing?

Oh, absolutely. So I am on all social media. You can just go to www.AmirHusain.com, that's my website. You'll see all my writings, all my social handles, videos, everything there. And then there's SparkCognition, which is the corporate site: www.SparkCognition.com.

Well, great. It's been a fascinating half hour. We touched on some really big ideas. The name of the book again is The Sentient Machine: The Coming Age of Artificial Intelligence, and it's as fascinating as this half hour was. Thank you, Amir.

Byron, thank you very much, I hope to see you soon.