Episode 57: A Conversation with Akshay Sabhikhi

In this episode, Byron and Akshay talk about how AI augments and informs human intelligence.


Guest

Akshay Sabhikhi is the CEO and Co-founder of CognitiveScale. He's got more than 18 years of entrepreneurial leadership, product development and management experience with growth-stage venture-backed companies and high-growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM's leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT at Austin and an MBA from the Acton School of Entrepreneurship.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Akshay Sabhikhi. He is the CEO and Co-founder of CognitiveScale. He's got more than 18 years of entrepreneurial leadership, product development and management experience with growth-stage venture-backed companies and high-growth software divisions within Fortune 50 companies. He was a global leader for Smarter Care at IBM, and he successfully led and managed the acquisition of Cúram Software to establish IBM's leadership at the intersection of social programs and healthcare. He has a BS and MS in electrical and computer engineering from UT at Austin and an MBA from the Acton School of Entrepreneurship. Welcome to the show, Akshay.

Akshay Sabhikhi: Thank you Byron, great to be here.

Why is artificial intelligence working so well now? I mean like, my gosh, what has changed in the last 5-10 years?

You know, everyone knows artificial intelligence has been around for decades, but the big difference this time, as I like to say, is that there's a whole supporting cast of characters that's making AI really come into its own. And it all starts with the fact that it's delivering real value to clients, so let's dig into that.

Firstly, data is fuel for AI, and we all know the amount of information we're surrounded with; we certainly hear about big data all over the place. It's the amount and the volume of the information, but it's also systems that are able to interpret that information. The type of information I'm talking about is not just your classic databases with nice, neatly packaged, structured information; it is highly unstructured and messy information that includes audio, video, certainly different formats of text, images, right? And our ability to really bring that data in and reason over that data is a huge difference.

The second big supporting character here is the prominence of social, and I say social because of the amount of data that's available through social media, where we can see in real time how consumers behave; or mobile, and the fact that you have devices now in the hands of every consumer, so you have touch points where insights can be pushed out. Those are the different members of the supporting cast that are now there, which didn't exist before, and that's one of the biggest changes behind the prominence and the true sort of value people are seeing with AI.

And so give us some examples, I mean you're at the forefront of this with CognitiveScale. What are some of the things that you see that are working that wouldn't have worked 5-10 years ago?

Well, so let's take some examples. We use an analogy: we've all sort of used Waze as an application to get from point A to point B, right? When you look at Waze, it's a great consumer tool that tells you exactly what's ahead of you: a cop, traffic, debris on the road and so on, and it guides you through your journey, right? Well, now apply a Waze-like analogy to the enterprise where you have a patient, and I'll use a patient as an example because that's how we started the company. You're largely unmanaged: all you do is show up to your appointments, you get prescriptions, you're told about your health condition, but once you leave that appointment, you're pretty much on your own, right? But think about everything that's happening around you. Think about social determinants, for example: the city you live in, whether you live in the suburbs or downtown, the weather patterns, the air quality, such as the pollen counts or the allergens that affect you, or the specific zip code within the city that tells us about the food choices that exist around you.

There are a lot of determinants that go well beyond the pure structured information that comes from an electronic medical record. If you bring all of those pieces of data together, an AI system is able to look at that information, the biggest difference here being that it does so in the context of the consumer, in this case the patient, and surface unique insights to them. But it doesn't stop right there. What an AI system does is take it a step or two further by saying, "I'm going to push insights based on what I've learned from the data that surrounds you, and hopefully they make sense to you. And I will give you the mechanisms to provide a thumbs up/thumbs down or specific feedback that I can then incorporate back into the system to learn from it." So that's a real-life example of an AI system that we've stood up for many of our clients, using various kinds of structured and unstructured information brought together.

And you mentioned a word in passing there. What's your view, or what's CognitiveScale's view, on explainable AI?

Very strong actually, and Byron, that starts with some of the industries that we work in. On the concept of explainability: people have this view that AI is a black box, where the machine comes up with an insight or a prediction or a recommendation but truly can't tell me why it came up with that, right? We've developed our technology, right from the beginning, from the standpoint that you have to have the evidence and explainability built in, because, going back to the example I just gave you: you are in highly regulated industries like healthcare or financial services, and you're relying on AI to give you insights, whether those insights go to your financial clients, or your patients, or your compliance department. These have to be auditable, they have to be tracked, and you have to have a connection back to why this insight was delivered in the first place. In fact, I would go as far as to say that many of the industries we work in regulate even the insights that are delivered from the AI to the consumer by bringing a human into the loop. That's still important today because you haven't yet trusted the machine. So that's why explainability is step one, in fact one of the most important ingredients, in my opinion, of making AI successful and adoptable in the enterprise.

Well, you added that "in the enterprise" part because I guess there are spheres where, you know, "where should I have dinner tonight?" I don't necessarily need to be able to dissect that system. So if there's a group of low-importance things like that, that don't have to be explainable, and then there's a group of things that are very important and do [have to be], isn't it easier to develop the former? Isn't explainability fundamentally going to hold back, even [to] some degree, how quickly we can advance AI? And maybe that's fine, but doesn't it slow us down a little bit?

You know, not really and I gave you sort of a compliance example about heavily regulated industries, let's talk about the consumer dimension where you don't have to go as far as to make it explainable. But think about that also for a second. We all know, even as consumers we get these annoying ads that follow us everywhere—if I happened to have shopped for a vacuum cleaner for example, right? And you're annoyed because it's something that's distracting you and you're like, "man, this stuff is creepy."

Now look, if I was given control even as a consumer, and I'm talking about a pure B2C type of environment, not even a B2B way of dealing with businesses, just a pure consumer use case, you still have to build trust with the consumer. And so we look at explainability this way: if I'm pushing an insight to someone, some recommendation for a restaurant or an event or a diet or an activity that they should do, at the end of the day I have so many things being told to me, but if you bring it into my context and tell me, "Here's why I think it's really important for you," that matters. I may turn explainability off later because I start trusting the system, but initially, to get going, what explainability does is build trust, and then what that drives is adoption. What adoption drives is that now you have more feedback signals and the system gets smarter, right? So I think it's intertwined. In fact we made explainability almost a core component of how we deliver AI, and frankly this goes beyond just the business scenario I mentioned to you.

So, would you go so far as to say that, like in Europe, it should be legislated to be an essential element of the system?

You mean in terms of our data? And GDPR and everything else…?

Exactly.

May 1st, which is today.

Right.

Look, let's think about this. There is absolutely a level of regulation that's required. There's no doubt, because you have systems and the ability today to go absolutely nuts with the amount of data that exists, and frankly you can go deep into social networks and extract information, which is downright creepy, right? So when you talk about regulation, and especially what's happening right now in Europe, I think you're going to start seeing some of that here with some of the recent events that have happened. I think these rules are not a bad thing, because we may be crossing a line, and getting the appropriate consent from the end consumer is going to start becoming more important; frankly, I think it's a lot easier to get consent when you're delivering value to people. So we'll see how this plays out. We're really going to be impacted because we do business in Europe as well, and we're going to see how that affects our business.

Well last question on this one, if I went to Google and said, "When I typed the search 'best resort in Aspen,' why do I come up number five, and my competitor comes up number four?” I would think at this point Google would have to say, "We don't have any idea, like there's so many hundreds and hundreds of things at play, and you want to know out of 50 billion pages why you come up number 5 and they come up number 4, there's just no way to know that." So, should we hold the AI to a higher account than we hold people, because people don't really know why they make the choices they do either. And so if you ask a loan officer, "Why did you deny me?" the loan officer can tell you broadly, whatever, but what are your thoughts? Are there, in the end, systems that become so complex that perfect understanding of why they behave the way they do, doesn't exist anymore?

Yeah, I think you're right. The examples you've given, especially with businesses being able to audit and tell people why, and so on, those are important on one level. But I think over time this will become so pervasive that it won't be about the reasons or the evidence or why the system came up with the insight. It's going to become so widespread in almost everything we do that it will almost become the norm: getting insights and evidence that affect what you do and where you go and how you live will just become part of the fabric of how we consume insight. So if I'm correctly catching your line of thought and question, I do expect that today it's probably more important, because people have this level of distrust, or I shouldn't say distrust as much as, "man, this is AI, I don't know what's happening behind the scenes." But I think over time it's going to become very acceptable. I don't think in a few years people are going to be talking as much about AI as people are talking today, because it's almost like it's expected in everything.

Right, it's like how we used to have a web department at companies, back in 1998 or whatever. So, tell me about CognitiveScale: why did you co-found the company, and what's its mission?

So look, we've been in enterprise software, and I say "we" because the founding team is a few folks that I've worked with over the years, and we share similar beliefs. We've been in enterprise software for 20+ years, but we've worked together for almost an entire generation across what I call different waves of computing. Our initial focus was business process management and how you can drive better automation across a business; that's how we all got together. We had the fortune through that process of getting to work with IBM, and spent a lot of time at IBM. I truly learned the meaning of scale at a company like IBM, and this was also the time that I, and some of the other founding team members, had the opportunity to get involved with IBM Watson. We were truly amazed at how you could take a system, bring a whole lot of data to it, and, as you said before, go through volumes and volumes of information and see a system beat one of the best players at Jeopardy, right? And the thought that crossed our minds at that point was: it's phenomenal that you can bring all of this data, but it makes an assumption that you know what question to ask, like in a Jeopardy system.

And the way we work today in society and the way businesses work, you don't know what question to ask. Our founding principle for this company, as we left [IBM], was to ask: can we flip the arrow? Can we build a really smart system so that, instead of you asking the question, it comes to you and says, "You know what, this is what you need to know in the context that you're in"? If I go back to the Waze analogy that I mentioned to you, that was a foundation of the company. We also had a second big realization: general-purpose AI is not there yet. The fact that I can ingest tons of data in different volumes and across different domains and so on is good, but if I'm going to build context, I really have to understand the domains, so I have to go deep within a domain and really understand the concepts, whether it's financial services or healthcare specifically, and go deeper within those subdomains.

So you've got to make it very specific. And the third piece was this notion of AI replacing humans. Our view has always been that where AI is going to have the biggest impact is in augmenting humans. So Byron, long story short, it's those three fundamental principles, flipping the arrow, going domain specific, and ensuring that this is about augmentation more than replacement, that were the founding principles and the mission for this company. And we've been quite successful in driving this mission into the different industries that we're focused on.

Give me a use case if you can from financial services or healthcare. Give me some real-world results you've seen in the healthcare space.

Yeah, so healthcare is one that's very near and dear to my heart; most of us think it's got a good societal aspect to it. When we started, it was around patient engagement, and we worked with some of the leading healthcare organizations, some of the largest cancer institutes, and the second- and third-largest healthcare systems in the country, basically trying to show them how you can use AI to really flip the arrow and almost become the concierge for the patient. And in doing that, how can you engage patients better? Because we were trying to prove, essentially, that better engagement of patients over time will lead to better health outcomes. So when you look at whether it's cancer patients or diabetics, we've been able to... I'll give you a few examples. We ran a study with one of the largest healthcare institutes where Type 1 diabetics were brought into our program and we collected information from medical devices such as their Fitbit as well as their wireless glucometers, right? These are fairly hefty devices that patients have to carry with them, and they monitor them, they track them, they get information from them, but the devices don't bring any context. Those are just numbers saying "your BGL is high," or telling you that you didn't get enough sleep last night, right?

But if I was to bring this into the context of the patient, and in this case our study group was kids who were transitioning from adolescence to adulthood and literally going off to college, right? Now they're facing the pressures of college, the stress that comes with studying, the grades they have to maintain, and the fact that they're now in a new environment. It completely changed the game, and so we said, "Can we bring AI to these patients?" They were given our AI application running on their mobile device, which was collecting information in real time from their Fitbit, collecting information about where they lived on campus, dorm or not, and what their exam schedules looked like, and it said, "Look, I'm going to surface insights for you. When I start seeing that your sleep levels are low or your BGL is low, I can start drawing correlations and telling you how this is likely to impact you, because I also understand the Type 1 diabetes protocol." And so when I start seeing these different pieces, I'm going to start surfacing not just "your BGL's high," but "your BGL's high and we think this is because you're in a high-stress environment; consider doing these activities, and you happen to live in this location, so let me tell you what's happening around you." That's the type of study, or rather real implementation, that we ran with a study group of about 35 to 50, I believe. It was highly successful; in fact, we got rave reviews back just in terms of the engagement of the population, and now we're looking at rolling this out to a much wider audience. So that's an example in the healthcare space.

So you mentioned that you wanted to focus on augmenting humans. When you say AI, are you actually saying "augmented intelligence" here, or are you using that as shorthand for "artificial," or is there no distinction between the two?

Yeah, we actually make that distinction. We say, "Broadly speaking, we are the augmented intelligence software company," and certainly the techniques that we use are AI techniques, but it's almost like taking humans and asking, "what do you need to know?" So in the example I just gave you, there are care managers who manage these patients, and so what do I have to bring to a care manager, or to the patient directly, assuming that a hospital system is, I'd say, at the leading edge and willing to push some of this information out to the patients? At the end of the day, you're augmenting the intelligence of the individual, whether it's a care manager who cares for you, a financial advisor who's looking after you, or whether you're the client or the end consumer, the patient, that's consuming the information. So we use the terms interchangeably, but the way I'd describe it is: for us, AI is augmented intelligence, and certainly the science of AI is the set of techniques we use behind the scenes to make it happen.

So I'm way up there in believing that these technologies create far more economic opportunity than they destroy, and more importantly they empower people; they make everybody effectively smarter, so that anybody with an AI tool is effectively more productive and therefore can earn higher wages and all of the rest. And when you draw that distinction, when you're all about augmenting humans and not replacing them, it seems you're directly responding to a fear that people have that these technologies are going to somehow obsolete humans. I don't think that's going to happen, but I think there are certain things where you would say, "Well, once we build an AI that can monitor security cameras and look for people, we may need fewer people to monitor those monitors." So tell me why people, broadly speaking, shouldn't, or should, but I think you would say shouldn't, be worried that these technologies are somehow going to up-end the social applecart of employment?

Great, great question Byron. Look, I can tell you this through examples, because my fundamental belief is that right now AI can be used for many different things, and I think there is a fear that "this is going to replace my job" and so on. I do believe that at some level, at the very low, repetitive task level, we're already starting to see some of that happen. But here's what's happening, and I think this is the bigger point around augmented intelligence and AI: the fact that you can be far more productive in what you do if you're surrounded with this Jarvis suit, as we call it, like Iron Man with the Jarvis suit he puts on, takes you to a completely different level. That productivity can let you as a business drive much more, whether you're selling more, whether you're reaching out to more customers, or whether you're being more productive as an organization.

So let me give you a few examples. One example, different from what I've described before, is employees within an organization who, let's say, deal with an assembly line or the warehouse shop floor, and they have to rely on systems today when problems happen. They ask those systems for information because they're stuck and can't figure out why something is not progressing: "I'm not able to get an invoice," or "this is not reconciling," right? Today, what you're told at that moment is, "You're going to be stuck for a while, because you're going to put that information into a trouble ticket and someone's going to reach out to you in two hours," and your two hours of productive time is essentially spent waiting.

If I told you that AI systems can now understand your context, listen to you, look through a lot of the documentation, and almost come back and say, "Look, I think these are the things that are happening around you, this is what you should do," or connect you to a super user: we've been able to demonstrate almost a 30% improvement in the productivity of employees in that scenario, just by augmenting them and bringing that information to their fingertips. That changes the game, because you have very large consumer packaged goods companies saying, "If I could bring this to my shop floor, to my employees... I'm looking to grow, and today I'm constrained by the fact that I have labor that can only do so much, but if these people were more productive, I could go and do a lot more business." So people are thinking about this a bit wrong. They should be thinking about "What can I do now that I'm a lot more efficient?" in terms of business capture, the value capture, rather than "How do I become more efficient on the back end by cutting people?" I mean, if that example makes sense to you.

Absolutely. We saw that when the ATM was introduced: it lowered the cost of opening bank branches, banks opened more of them, and there are now more tellers than there were when the ATM was introduced. It's like any time technology drops the price of something to zero, like the cost of translating: people consume a lot more of it, and the world actually now needs more translators than it did a few years ago because more people are taking their business internationally. So I totally get it. Why do you think, though, that there are informed people who peddle that narrative? I mean, there are people in the industry, in Silicon Valley, who say, "We're about to obsolete a huge percentage of the jobs, and we're going to need universal basic income." Why do you think informed people have a whole different view of how that's going to unfold? What do you think, broadly speaking, they get wrong?

Yeah, you know, frankly I think that's the wrong approach, if I may say so, because, look, different people have different opinions on what AI can do. We take an opinion, and our opinion has been exactly that: to talk about the augmentation, the benefits, the expansion, the growth, the types of new markets you can now touch because you have AI behind the scenes. And that message has resonated far better for us as a business. In fact, it also allows us to work within the organization to get the support and the buy-in from the executives and the individual workers who have the knowledge in their heads.

So, when I hear someone say, "we're going to go in and change all of this," there's so much information that still resides in the heads of the actual workers. Your expert, whether it's your expert actuary or your expert financial advisor, has that knowledge, and walking in and saying, "I'm going to replace them and have a machine do what they do," is immature in my opinion, because it almost discounts the knowledge that's sitting in their heads. Instead, if you said, "Can I help augment you, and over time, as we learn more, you can go on to do bigger, better things, or service new aspects of the market that you didn't before," that to me is a far better message. So there are different tasks and different types of companies out there talking about where AI can be applied, and I do believe at a certain level, as I said, for highly menial, repetitive tasks, you can absolutely say, "I can replace you," and I think that's the right thing to do. But when I'm talking about business systems and driving a level of business function, you have to rely on individuals, because that's where the knowledge resides.

So you mentioned general intelligence at the very beginning, and you said we don't have that. We may not actually even be on a path to create it; maybe it's something that only shares the words "artificial intelligence" with narrow AI, maybe it's a completely different technology, maybe not, I don't know. But there's a different kind of fear being peddled around that kind of system, and it's by very notable people who say, "you know, it's going to kill us all." Elon Musk said, I believe, there's only a 5-10% chance we're going to survive it as a species, and all of the rest. So I have a few questions about that: 1) Do you think that kind of fear is justified, and if not, 2) Do you think it serves to slow down the development of AI, because it injects so much fear into people who are worried about it?

Yeah, that's another good question, because certainly we hear about that a lot, and I'll say this: there's a spectrum of fear and belief in what AI can do, and some people certainly are at the end of the spectrum that says "this is crazy, this is going to take over and have a drastic impact on how we live and our interactions with machines," and all of the doomsday scenarios. I'm not there, I'll tell you honestly, I'm not there. I'm very much focused on the value that this can bring. But I do believe in one of the things that I hear from the naysayers and the folks on the other end, which is that it has to be controlled. There's this notion of having responsible AI: auditability, as I said earlier, explainability, evidence, but more importantly, the ethics around AI are going to start getting more and more important, because this technology is very rapidly getting to the point where there are a lot of things it can do. And if it's left uncontrolled, then, and I'm not suggesting this is anywhere in the near future, you can see fairly drastic results 10 or 15 years down the road.

That said, if we, as responsible companies in the AI space, now start establishing a foundation for responsible AI and AI ethics, then it's a good thing, and this is not just for the big players. This has to be a combination of folks like CognitiveScale and multiple other AI companies working with the Googles and the Microsofts and the IBMs of the world and so on. And to that end, CognitiveScale has actually sponsored, and is a founding member of, something called AI Global, and the mission for AI Global is very much around building a level of responsibility and trust around AI. So my summary of what I just said is: I'm not with the doomsdayers today, I believe there is a scenario where this plays out well, but I do believe we have to start now on being responsible and on the ethics of AI.

Right, that assumes... assuming that's true, that assumes that everybody wants AI to behave ethically, and any number of bad actors won't. I mean, you're suggesting a set of agreed-upon principles, like we had in medicine and other fields, where everybody agrees we're not going to violate these things, but some people still can. So one very real concern is that something like 14 or 15 militaries around the world are figuring out how to embed artificial intelligence into weapons, and frankly to make autonomous kill decisions, and this is seen as kind of an upgrade from a dumb weapon, which might just be a landmine (a very simple artificial intelligence that just blows up if something weighing 50 pounds steps on it), to something far more sophisticated that may sniff for gunpowder or recognize a uniform, or who knows what. Does the military application of narrow AI concern you?

It very much does, and I think this goes back to a broader point. Look, I have been studying this for a bit; I've been reading up on the whole notion of weaponizing AI. It's concerning, and at a country level, and I can't at this moment disclose who, there's a world-level organization that we have actually started working with, I would say taking part in interactions with them, around using even a country-specific view of how we can drive the ethics around AI. Look, we're a small company, but I can tell you this: it has to start at the grassroots level. It cannot be only the bigger organizations at the top making decisions around the ethics of AI. It has to be grassroots, and it also has to be, as you said, like in healthcare, where you have standards, there are bodies, there's regulation. You have to do it now, and you have to do it in a way that doesn't impede the progress of AI and the good that can come from it, while ensuring that the ethics around the bad are managed. That's the best answer I can give you at this point, but I can tell you there's a lot we're doing at CognitiveScale right now to really drive this whole notion of responsible AI.

Let me give you two counter-examples to that, just to get your thoughts on them. The first one would be, somebody would say, "Look, war is bad, nobody's for it, but we have to have weapons, so we make bombs, or a drone, and you drop that in and it just kills everybody." If we could somehow make a weapons system that was smart and only tried to kill bad guys, why is that not better?

You're right. I mean, look, at the end of the day, war is war, and it's happening right now, as we speak, regardless of AI. Look, the dangerous aspect of this is not so much the fact that you can do it in a better way; it's what happens when it gets to rogue states, where there's no control. And that's sort of where I'll leave it, because you have the situation with nuclear weapons: the fact that I have those weapons and can deploy them in a good way, and you can use them for really good things, but on the other hand, you have rogue nations that can attack others and do crazy things that you can't explain. I would use that analogy to say there's always the good and the bad. But again, it comes back to the responsibility and the controls and the governance around how AI is used, and that applies broadly, not just to the military but to the military, to society, to how people consume this. There's a whole layer of ethics and control and governance around AI that I think has to be thought through now.

Autonomous vehicles too, if I was to extend this, right? Who makes the call when the vehicle decides it has to save the passenger in the car, and optimizes for that by killing the person on the road, right? So there's a whole bunch of thinking and control needed around how AI systems are used. It's going to change the whole legal landscape around how AI is done, things that people have just started to get their heads around. All of this is happening, Byron, because AI is now reaching the point where people are consuming it and starting to see the value from it, and so now they're getting to this different level of thinking about all the other stuff that comes with it, such as the usage, and the legal aspects, and the contractual aspects and so on.

So, for 2,000 years we've had code makers and code breakers, and then the code makers make a better code and the code breakers break that, and it's like this dance. In the world of online security, do you think AI helps good actors or bad actors more? Does it help bad actors break into systems, or does it help good actors spot that behavior, or have we just raised the whole struggle one more notch so that it continues to be kind of deadlocked?

That's another great example, I think, of where you'll have the good side of using AI and then, on the flip side, people misusing it for bad things, right? Look, we're not specifically in the cybersecurity space; I know lots of companies that use AI in that space, but we certainly deal with it when we work with a set of systems for our clients. I think using AI even today to do good around cybersecurity, to figure out exactly where hacks or the imperfections in existing security are happening, is a good thing, right? And look, I personally, Akshay Sabhikhi, fall in the camp of being a little more optimistic, but with the right controls around it. I always believe that good things can come out of it, but you've got to make sure that you're always keeping track of where people will misuse a certain technology. We've seen this over and over again with other waves of computing that have come in, where they seem really good, there's lots of good that comes out of it, and then you have the flip side where lots of damage can be done, and I think AI is no different. Except the biggest difference here, in my opinion, is that the power of a technology like AI is immense, on both the good side and the bad side. On the good side, its ability to transform industries is going to come at a rate and pace that we haven't seen from any other technology. On the flip side, which is exactly what you're bringing up Byron, it can be used in pretty bad ways, right? Which is why I think the whole governance side, the responsibility side, and the ethics side have to be thought through now, and soon.

Well, let me ask you a couple of broadly philosophical questions. I assume you followed the AlphaGo match with Lee Sedol?

I did a long time back.

Right, 2 years ago...

Not fully...

But basically there was a pivotal moment in one game, where AlphaGo made this move, and everybody watching was like, "What in the world is that thing thinking?" And then the DeepMind people looked inside and said, "Only 1 in 10,000 humans would have made that move." And Lee Sedol looked at it, and he was blown away, and in that moment people started talking about the creativity of AlphaGo. Do you believe that machines can actually be creative, or can they just simulate creativity, or is there a difference between those two things?

That's a good one. I think machines can be, and over time will become, creative. Look, about three or four years back, nobody would have believed the type of learning and the ability of a system to, uninterrupted, without stopping or re-training, just take feedback and get to a level of learnability that was unthinkable about five years back. But we've demonstrated this with some of the techniques we've used. I think we will get to a point where there is so much information being fed to the machine that it starts coming up with new options that humans would look at and say, "I never thought of that." That match was a perfect example two years back; I didn't follow the whole thing all the way through, but I did read enough about it to know that the creativity aspect is definitely coming. I can't say when, but I can think of enough techniques right now that can get us to the point where, in the next few years, you start seeing fairly interesting combinations of things. Humans will say there was no historical evidence or any other information backing that up, but the machine came up with a suggestion that will play out over time, and we'll say that it was actually an extremely creative and viable solution.

But if that is true, then that is not explainable, is it? You can't explain why the machine came up with that, so you kind of lose explainability at that point. You just say, "Well, the machine's just creative, it just figured that out, I don't know why... no human would have done that." So doesn't explainability run contrary to a creative machine?

Actually, I don't think so. I think it actually runs with it, because at the end of the day, the fact that you're coming up with creative solutions that haven't been thought of before, or that no prior set of actions would have directly led to, doesn't go against the explainability aspect. Explainability to me is really saying: I looked at seven different pieces of data, or I looked at four different things in terms of actions, and I was able to combine them and come up with this; I'm giving you a full trace of how I came up with the action, right? So to me, explainability is an important, inherent capability within an AI system: being able to explain how it deduced and came up with a certain prescription. That is different from saying, "I came up with a prescription that has never been done before, but trust me, this is likely to work." So I actually think explainability is an important ingredient. Even in the example you gave, with the recommendation or prescription, explainability should just tell you: how did I actually come up with that, what algorithms did I use, what was the source and the evidence of the data, and what scoring or ranking or clustering algorithm led me to a certain outcome, right? So I look at it as something that goes hand in hand with AI systems.

So we're coming up on the close of the show here, and I do want to ask one more question. This is unrelated to intelligence per se, but humans are obviously conscious: we feel warmth, whereas a computer can only measure temperature. Nobody ever says a computer can feel warmth. If you believe in creative computers, do you think we'll ever have the possibility of a conscious computer, and do you think computers can even be conscious, philosophically?

Good one. Look, down the road, if you think about how these things will evolve... it's a great question, frankly. What is consciousness, at the end of the day?

Well, it's the experience of something, you taste a pineapple and you actually experience it, whereas a computer never tastes anything.

Yeah, it never tastes anything, right. So look, just a few years back you could say, "Computers don't see anything; what they see is essentially images, and then they transfer that image." But look where this stuff has already come, right? You're able to get into highly complex images today, down to fairly descriptive details of what's happening in that exact scene and how you could recreate what's happening there. Computers are able to assimilate and understand that, whether it's an image, a video, sentiment, or intonations in a voice, right? So you're able to bring all of that together today and at least understand it. Now, consciousness is several levels higher, right? Which is: (a) if I can understand it, that's step one, but can I do something with it, can I react as things change, do I have emotions that go with it, and so on? So I would say it's more than just understanding; it's really: can computers have emotions? Can they react or give you positive and negative feedback in the way that humans do? I think that's the bigger question. The way things are advancing right now, we've already come a long way. I can absolutely see, down the road, that you could have a level of emotion and original reactions from computers as they see images. They are already seeing and deciphering images, but imagine the ability to react to that and say "I like this" or "I don't like this," and "I have my own mind now," which is different from the way you think as a human, right?

Akshay, it's been a fantastic time chatting with you. Tell us how we can keep up with what CognitiveScale's doing and you as well. Do you tweet or blog or anything like that? Can you just close us up?

Yeah, you bet. Look us up at www.cognitivescale.com; we're based out of Austin, Texas, and we're very active on Twitter and on LinkedIn. You'll hear us talk about case studies; we're big into the practical application of AI, so if you follow us, you'll get a chance to see firsthand what we're doing in the various industries we work in to drive real business outcomes, because that's really where I believe the difference is between what we do and a lot of other companies.

Thank you very much for being on the show. You're welcome to come back any time you like, and keep doing the good work.

Thank you Byron, pleasure.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.