In this episode, Byron and Tiger talk about AI, augmented intelligence, and its use in the enterprise.
Tiger Tyagarajan is the President and CEO at GenPact. He holds a degree in mechanical engineering, and he also holds an MBA.
Byron Reese: This is Voices in AI, brought to you by GigaOm, I'm Byron Reese. Today I'm so excited my guest is Tiger Tyagarajan, he is the President and CEO at GenPact. He holds a degree in mechanical engineering, and he also holds an MBA. Welcome to the show Tiger.
Tiger Tyagarajan: Byron, great to be on the show, thank you.
So let's start, tell me about GenPact, what your mission is and how it came about.
Our mission continues to be, Byron, to work with global enterprises in a variety of industries, to actually help them become more competitive in the markets they are in. We do that by actually helping them undertake change agendas—transformation agendas to drive value for them—either by helping them drive growth or better pricing, or better risk management or lower fraud, better working capital, better cash flow etc.
Our history goes back to when we were set up as a 100% subsidiary of the General Electric Company (GE) in the late 90s. Then in 2005, seven years into our existence, we spun off into a separate company, so that we could serve other clients. Today we are about $3 billion in revenue, serving 700 clients across the globe. GE continues to be a big relationship of ours, but accounts for less than 10% of our revenue; everyone else accounts for the balance of more than 90%.
And tell me, you're using artificial intelligence to achieve that mission in some cases. Can you talk about that, like what you're doing?
So Byron, early days, I would say about 5+ years back, we came to the conclusion that digital is going to pretty dramatically change the way work gets done along many dimensions. We picked 12 different digital technologies to bring into the company, build capabilities around, and use to change the way a lot of our services get delivered and a lot of the way work gets done by our clients, and one of them was artificial intelligence. Within the family of AI, we picked computer vision, we picked computational linguistics, we picked machine learning: three examples that are very relevant to the kind of services we offer. We've gone down the path of building those capabilities, acquiring those capabilities, and partnering with other companies in the ecosystem on those capabilities, so that we can change the way work gets done and services get delivered in, I would say, a dramatic fashion that some of us could not have imagined.
Well, don't just leave it there, give me an example of something dramatic that's happened.
I'll give you a couple. Some of the clients that we deal with are banks, so think about a bank that is in the business of small and medium business lending. So, a half-million-dollar lease or loan for equipment to a mid-market company that is actually manufacturing a product somewhere in Ohio, etc. The way the small business lending world works is that the customer gives the sales person a bunch of documents: financial statements of the company, cash flows of the company, etc. A lot of those documents are produced by these companies in their own way; they are audited by a small audit firm somewhere in the vicinity, and therefore they are written up in different ways, with different accounting standards and so on.
Now when a bank receives it, typically they would have to change it to actually match their understanding of cash flow the way they define it. They have to recast all the numbers, they have to read the footnotes, and then after a few days, they have 5 questions to ask, so they go back to the customer, ask those questions, and finally [it] takes about 15 days, 20 days in some cases to say, "hey customer, I've given an approval for half a million dollars, go buy our equipment."
Now, in today's world, that is way too long. But if you bring in a combination of being able to read those documents, read unstructured data, read the language in the footnotes, interpret it using computational linguistics, and then convert it into a standard financial statement in the way that particular bank understands financial statements, the way their definition works… you could actually argue that the bank could take a decision in 30 minutes.
So think about the ability to tell a customer that their application for a loan to buy equipment is approved in 30 minutes versus 3 weeks. That makes a huge difference to the small/medium enterprise, to their business, to their ability to grow. And if you think about small/medium enterprises in the U.S., they are the backbone of this economy. We're beginning to see the use of this in a number of banking relationships.
I would say it's still early days, and it could make a huge difference to the top line of the banks, to the pricing power of the banks, to the ability to satisfy your customer dramatically. I think that is a great example of how the service changes, versus a human being spending a lot of their time actually parsing the data before they take a decision. In the end, by the way, the decision is still taken by the human being, who brings their expertise, which is why, when we think about AI, it's always a combination of man + machine.
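The recasting step described here, taking statements prepared under different conventions and mapping them into the bank's own standard definitions, can be illustrated with a toy sketch. The mapping table and labels below are invented for illustration; a real system would use a computational-linguistics model over the documents and footnotes rather than a hand-built lookup:

```python
# Toy sketch: normalizing line items from differently-prepared financial
# statements into one standard chart of accounts. The STANDARD_MAP lookup
# stands in for what a real platform would learn from language models.
STANDARD_MAP = {
    "turnover": "revenue",
    "net sales": "revenue",
    "revenue": "revenue",
    "cost of sales": "cogs",
    "cost of goods sold": "cogs",
    "trade receivables": "accounts_receivable",
    "debtors": "accounts_receivable",
}

def normalize(statement: dict) -> dict:
    """Recast a raw statement into the bank's standard definitions."""
    recast = {}
    for raw_label, amount in statement.items():
        key = STANDARD_MAP.get(raw_label.strip().lower())
        if key is None:
            continue  # in a real system, flag for the human underwriter
        recast[key] = recast.get(key, 0.0) + amount
    return recast

# Two companies reporting the same economics under different labels
# become directly comparable after recasting.
a = normalize({"Turnover": 1_200_000.0, "Cost of sales": 700_000.0})
b = normalize({"Net sales": 1_200_000.0, "Cost of goods sold": 700_000.0})
print(a == b)   # True
```

The point of the sketch is the shape of the pipeline, not the mapping itself: once every incoming statement lands in one standard form, the comparison and credit decision that used to take days of manual recasting becomes mechanical.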
You know, this may be a little bit of a tangent, but that calls to mind current legislation in Europe that says that if an AI decides something against you... declines a loan, or a credit card or something, you have a right to know why that happened. Are you saying the way you think of it is, the AI is just helping organize the data better, it's still a person that makes the call, and if that's the case, people aren't necessarily very good at saying why they did or didn't do a given thing right? Or, how do you see all that shake out?
Well it's actually a very, very, very important point that the world is just beginning to grapple with and understand. Which is why the human in the loop and man + machine is something that we are big believers in, that you cannot really, unless it's pretty obvious, and the algorithm is pretty clear, take a human out of the loop. So the way we would think about it in that situation is, the AI is really augmented intelligence. I know the world thinks about AI as artificial intelligence, we think about AI as augmenting human intelligence.
So think about a human being and all of a sudden, the human being can sense more, can use that sensing to predict more, to actually do all of that real-time, with fabulous customer experience. Then the human being says, "okay, I think I know exactly what decisions to take, and I know why I'm going to take it." So it goes back to, if today a risk manager in the absence of AI takes a decision, they do it on a certain basis that they can put down on a piece of paper and justify.
Tomorrow, they will actually have to say two things: "I got this input from AI, from the machine, and then I used the following 7 other considerations to finally say yes or no." Now here's where the rubber hits the road. I think the next question's going to be, "Tell me why the machine gave this recommendation. Can you do traceability of the decision of the machine?" And I would say that's one of the big sciences that is still being explored. For the specific example I gave you, the platform that we have came from an acquisition a few years back of a company based in Boston that had built it on computational linguistics, and it has traceability. That's one of the big things that risk leaders and chief risk officers look for when we work with banks.
In fact when you deal with regulators—it doesn't matter which industry—when you're dealing with the FDA on adverse event reporting, which is another example of a use for AI that we have, you had better be able to show how the machine helped you take a decision: can you trace that thing down? Now as you get to neural networks, as you get to that level of decision-making, it becomes much murkier to say exactly how a decision was taken, but in the world of computational linguistics, so far, our platform is able to do it. It's still a big question to be answered as AI becomes more ubiquitous.
Well I think you hit the nail right on the head. I think you can kind of see it as a continuum. On one end you have an AI that tries to diagnose a disease, and you definitely want to know why it thought that. On the other hand, you have an AI that suggests where you should go for dinner, and maybe that's not as important, that you understand the ins and outs of that algorithm.
So do you think, though, that traceability slows down the progress of AI? Let me make the case really quickly. If I called Google up and said, "For the search I'm targeting, I'm number four, and my competitor's number three. Why are they three and I'm four?" I assume Google would say, "We don't know. I mean, there are 600 different algorithms going into that. It could be any number of things." So if you said to Google, "No, no, you must be able to say why I got ranked four and they got ranked three," doesn't that inherently limit AI? Or are we just going to have to live with that limitation?
So again, a great point that you raise, Byron. I'd start by saying any technology has an opportunity to really add value to humankind, and add value to the world in the long term: make life better, make our planet more sustainable. That's been the journey of technology over hundreds and thousands of years. However, we all know that technology can also be used to harm humankind, in the wrong hands doing the wrong things.
Now by definition, therefore, if you want to find a way to increase the former and reduce the chances of the latter, you need governance, you need mechanisms of governing bodies, you need rules to be followed. One could call them bureaucracy, one could say, "hey, that's going to slow technology down," and the answer is, "yeah, of course it does a little bit." So I would say it is part of the journey that most technologies go through. I think the reason why AI is so front and center in this journey is that it is more ubiquitous than many other technologies, even before it becomes ubiquitous. That's the fascinating thing about AI: if you go back to the world before electricity, no one really thought that electricity would become ubiquitous, and today you can't live without it.
In the world of AI, even before it has become all-pervasive, the world is talking about the fact that it will become all-pervasive. So conversations on ethics, on governance, on misuse, on traceability, are all on the table. It could slow some applications down, because you can't launch without making sure that you have dotted the 'i's and crossed the 't's, but I think it's good for humankind. Otherwise you would have a series of blowups, maybe not very big ones, but they would slow things down even further. So a little bit of "go slow to go fast" is not a bad idea in this particular space.
Fair enough. So you've mentioned three areas of artificial intelligence, or augmented intelligence, that your company is focusing on. I'd love to go through each one of those. First you mentioned computer vision. What are you doing there, what are the challenges, what's the business use case, what results are you seeing, and all the rest?
So it's something that all of us can relate to, so I'm going to spend a couple of minutes explaining the situation. Think about anyone driving on the highway and meeting with, let's just call it, a fender bender; it's not a big accident, but we have a fender bender. Now typically you would call a 1-800 number, and you would then have to wait for someone to take photographs, or you would have to take photographs yourself. Today it's a little better because you can take photographs and send them in. But the world of AI and computer vision gets to the point where you open an app and you go around the car, because the app actually tells you exactly where to go and what pictures to take.
You're directed to take, let's say, 30 different snaps of the car from different angles, and then at the back end, as those pictures come through the wires, they hit a massive database of all kinds of fender benders and all kinds of accidents of all kinds of cars going back into history. Lo and behold, the algorithm runs in the background and says, "I think I recognize a pattern here that I'm going to call out, and I think this car is going to cost $732 to fix on this insurance claim. So here is an approval for you, Mr. or Ms. Customer, for $732."
Think about, "I'm in an accident, I'm sitting out there, and I get an approval on the spot for my claim." It is possible. Experiments are going on right now, and obviously these have to be fine-tuned a lot. Data is very important, without data, algorithms in AI are pretty meaningless and worthless, and then of course, we are big believers that domain is very important.
Do you really know how much and what kind of data to look for? It's important to go inside the car and look at the dashboard. It's important to go inside the car and see if the steering wheel and the suspension have a problem. So all of that is domain: domain intelligence about the car, domain intelligence about auto insurance, domain intelligence about the Honda car or the Mercedes car, etc.
So the combination of domain and context with data is what gives amazing power to AI. And if you bring that together in the context of computer vision, which is actually one of the more advanced AI technologies out there, I think you can change the world of auto claims management for the benefit of a huge number of consumers, for the benefit of insurance companies, for the benefit of everyone. And I think it's a use case that we are progressing on. It's still not industrialized, it's still under pilot and test, it'll take some time. I'm sure many others out there in various industries and the insurance industry are progressing on the same. And I'm sure we'll have our lessons as we go forward and go through these pilots and subsequently industrialize it, so that's one example of computer vision.
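The estimation step in that claims flow, matching features extracted from the customer's photos against historical accidents to quote a repair cost, can be sketched with a toy nearest-neighbor lookup. Everything here is invented for illustration (the feature names, the claims data, the distance weighting); a real system would learn the damage features with computer vision models:

```python
# Toy sketch: quote a repair cost by finding the most similar past
# claims. Features and historical data are made up for illustration.
import statistics

HISTORICAL_CLAIMS = [
    # (dent_area_cm2, panels_affected, repair_cost_usd)
    (120, 1, 650.0),
    (150, 1, 732.0),
    (140, 1, 710.0),
    (400, 3, 2400.0),
]

def estimate(dent_area: float, panels: int, k: int = 3) -> float:
    """Median cost of the k most similar past claims."""
    def distance(claim):
        area, n_panels, _ = claim
        # Weight panel count heavily: an extra damaged panel matters
        # more than a few cm^2 of dent area.
        return abs(area - dent_area) + 100 * abs(n_panels - panels)
    nearest = sorted(HISTORICAL_CLAIMS, key=distance)[:k]
    return statistics.median(cost for _, _, cost in nearest)

# A single-panel dent similar to past claims gets an on-the-spot quote.
print(estimate(dent_area=145, panels=1))   # 710.0
```

The design point the episode makes is visible even in this sketch: the algorithm is trivial, but it is worthless without the claims history behind it, and without the domain judgment about which features (dashboard, steering, suspension) to measure in the first place.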
Excellent. And the second one was computational linguistics. Are you talking about conversational technologies like chatbots, or are you talking about voice recognition, or what's your area of specialization there?
So computational linguistics is actually language, so obviously it could be spoken language whether it is a chat or a conversation etc., but you could always convert that today with very good technology from speech to text. So let's really focus our attention on text, human language.
Human language, amazingly enough, is one of the more complicated puzzles for machines to unravel, for AI to unravel, because there are so many nuances, there are so many ways in which expressions are used, etc. So the use case that we have been working on, and have actually industrialized, is the one that I went through: can it read balance sheets and P&Ls with footnotes, written in English, or for that matter in 10 other languages, and convert them into financial statements that have a standard form and a standard definition? And the answer to that is, yes, it's possible.
So take unstructured documents: the world of business probably has billions of people trying to read an unstructured document, make sense out of it, and then compare it to something else. So I'll give you another use case of this. Think about a large retail grocery chain that spends $1 billion on transport as it supplies groceries from its wholesale distribution units to its retail outlets spread out across, let's say, the U.S. And it uses transport contracts -- probably 7,000 contracts with 7,000 transporters across the U.S. Some are large contracts, some are short contracts, because one might only run from the city of Chicago to the suburbs of Chicago.
Another could be from San Francisco right up to Chicago; that's a long transport contract. Now, every day, that retail chain would receive hundreds and thousands of transport bills, and a transport bill would say, "hey, for transportation that I did for you from Chicago to the suburbs of Chicago, you owe $230 for that truck," and sitting inside of that bill is a variety of line items. Let's say one line item is: the truck got delivered to the retail outlet at 8:15pm, therefore the customer has to pay 30% overtime charges for the driver, which could mean the bill goes up by $22.
Now, in the world without AI, when that bill comes in, someone quickly scans the bill and makes sure it's paid. In the world of AI, the machine would actually read the contract and be able to pick up the language that says, "for this particular delivery contract with this transporter, from Chicago to Deerfield, Illinois, the overtime kicks in only at 9pm, not at 8pm; therefore this extra $22 is not right. The $22 comes off the bill, and $230 is what should be paid."
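The bill-versus-contract check just described reduces to a small piece of logic once the contract terms have been extracted. This is a toy illustration only: the data model, names, and hand-coded rule below are mine, standing in for what the real platform would extract from contract language with computational linguistics:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class Contract:
    route: str
    overtime_starts: time     # cut-off written into this specific contract
    overtime_charge: float    # surcharge in dollars if delivery is late

@dataclass
class InvoiceLine:
    route: str
    base_amount: float
    delivered_at: time
    overtime_billed: float    # surcharge the transporter actually added

def audit(line: InvoiceLine, contract: Contract) -> float:
    """Return the amount that should actually be paid for this line."""
    owed = line.base_amount
    # Overtime is payable only if delivery happened at or after the
    # contractual cut-off, regardless of what the transporter billed.
    if line.delivered_at >= contract.overtime_starts:
        owed += contract.overtime_charge
    return owed

# The episode's example: delivery at 8:15pm, overtime kicks in at 9pm,
# so the $22 surcharge on the $230 bill should be rejected.
contract = Contract("Chicago -> suburbs", time(21, 0), 22.0)
bill = InvoiceLine("Chicago -> suburbs", 230.0, time(20, 15), 22.0)
print(audit(bill, contract))   # 230.0, not 252.0
```

The hard part, of course, is not this comparison but getting from 7,000 differently-worded contracts to structured terms like `overtime_starts`; that extraction is where the computational linguistics sits.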
I mean, the number of situations is enormous where you have unstructured data, in documents and other forms, that then has to be compared with structured data (bills or claims or financial statements), and then an answer has to be provided and a decision has to be taken. So one way to think about it is: in the world of decision-making, where prediction and decision-making are important and human beings used to do it, there is a premium on how much time a human being can spend on things, and there's a cut-off.
Unless something is valuable enough, it's too difficult for a human being to spend time on it. A machine can spend all the time it wants, because the machine can do it very fast at almost zero cost; therefore prediction and decision-making in that environment, using language and data, becomes ubiquitous and almost free. That changes so many things. If you can make the cost of prediction zero using AI, think about where you can use prediction. You can use prediction in so many places you don't use it today.
So you know the difference between narrow AI, which is AI that's built to solve one problem, and this idea of a general intelligence, an AI that is as versatile as a human. Take that to your linguistics example. When you talk about reading contracts or financial statements (we'll go with that one), you have a really constrained vocabulary of words that are going to be in there. There's fiscal year and cash flow and... but you're not going to have ice cream sundae and all the other things. So is it your belief that this is doable because we're dealing with a really constrained vocabulary, or do you think we're going to crack the Turing Test problem and understand general rules for reading all kinds of stuff?
So I'm going to use the phrase that you used, which is "you hit the nail on the head." I would say that today the world we grapple with is the world of artificial narrow intelligence. I personally, and the company GenPact, are big believers in the narrow application of AI, to augment the way human beings take decisions and do prediction, in order to improve three things: first, the experience of users and customers, a dramatic improvement in experience, almost to the point of saying "Wow." Second, cycle time: you can almost imagine everything becoming real-time. And third, the ability to really predict, so that decision-making becomes so much better.
It's at the intersection of those three, but it's narrow, which means you're trying to solve a specific problem. The problem could be, "Hey, I want to identify specific cancer tissue in the specific portions of the human body that I'm going to feed you data on." Or, "I'm going to find a way to solve for the signals that are coming out of these millions of data points," and by the way, it's a needle-in-a-haystack search for a very, very faint signal that tells me that maybe this particular drug, prescribed under the following very unique conditions, creates a problem that is a real human problem in the world today. But that's a very specific problem.
Our belief is, the world of AI has so much opportunity in its narrow application that that's where a lot of our time, and the world's time, is being spent right now. And that's why it requires data from that domain, and it requires the intelligence of the people who understand the domain.
So to your point, what does fiscal year mean? What does fiscal year mean in Japan, or in India or in Europe or in the U.S.? It's different. By the way, what does it mean when somebody says, "I changed the fiscal year this time to 9 months, because I had to change the fiscal year for a deliberate reason." How do I identify that? But that's a great example where some very clever finance people have spent time with the machine to actually help it learn the language better, to then be able to apply it.
So the computational linguistics platform is the base platform without which nothing can happen. But unless domain is built into it, it can't solve a specific problem. Then finally to your question: Is there a world where ultimately general intelligence, AGI will happen, and is that going to be able to solve any problem at a more horizontal level?
I'd start by saying I'm probably the wrong person to answer the question, because I haven't studied it at a level of science that makes me capable of answering it, but I'll give you my view. My view has always been: in the long run, we all underestimate where technology's going to take us; in the short run, we overestimate. So in the long run, I would argue that language is going to get decoded almost to the point of generality; the ability to read and understand any language, and understand the context, etc., is going to get better and better and better.
But it's still far away from extracting emotion, empathy, deterministic decision-making, and this is one of the interesting things about AI. Say the pattern says this is the decision you have to take; the machine says, "this is the decision you have to take." And the human being says, "I got it, and I'm deliberately going to take the opposite decision." Only a human being can use willpower to take the opposite decision to what the prediction is telling them. I don't know whether an AGI would ever get there.
Well, I think you're right in that. When I think of a narrow AI reading something and "understanding" it, and I use that word very cautiously because the computer doesn't understand anything, in the sense that we use the word... I've seen it work very effectively on recipes, because they have ingredients, those ingredients have ounces and temperatures, and there are 30 or 40 terms, like sauté and julienne. It's highly reliable to feed recipes into an AI and have them turned into a database.
And I just wonder, from a practical standpoint, if you want your AI to evaluate contracts and just say, what am I agreeing to do... did this person fulfil the contract? I wonder if we're going to get to a point in any reasonable time, where we have enough faith in it, because a contract has, like you said, all of this nuanced language in it. I hear what you're saying about human time is expensive and computer time is cheap. But the savings we get by using computer time have to pay for all of the inaccuracies that it may have.
So do you think that, for a long time to come, the computer's going to try to take that contract and turn it into structured data, but a person's still going to have to go through and check it all? Or do you hold out hope… because I think contracts are a great case. Like, I get things right now where the numbering isn't even right: it'll have 1, 2, 4, 5 and 7, because something got edited. Or they'll have dates in a European format sometimes and in an American format sometimes. The simplest things in the world still get through, and as much as I would love this world where the machine turns unstructured data into structured data, I just wonder when it's going to happen. It sounds like you're actually working on real use cases, so that's why I'd like your thoughts on that. Give me a prediction of what'll happen next year, or on a reasonable time horizon.
I would say we're very early in that journey, so some of the use cases that I just described are still not fully industrialized. I mean, they're still experimental. When we work with a client, they often start, in any of these examples, with: "let's start with one little portion of my business, and when we start, let's make sure we do a parallel run for the next 6 months, where every decision that is taken by a machine is compared to a human being taking that decision. Let's see how that plays out. Then let's add something else, then let's add something else."
So I would say we are in the early phases of that journey, and therefore I think there is a long way to go before really critical human decisions are taken over by machines. That's one way I would answer the question. The other way I'd answer it is that there are so many simple things that machines can take over and do, and there are so many of them, that this then allows the human being to spend more time on the things they actually should be spending more time on. That is why the word augmented is so important.
So think about the large finance teams in companies: thousands and thousands of people, CPAs and financial experts, sitting in companies, whose job it is to analyze financial information, sales data, market share data, pricing data, etc., and actually come up with, "Here's where we think the world is going, here's what we think is happening in the business, here is the trade promotion this customer ran on this product, and here is the impact of that trade promotion on sales," etc.
The reality is, if you do a poll, most of those teams spend 70% of their time collecting data, cleaning data, doing what we just described: oh, there's a European date, this is not an American date, and so on and so forth. And in the last 30% of the time there's a scramble to add value. If a lot of that 70% gets done by a machine, at least on the simpler tasks, the human being can expand their value, because the time available to them expands so much. So I think there's a big journey on that now.
Having said all that, we all know that these are geometric progressions, these are exponential curves, so I would say that sometime over the next few years, some of us are going to be surprised by some of the things that machines can do and will do. I would still argue that there is so much a human being could do, if only they could do it fast enough. There's so much a human being could cover, if only they had enough time.
Not to mention the billions of people in the world for whom a lot of things are not even accessible. I think AI is going to make so many things accessible to the 2 or 3 billion people who otherwise may not have access to healthcare, to finance, to contracts, to insurance, because the cost of prediction, and the cost of delivering it, become real-time and close to zero.
I mean, think about healthcare. Let's take the country I grew up in, India. I would argue two thirds of the people in India don't have access to great healthcare because they don't have access to great diagnosis, because diagnosis is too expensive, it's too far away, etc. Now think about hand-held devices being able to understand data that they measure, pretty much on the ground, in a highly distributed, democratized way, and being able to predict diabetes, the fact that you are going to get diabetes 15 years from now, or that you're going to go blind by the time you hit the age of 35. These are two chronic illnesses, for example, that countries like India face.
You could democratize healthcare and make its cost so low that everyone has access to it. So the question comes up: is this going to destroy jobs? At one level the answer is yes, but at another level, there is so much to do, the world has so much value yet to be created, and there are so many people who have no access to it, that this is one more major breakthrough in technology, like electricity, like the steam engine, that is going to change the world in many, many ways.
You let that slip by, that you think AI's going to cost jobs. Do you mean that it's going to destroy some jobs and create some jobs and the effect nets out, or do you think we're actually going to see substantial net job loss from the technology?
No, I'm a big believer that it will be a net gain. But that's no surprise, because I'm an optimist at heart, and I also look back at history and say, "it's always been the case." Of course this is different, because it is probably faster in its cycle time, and because it's impacting a generation within its own lifetime. Those are two big differences from the industrial revolution, or the revolution of electricity, or even the computer revolution: the evolution of AI could actually impact people who are 30 years old today, who at the age of 40 may have to re-invent themselves.
But at the same time, I'm also a big believer in the entrepreneurial, innovative, survival spirit of humans. I think we'll break through into a new world, where we use all of these things to do new things which we can't even imagine today. I read a research article published by a professor at the University of Toronto, whom I follow closely on the topic of AI, and the example he gave fascinated me. He talks about semiconductors basically making mathematics and arithmetic zero cost, and what that did to photography: it went from chemistry-based photography in the world of Kodak, to digital photography in the world of binary zeros and ones. And as a result, a whole industry got destroyed.
Kodak and all of the people involved in that industry got destroyed, but a whole new industry got created in digital photography. The number of people involved in digital photography I think far exceeds the number of people involved in chemistry-based photography. You and I take a million more photographs in our lifetime than otherwise we would. We use photography for things that we could never imagine. We use photography to actually pay insurance claims after a hailstorm, because we can assess roof damage based on photographs from satellites. Who could ever have imagined that?
So I believe AI is going to do to prediction, what semi-conductors did to mathematics, and we're going to use prediction where we can't even imagine today, and that's going to create jobs. So, in the short run, like most technology revolutions, there's going to be pain in groups of populations who are going to go through that pain because their jobs are no longer needed. New jobs are needed, so there's an element of retraining, reskilling, relearning that has to happen.
New kids that go to college have to get prepared to learn new things, otherwise they will not be valuable in the workforce of tomorrow. All of that has to happen, and I think the responsibility falls on the individual, on companies and enterprises across the globe, including our company, and on governments and universities. All of us have to retune ourselves to the new, new world. I'm a huge optimist: there'll be net job creation. The problem is, I don't think we can write down what those jobs are.
I would agree with all of that, but I don't even think you'd have to go back to the industrial revolution. You can go back just 25 years, to the creation of the Internet, and it's changed so much, right? It upset the applecart and overturned a bunch of industries, created a lot of wealth, created new opportunities, created a million companies. It made Google and Amazon and Etsy and eBay and Airbnb and all of the rest, but do you think it was disruptive to employment? The unemployment rate remained low throughout the last 25 years, right? So do you think what we saw in those 25 years with the Internet may be a good indicator of AI? Or are you saying AI is going to be more disruptive to employment than the Internet was?
I think it's a fabulous point Byron, because here is what history has shown us in the world of the Internet. There was a theory of the case that bank branches will dramatically decline, and disappear, that you won't need bank tellers at all. That theory was postulated when people said "hey here's the ATM, and there's web-based banking, and mobile banking and you're done."
Well, there was a period when the number of branches and the number of tellers actually grew initially. And the reason it grew was because more people started banking, it was easier to bank, and in the process of more people banking, and of people going to the ATM, they actually had more questions. Therefore they walked into the branch with the one teller sitting there, who no longer did the old job of counting cash and handing it over, but did a new job of actually answering some complex questions, and so on and so forth. So the number of bank tellers went up. Now, I think today tellers are going down, branches are going down, but it's been a 20-year journey.
The only thing that I wonder, and I don't have an answer to your question, so I'm actually copping out a little bit when I tell you I don't know: part of me says that this will follow a similar, not-that-disruptive journey, where unemployment rates will remain in the band they've been in, give or take a few percentage points. But the other part of me says, "Is this a far more exponential curve than anything that's ever been? Is there a combination of forces here, of Moore's Law, storage, speed, availability of electronic data, and the ability to compute at this pace and speed? Has all of that come together to almost create a fusion reaction, or a fission reaction?"
I don't know. Something tells me that human beings are human beings because of the entrepreneurial spirit. We'll find a way to create value, even in that journey. Now, individually, I think people are going to get impacted; some people are going to be left behind, just as all revolutions have left some people behind. I think society has to find a way to bring them along to the extent it can. Some of the things we've seen across the globe, populations rising up and saying "we're getting left behind," and this has happened in so many economies, are, I think, partly a reflection of some of that technology revolution, and we're not even talking AI right now.
So, you mentioned three areas that you're specializing in: computer vision, computational linguistics, and machine learning. Can you just talk a little bit about… I mean, yours is an incredibly large company and I assume you have lots of things going on there, but can you speak to either your overriding philosophy of what you're trying to do, or some specific cases or techniques and all the rest?
So, I'm not a deep technologist. I'm an engineer by background, but I've spent a lot of my time trying to solve business problems, so I'm not going to venture too close to answering the technique question (I'm going to leave that out). But we are beginning to use machine learning in areas where we can get to answers faster, and in reams of data that human beings would otherwise have had more difficulty learning from.
I'll give you a use case that we are working on, actually in pilot with a couple of pharmaceutical companies. In the world of pharma, for every drug out there, from every pharma company in the world, in every jurisdiction in the world, every customer and consumer of that drug sends in some information, in some form or fashion, saying, "I had a tablet, blah, blah, blah, last night... and this morning I don't feel that great." And I send a tweet, or I make a phone call to an 800 number, or I send in a note, or I send an email: all kinds of ways of sending in unstructured data.
From a regulatory perspective, every pharma company is supposed to analyze the information that comes in, and then report back to the regulators what their view is on that particular incident. So think about millions and billions of incidents coming in every day, and you've got to parse it, you've got to define it, you've got to bucket it, and then you've got to report it, and then you've got to watch for a couple of signals. You can imagine the amount of false positives here: I take a tablet, I have a glass of wine, I'm not supposed to have a glass of wine with that tablet, and I have a headache the next morning. I send in a complaint saying, "Hey, I had a tablet and I have a headache." "Well, by the way, you had red wine. You shouldn't have had red wine, so it's your problem."
So, in that world, human beings today gather that information, analyze it, compare it to past ways we've handled those cases and to dictionaries of data definitions published by the FDA and various regulators across the globe, and then prepare the report to the regulator. Think about the effort, but also think about the time that takes. And the important thing here is that the longer it takes, the more likely it is that some unique event you should have caught seven days earlier, you are now looking at seven days later, and that could mean lives.
Now apply machine learning to all the data coming in, because the data is so fragmented and so inaccurate in the way it describes itself. How do you use machine learning and past information to create the feedback loop? As more and more information gets processed, the machine gets cleverer and cleverer, and therefore the output gets better and better. You do that for one customer, and then a second customer comes in and says, "I want to do the same thing with the same machine."
So the machine starts doing this for 10 companies in the pharma space, and all of a sudden the machine has really become clever. The first customer gets the value of the other nine customers, and so on. That's a great example of where we are beginning to use machine learning, in that narrow space of solving for adverse-event reporting in pharmacovigilance, so we call it a "pharmacovigilance artificial intelligence solution." It's a big problem that pharma companies are grappling with, and if you can find a way to solve it, reducing cycle time and cost, that's a big win for humankind.
I would say our use of machine learning is still in its early days. There are some simple use cases, but the more complex ones, like the one I described, are, I think, a little behind. And again, we are big believers that in the overall scheme of AI, computer vision, computational linguistics, and machine learning are three families under AI. There are many more, but these three are the ones that, in our kind of business, we are focused on to change the way work gets done and add value to companies and to human beings.
So, like you, I'm an optimist. I think AI is the power to make better decisions, and I don't know that that's ever a bad idea. I agree with you about the so-called bottom billion, people who haven't had access to healthcare, and the example you gave that having a smartphone and the power of AI is going to empower them to do all of that. I'm bullish on the creation of jobs and all the rest. But I wonder if you'll take just a moment and talk about any worries you have about the technology. Are you worried about its application in warfare, or its potential to… I don't know. What are some aspects of artificial intelligence that you think we need to be mindful of?
Many, many of them. So you picked one that I know one of your podcasts referred to, and this applies to so many technologies. If you look at nuclear fission, what a great technology from the perspective of producing power. But at the same time, how far the world has gone down the path of misusing it for nuclear warfare, and how much that is still a topic of conversation today. Same technology, two different directions.
I think the world of AI is going to be filled with such potential pitfalls, potential opportunities, and incredible challenges of misuse. I think governance therefore becomes important. It's important for countries to come together. I suspect countries themselves could go down the path of misusing AI. Some countries have actually declared very proudly that they are going to own AI and therefore they're going to own power, which is interesting, and I would suspect, on some level, probably true.
So, I think it's going to be incumbent upon thought leaders, academia, scientists, technologists, sane and rational governments, and voices such as Voices in AI, to actually bring that to the fore, to make it a real topic, to not brush it under the carpet. Because you could have warfare created by machines that decide it is time to fight, with humans not even involved. That's a world that's possible if you allow AI to get there. At some level, controlled nuclear fission is what produces great nuclear power. Similarly, controlled AI is what produces great value. I don't think uncontrolled AI is good for anyone in the world.
Well, I think that's probably a good place to leave it: a lot of optimism, but a word of caution as well. I want to thank you for being on the show. It was a fascinating hour, and I appreciate your time.
Byron thank you so much for having me on the show, really appreciate it. Wonderful conversation, and I will definitely be reading your book.
Thanks a whole lot.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.