Episode 50: A Conversation with Steve Pratt

In this episode, Byron and Steve discuss the present and future impact of AI on businesses.

Guest

Steve Pratt is the Chief Executive Officer at Noodle AI, the enterprise artificial intelligence company. Prior to Noodle, he was responsible for all Watson implementations worldwide, for IBM Global Business Services. He was also the founder and CEO of Infosys Consulting, a Senior Partner at Deloitte Consulting, and a Technology and Strategy Consultant at Booz Allen Hamilton. Consulting Magazine has twice selected him as one of the top 25 consultants in the world. He has a Bachelor’s and a Master’s in Electrical Engineering from Northwestern University and George Washington University.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I’m Byron Reese. Today, our guest is Steve Pratt. He is the Chief Executive Officer over at Noodle AI, the enterprise artificial intelligence company. Prior to Noodle, he was responsible for all Watson implementations worldwide, for IBM Global Business Services. He was also the founder and CEO of Infosys Consulting, a Senior Partner at Deloitte Consulting, and a Technology and Strategy Consultant at Booz Allen Hamilton. Consulting Magazine has twice selected him as one of the top 25 consultants in the world. He has a Bachelor’s and a Master’s in Electrical Engineering from Northwestern University and George Washington University. Welcome to the show, Steve.

Steve Pratt: Thank you. Great to be here, Byron.

Let’s start with the basics. What is artificial intelligence, and why is it artificial?

Artificial intelligence is basically any form of learning algorithm; that's the way we think of things. We actually think there's a raging religious debate about the differences between artificial intelligence and machine learning, and data science, and cognitive computing, and all of that. But we like to get down to basics and say that they are algorithms that learn from data, improve over time, and are probabilistic in nature. Basically, it's anything that learns from data and improves over time.

So, kind of by definition, the way that you’re thinking of it is it models the future, solely based on the past. Correct?

Yes. Generally, it models the future and sometimes makes recommendations, or it will sometimes just explain things more clearly. It typically uses four categories of data. There is both internal data and external data, and both structured and unstructured data. So, you can think of it kind of as a quadrant. We think the best AI algorithms incorporate all four datasets, because especially in the enterprise, where we’re focused, most of the business value is in the structured data. But usually unstructured data can add a lot of predictive capabilities, and a lot of signal, to come up with better predictions and recommendations.

How about the unstructured stuff? Talk about that for a minute. How close do you think we are? When do you think we’ll have real, true unstructured learning, that you can kind of just point at something and say, “I’m going to Barbados. You figure it all out, computer.”

I think we have versions of that right now. I am an anti-fan of things like chatbots. I think that chatbots are very, very difficult to do, technically. They don't work very well. They're generally very expensive to build. Humans just love to mess around with chatbots. So if you score on business value versus what's affordable and easy to do, chatbots are in the worst quadrant there.

I think there is a vast array of other things that actually add business value to companies, but if you want to build an intelligent agent using natural language processing, you can do some very basic things. But I wouldn’t start there.

Let me try my question slightly differently, then. Right now, the way we use machine learning is we say, “We have this problem that we want to solve. How do you do X?” And we have this data that we believe we can tease the answer out of. We ask the machine to analyze the data, and figure out how to do that. It seems the inherent limit of that, though, is that it's all sequential in nature. There's no element of transfer learning in that, where I grow exponentially what I'm able to do. I can just do: “Yes. Another thing. Yes. Another. Yes. Another.” So, do you think this strict definition of machine learning, if you're thinking of AI that way, is a path to a general intelligence? Or is general intelligence like, “No, that's something way different than what we're trying to do. We're just trying to drive a car without hitting somebody”?

General intelligence, I think, is way off in the future. I think we're going to have to come up with some tremendous breakthroughs to get there. I think you can duct-tape together a lot of narrow intelligence, and sort of approximate general intelligence, but there are some fundamental skills that computers just can't do right now. For instance, if I give a human the question, “Will the guinea pig population in Peru be relevant to predicting demand for tires in the U.S.?” A human would say, “No, that's silly. Of course not.” A computer would not know that. A computer would actually have to go through all of the calculations, and we don't have an answer to that question yet. So, I think generalized intelligence is a way off, but I think there are some tremendously exciting things that are happening right now, that are making the world a better place, in narrow intelligence.

Absolutely. I do want to spend the bulk of our time in that world. But just to explore what you were saying, because there's a lot of stuff to mine in what you just said. That example you gave about the guinea pigs is sort of a common-sense problem, right? That's how it's usually referred to. “Am I heavier than the Statue of Liberty?” How do you think humans are so good at that stuff? How is it that if I said, “Hey, what would an Oscar statue look like, smeared with peanut butter?” you can conjure that up, even though you've never even thought of that before, or seen it covered, or seen anything covered with peanut butter? Why are we so good at that kind of stuff, and machines seem amazingly ill-equipped at it?

I think humans have constant access to an incredibly diverse array of datasets. Through time, they have figured out patterns from all of those diverse datasets. So, we are constantly absorbing new datasets. In machines, it’s a very deliberate and narrow process right now. When you’re growing up, you’re just seeing all kinds of things. And as we go through our life, we develop these – you could think of them as regressions and classifications in our brains, for those vast arrays of datasets.

As of right now, machine learning and AI are given very specific datasets, crunch the data, and then make a conclusion. So, it’s somewhere in there. We’re not exactly sure, yet.

All right, last question on general intelligence, and we'll come back to the here and now. When I ask people about it, the range of answers I get is 5 to 500 years. I won't pin you down to a time, but it sounds like you're saying, “Yeah, it's way off.” Yet, people who say that usually add, “We don't know how to do it, and it's going to be a long time before we get it.”

But there’s always the implicit confidence that we can do it, that it is a possible thing. We don’t know how to do it. We don’t know how we’re intelligent. We don’t know the mechanism by which we are conscious, or the mechanism by which we have a mind, or how the brain fundamentally functions, and all of that. But we have a basic belief that it’s all mechanistic, so we’re going to eventually be able to build it. Do you believe that, or is it possible that a general intelligence is impossible?

No. I don't think it's impossible, but we just don't know how to do it yet. I think transfer learning – there's a clue in there, somewhere. I think you're going to need a lot more memory, and a lot more processing power, to have a lot more datasets in general intelligence. But I think it's way off. I think there will be stage gates, and there will be clues when it's starting to happen. That's when you can take an algorithm that's trained for one thing, and have it – if you can take AlphaGo, and then the next day it's pretty good at chess, and the next day it's really good at Parcheesi, and the next day it's really good at solving mazes, then we're on track. But that's a long way off.

Let’s talk about this narrow AI world. Let’s specifically talk about the enterprise. Somebody listening today is at, let’s say a company of 200 people, and they do something. They make something, they ship it, they have an accounting department, and all of that. Should they be thinking about artificial intelligence now? And if so, how? How should they think about applying it to their business?

A company that small – it's actually really tough, because artificial intelligence really comes into play when the complexity is beyond what a human can fit in their mind.

Okay. Let’s up it to 20,000 people.

20,000? Okay, perfect. 20,000 people – there are many, many places in the organization where they absolutely should be using learning algorithms to improve their decision-making. Specifically, we have 5 applications that focus on the supply side of the company: materials, production, distribution, logistics and inventory.

And then, on the demand side, we also have 5 areas: customer, product, price, promotion and sales force. All of those things are incredibly complex, and they are highly interactive. Within each application area, we basically have applications that almost treat it like a game, although it's much more complicated than a game, even though games like Go are very complex.

Each of our applications does, really, 4 things: it senses, it proposes, it predicts, and then it scores. So, basically it senses the current environment, it proposes a set of actions that you could take, it predicts the outcome of each of those actions – like the moves on a Chessboard – and then it scores it. It says, “Did it improve?” There are two levels of that, two levels of sophistication. One is “Did it improve locally? Did it improve your production environment, or your logistics environment, or your materials environment?” And then, there is one that is more complex, which says “If you look at that across the enterprise, did it improve across the enterprise?” These are very, very complex mathematical challenges. The difference is dramatic, from the way decisions are made today, which is basically people getting in meetings with imperfect data on spreadsheets and PowerPoint slides, and having arguments.
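To make that loop concrete, here is a minimal, hypothetical Python sketch of a sense-propose-predict-score cycle of the kind Pratt describes; the function and class names are illustrative only, not Noodle AI's actual product API.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class ScoredAction:
    action: Any
    predicted_outcome: float   # predicted effect on a local KPI (e.g., logistics cost)
    enterprise_score: float    # predicted effect on an enterprise-wide KPI

def decision_cycle(
    sense: Callable[[], Dict],                    # gather current internal/external state
    propose: Callable[[Dict], List[Any]],         # enumerate candidate actions
    predict: Callable[[Dict, Any], float],        # model the outcome of each action
    score: Callable[[Dict, Any, float], float],   # value that outcome across the enterprise
) -> ScoredAction:
    """One pass of the sense -> propose -> predict -> score loop."""
    state = sense()
    scored = []
    for action in propose(state):
        outcome = predict(state, action)
        scored.append(ScoredAction(action, outcome, score(state, action, outcome)))
    # Recommend the action with the best enterprise-wide score.
    return max(scored, key=lambda s: s.enterprise_score)
```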

So, pick a department, and just walk me through a hypothetical or real use case where you have seen the technology applied, and have measurable results.

Sure. I can take the work we're doing at XOJET, which is the largest private aviation company in the U.S. If you want to charter a jet, XOJET is the leading company to do that. The way they were doing pricing before we got there was basically old, static rules that they had developed several years earlier. What we did is we worked with them to take into account where all of their jets currently were, where all of their competitors' jets were, and what the demand was going to be, based on a lot of internal and external data: what events were happening in what locations, what was the weather forecast, what were the economic conditions, what were historic prices and results? Then we basically came up with all of the different pricing options, and made a recommendation on what the price should be. As soon as they put in our application, which was in Q4 of 2016, the EBITDA of the company – which is basically the net margin, not quite, but close – went up 5%.
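As an illustration only – XOJET's actual models are proprietary and trained on client data – a pricing recommendation of the kind described could be sketched as: predict expected demand at each candidate price from internal and external features, then pick the price that maximizes expected contribution. The names and the toy demand model below are hypothetical.

```python
import numpy as np

def expected_demand(price: float, features: dict) -> float:
    """Toy demand model: probability a charter quote at `price` is booked.
    In practice this would be a model trained on historical quotes, fleet and
    competitor positions, events, weather, and economic indicators."""
    base = 0.9 * features.get("event_boost", 1.0) * features.get("weather_factor", 1.0)
    reference = features.get("reference_price", 20_000.0)
    # Booking probability falls off as price rises above the reference price.
    return float(np.clip(base * np.exp(-(price - reference) / reference), 0.0, 1.0))

def recommend_price(candidate_prices, features, variable_cost: float) -> float:
    """Pick the candidate price with the highest expected contribution."""
    def expected_contribution(p):
        return expected_demand(p, features) * (p - variable_cost)
    return max(candidate_prices, key=expected_contribution)

if __name__ == "__main__":
    features = {"event_boost": 1.2, "weather_factor": 0.95, "reference_price": 22_000.0}
    prices = np.linspace(15_000, 35_000, 41)
    print(recommend_price(prices, features, variable_cost=12_000.0))
```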

The next thing we did for them was to develop an application that looked at the balance in their fleet, which is: “Do you have the right jets in the right place, at the right time?” This takes into account having to look at the next day. Where is the demand going to be the next day? So, you make sure you don’t have too many jets in low demand locations, or not enough jets in high demand locations. We actually adjusted the prices, to create an economic incentive to drive the jets to the right place at the right time.

We also, again, looked at competitive position, using Federal Aviation Administration data. You can track the tail numbers of all of their jets, and all of the competitor jets, so you can calculate competitive position. Then, based on that algorithm, the length of haul, which is the number of hours flown per jet, went up 11%.

This was really dramatic, and dramatically reduced the number of “deadheads” they were flying – the empty jets flown just to reposition the fleet. I think that's a great success story. There's tremendous leadership at that company, very innovative, and I think that's really transformed their business.

That's kind of a classic load-balancing problem, right? I've got all of these things, and I want to distribute them, and make sure I have plenty of what I need, where I need it. That sounds like a pretty general problem. You could apply it to package delivery or taxicab distribution, or any number of other things. How generalizable is any given solution like that to other industries?

That’s a great question. There are a lot of components in that, that are generalizable. In fact, we’ve done that. We have componentized the code and the thinking, and can rapidly reproduce applications for another client, based on that. There’s a lot of stuff that’s very specific to the client, and of course, the end application is trained on the client’s data. So, it’s not applicable to anybody else. The models are specifically trained on the client data. We’re doing other projects in airline pricing, but the end result is very different, because the circumstances are different.

But you hit on a key question, which is: are things generalizable? One of the other approaches we're taking is around transfer learning, especially when you're using deep learning technologies. You can think of it as the top layers of a neural net being trained on general pricing techniques, and just the deeper layers being trained on pricing specific to that company.
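A minimal sketch of that transfer-learning idea in PyTorch (assuming a PyTorch setup; the layer split and dimensions are purely illustrative): layers pretrained on general pricing data are frozen, and only the client-specific layers are fine-tuned on the client's own, often sparser, dataset.

```python
import torch
import torch.nn as nn

# Hypothetical network: `shared` layers pretrained on general pricing data,
# `client_head` fine-tuned on a specific client's (often sparse) dataset.
shared = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
client_head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
model = nn.Sequential(shared, client_head)

# Freeze the pretrained layers so only the client-specific head is updated.
for param in shared.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(client_head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One illustrative training step on (fake) client data.
x = torch.randn(16, 32)   # 16 examples, 32 features
y = torch.randn(16, 1)    # target, e.g., realized price or demand
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```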

That's one of the other generalization techniques, because AI problems in the enterprise generally have sparser datasets than if you're trying to separate cat pictures from dog pictures. So, data sparsity is a constant challenge. I think transfer learning is one of the key strategies to deal with that.

You mentioned in passing, looking at things like games. I've often thought that was kind of a good litmus test for figuring out where to apply the technology, because games have points, and they have winners, and they have turns, and they have losers. They have structure to them. If that case study you just gave us was a game, what was the point in that? Was it a dollar of profit? Because you were like, “Well, the plane could stay where it is, or it could fly here, where it might have a better chance to get somebody. But that's got this cost. It wears out the plane, so the plane has to be depreciated accordingly.” What is the game it's playing? How do you win the game it's playing?

That's a really great question. For XOJET, we actually created a tree of metrics, but at the top of the tree is something called fleet contribution, which is: what's the profit generated per period of time, for the entire fleet? Then, you can decompose that down to how many jets are flying, the length of haul, and the yield, which is the amount of dollars per hour flown. There's also, obviously, a customer relationship component to it. You want to make sure that you get really good customers, and that you can serve them well. But there are very big differences between games and real-life business. Games have a finite number of moves, and the rules are well-defined. If you look at Deep Blue or AlphaGo, or Arthur Samuel's checkers program, or even Libratus – all of these were two-player games. In the enterprise, you typically have tens, sometimes hundreds of players in the game, with undefined sets of moves. So, in one sense, it's a lot more complicated. The idea is, how do you reduce it so it is game-like? That's a very good question.
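To make that metric tree concrete, one plausible decomposition of fleet contribution, consistent with the factors he names but not spelled out in the interview, is:

```latex
\text{fleet contribution} \;\approx\;
\underbrace{N_{\text{jets flying}}}_{\text{utilization}}
\times
\underbrace{\text{hours flown per jet}}_{\text{length of haul}}
\times
\underbrace{\text{\$ per hour flown}}_{\text{yield}}
\;-\; \text{operating costs}
```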

So, do you find that most people come to you with a defined business problem, and they’re not really even thinking about “I want some of this AI stuff. I just want my planes to be where they need to be.” What does that look like in the organization that brings people to you, or brings people to considering an artificial intelligence solution to a problem?

Typically, clients will see our success in one area, and then want to talk to us. For instance, we have a really great relationship with a steel company in Arkansas, called Big River Steel. With Big River Steel, we're building the world's first learning steel mill, which will learn from their sensors and be able to do all kinds of predictions and recommendations. It goes through that same sense, propose, predict and score loop. So, when people heard that story, we got a lot of calls from steel mills. Now, we're kind of deluged with calls from steel mills all over the world, saying, “How did you do that, and how do we get some of it?”

Typically, people hear about us because of AI. We're a product company, with applications, so we generally don't go in from a consulting point of view and say, “Hey, what's your business problem?” We will generally go in and say, “Here are the ten areas where we have expertise and technology to improve business operations,” and then we'll qualify whether it applies to a company or not. One other thing is that AI follows the scientific method, so it's all about hypothesis, test, hypothesis, test. So it is possible that an AI application that works for one company will not work for another company. Sometimes, it's the datasets. Sometimes, it's just a different circumstance. So, I would encourage companies to be launching lots of hypotheses, using AI.

Your website has a statement quite prominently: “AI is not magic. It's data.” While I wouldn't dispute it, I'm curious. What were you hearing from people that caused you to… or maybe hypothetically – you may not have been in on it – but what do you think is the source of that statement?

I think there’s a tremendous amount of hype and B.S. right now out there about AI. People anthropomorphize AI. You see robots with scary eyes, or you see crystal balls, or you see things that – it’s all magic. So, we’re trying to be explainers in chief, and to kind of de-mystify this, and basically say it’s just data and math, and supercomputers, and business expertise. It’s all of those four things, coming together.

We just happen to be at the right place in history, where there are breakthroughs in those areas. If you look at computing power, I would single that out as the thing that’s made a huge difference. In April of last year, NVIDIA released the DGX-1, which is their AI supercomputer. We have one of those in our data center, that in our platform we affectionately call “the beast,” which has a petaflop of computing power.

To put that into perspective: the fastest supercomputer in the world in the year 2000 was ASCI Red, which had one teraflop of computing power. There was only one in the world, and no company in the world had access to it.

Now, with the supercomputing that’s out there, the beast has 1,000 times more computing power than the ASCI Red did. So, I think that’s a tremendous breakthrough. It’s not magic. It’s just good technology. The math behind artificial intelligence still relies largely on mathematical breakthroughs that happened in the ‘50s and ‘60s. And of course, Thomas Bayes, with Bayes’ Theorem, who was a philosopher in the 1700s.
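For reference, Bayes' Theorem – the 18th-century result Pratt mentions – relates a posterior belief to a prior and a likelihood:

```latex
P(H \mid D) \;=\; \frac{P(D \mid H)\, P(H)}{P(D)}
```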

There's been a lot of good work recently around different variations on neural nets. We're particularly interested in long short-term memory networks and convolutional neural nets. But a lot of the math has been around for a while. In fact, that's why I don't think we're going to hit general intelligence any time soon. It is true that we have had exponential growth in computing power, and exponential growth in data, but it's been very linear growth in mathematics, right? If we start seeing AI algorithms coming up with breakthroughs in mathematics that we simply don't understand, then I think the antennas can go up.

So, if you have your DGX-1, at a petaflop, and in five years you get something that's an exaflop – 1,000 times faster than that – could you actually put that to use? Or is it that, at some point, the jet company only has so much data, there are only so many different ways to crunch it, and we already have all of the processing power we need? Is that the case? Or would you still pay dearly to get a massively faster machine?

We could always use more computing power, even with the DGX-1. For instance, we're working with a distribution company where we're generating 500,000 models a day for them, crunching on massive amounts of data. If you have massive datasets to process, it takes a while. I can tell you, life is a lot better now. In the '90s, we were working on a neural net for the Coast Guard, to try to determine which ships off of the west coast were bad guys. They were very simple neural nets. You would hit return, and it would usually crash. It would run for days and days, be very, very expensive, and it just didn't work.

Even if it came up with an answer, the ships were already gone. So, we could always use more computing power. I think right now, the bigger limitation is on the data side, related to the fact that companies are throwing out data they shouldn't be throwing out. Take customer relationship management systems, for instance. Typically, when you have an update to a customer, it overwrites the old data. That is really, really important data. I think coming up with a proper data strategy, and understanding the value of data, is really, really important.

On this theme of “AI is not magic, it's data”: when you go into an organization, and you're discussing their business problems with them, what do you think are some of the misconceptions you hear about AI in general? You said it's overhyped – glowing-eyed robots and all of that. From an enterprise standpoint, what is it that you think people are often getting wrong?

I think there are a couple of fundamental things that people are getting wrong. One is a tremendous over-reliance on, and over-focus on, unstructured data – people are falling in love with natural language processing and thinking that that's artificial intelligence. While it is true that NLP can help with judging things like consumer sentiment or customer feedback, or trend analysis on social media, generally those are pretty weak signals. I would say, don't follow the shiny object. I think the reason people see it that way is the success of Siri and Alexa, and people see that as AI. It is true that those are learning algorithms, and those are effective in certain circumstances.

I think they’re much less effective when you start getting into dialogue. Doing dialogue management with humans is extraordinarily difficult. Training the corpus of those systems is very, very difficult. So, I would say stay away from chatbots, and focus mostly on structured data, rather than unstructured data. I think that’s a really big one. I also think that focusing on the supply side of a company is actually a much more fruitful area than focusing on the demand side, other than sales forecasting. The reason I say that is that the interactions between inbound materials and production, and distribution, are more easily modeled and can actually make a much bigger difference. It’s much harder to model things like the effect of a promotion on demand, although it’s possible to do a lot better than they’re doing now. Or, things like customer loyalty; like the effect of general advertising on customer loyalty. I think those are probably two of the big areas.

When you see large companies being kind of serious about machine learning initiatives, how are they structuring those in the organization? Is there an AI department, or is it in IT? Who kind of “owns” it? How are its resources allocated? Are there a set of best practices, that you’ve gleaned from it?

Yes. I would say there are different levels of maturity. Obviously, the vast majority of companies have no organization around this, and it is individuals taking initiative and experimenting by themselves. IT in general has not taken a leadership role in this area. I think, fundamentally, that's because IT departments are poorly designed. The CIO job needs to be two jobs: there needs to be a Chief Infrastructure Officer and a Chief Innovation Officer. One of those jobs is to make sure that the networks are working, the data center is working, and people have computers. The other job is: how are advances in technology helping the company? There are some companies that have Chief Data Officers. I think that's also caused a problem, because they're focusing more on big data, and less on what you actually do with that data.

I think with the most advanced companies – first of all, it's interesting, because it's following the same trajectory that information technology organizations followed in companies. First, it's kind of anarchy. Then, there's a centralized group. Then, it goes to a distributed group. Then, it goes to a federated group – federated meaning there's a central authority which basically sets standards and direction, but each individual business unit has its representatives. So, I think we're going to go through a whole bunch of gyrations in companies, until we end up where most technology organizations are today, which is: there is a centralized IT function, but each business unit also has IT people in it. I think that's where we're going.

And then, the last question along these lines: Do you feel that either: A) machine learning is doing such remarkable things, and it’s only going to gain speed, and grow from here, or B) machine learning is over-hyped to a degree that there are unrealistic expectations, and when disappointment sets in, you’re going to get a little mini AI winter again. Which one of those has more truth?

Certainly, there is a lot of hype about it. But I think if you look at the reality of how many companies have actually implemented learning algorithms – AI, ML, data science – across the operations of their company, we're at the very, very beginning. If you look at it as a sigmoid, or an s-curve, we're just approaching the first inflection point. I don't know of any company that has fully deployed AI across all parts of their operations. I think ultimately, executives in the 21st century will have many, many learning algorithms to support them in making complex business decisions.
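For readers unfamiliar with the term, the s-curve (sigmoid) he invokes is commonly written as a logistic function, where L is the saturation level, k the growth rate, and t_0 the midpoint:

```latex
f(t) \;=\; \frac{L}{1 + e^{-k\,(t - t_0)}}
```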

I think the company that clearly has exhibited the strongest commitment to this, and is furthest along, is Amazon. If you wonder how Amazon can deliver something to your door in one hour, it’s because there are probably 100 learning algorithms that made that happen, like where should the distribution center be? What should be in the distribution center? Which customers are likely to order what? How many drivers do we need? What’s the route the driver should take? All of those things are powered by learning algorithms. And you see the difference, you feel the difference, in a company that has deployed learning algorithms. I also think if you look back, from a societal point of view, that if we’re going to have ten billion people on the planet, we had better get a lot more efficient at the consumption of natural resources. We had better get a lot more efficient at production.

I think that means moving away from static business rules that were written years ago and are only marginally relevant, to learning algorithms that are constantly optimizing. And then, we'll have a chance to get rid of what the Hackett Group says is an extra trillion dollars of working capital – basically inventory – sitting in companies. And we'll be able to serve customers better.

You seem like a measured person, not prone to wild exaggeration. So, let me run a question by you. If you had asked people in 1995, if you had said this, “Hey, you know what? If you take a bunch of computers, just PCs, like everybody has, and you connected them together, and you got them to communicate with hypertext protocol of some kind, that’s going to create trillions and trillions and trillions and trillions and trillions of dollars of wealth.” “It’s going to create Amazon and Google and Uber and eBay and Etsy and Baidu and Alibaba, and millions of jobs that nobody could have ever imagined. And thousands of companies. All of that, just because we’re snapping together a bunch of computers in a way that lets them talk to each other.” That would have seemed preposterous. So, I ask you the question; is artificial intelligence, even in the form that you believe is very real, and what you were just talking about, is it an order of magnitude bigger than that? Or is it that big, again? Or is it like “Oh, no. Just snapping together, a bunch of computers, pales to what we are about to do.” How would you put your anticipated return on this technology, compared to the asymmetrical impact that this seemingly very simple thing had on the world?

I don’t know. It’s really hard to say. I know it’s going to be huge. Right? It is fundamentally going to make companies much more efficient. It’s going to allow them to serve their customers better. It’s going to help them develop better products. It’s going to feel a lot like Amazon, today, is going to be the baseline of tomorrow. And there’s going to be a lot of companies that – I mean, we run into a lot of companies right now that just simply resist it. They’re going to go away. The shareholders will not tolerate companies that are not performing up to competitive standards.

The competitive standards are going to accelerate dramatically, so you’re going to have companies that can do more with less, and it’s going to fundamentally transform business. You’ll be able to anticipate customer needs. You’ll be able to say, “Where should the products be? What kind of products should they be? What’s the right product for the right customer? What’s the right price? What’s the right inventory level? How do we make sure that we don’t have warehouses full of billions and billions of dollars worth of inventory?”

It's very exciting. I'm generally really bad at guessing years, but I know it's happening now, I know we're at the beginning, and I know it's accelerating. If you forced me to guess, I would say, “10 years from now, the Amazon of today will be the baseline.” It might even be shorter than that. If you're not deploying hundreds of algorithms across your company that are constantly optimizing your operations, then you're going to be trailing behind everybody, and you might be out of business.

And yet my hypothetical 200-person company shouldn’t do anything today. When is the technology going to be accessible enough that it’s sort of in everything? It’s in their copier, and it’s in their routing software. When is it going to filter down, so that it really permeates kind of everything in business?

The 200-person company will use AI, but it will be in things like, I think database design will change fundamentally. There is some exciting research right now, actually using predictive algorithms to fundamentally redesign database structures, so that you’re not actually searching the entire database; you’re just searching most likely things first. Companies will use AI-enabled databases, they’ll use AI in navigation, they’ll use AI in route optimization. They’ll do things like that. But when it comes down to it, for it to be a good candidate for AI, in helping make complex decisions, the answer needs to be non-obvious. Generally with a 200-person company, having run a company that went from 2 people to 20 people, to 200 people, to 2,000 people, to 20,000 people, I’ve seen all of the stages.
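The database research Pratt alludes to sounds like the “learned index” line of work; purely as a toy illustration (not any specific product), a simple model can predict roughly where a key sits in sorted data, so a lookup only searches the most likely positions first:

```python
import bisect
import numpy as np

def build_learned_index(sorted_keys: np.ndarray):
    """Fit a simple linear model mapping key -> approximate position."""
    positions = np.arange(len(sorted_keys))
    slope, intercept = np.polyfit(sorted_keys, positions, deg=1)
    # Record the worst-case prediction error so lookups know how far to search.
    predicted = slope * sorted_keys + intercept
    max_err = int(np.ceil(np.max(np.abs(predicted - positions))))
    return slope, intercept, max_err

def lookup(sorted_keys: np.ndarray, key: float, index) -> int:
    """Search only the window around the predicted position."""
    slope, intercept, max_err = index
    guess = int(round(slope * key + intercept))
    lo = max(0, guess - max_err)
    hi = min(len(sorted_keys), guess + max_err + 1)
    i = lo + bisect.bisect_left(sorted_keys[lo:hi].tolist(), key)
    return i if i < len(sorted_keys) and sorted_keys[i] == key else -1

keys = np.sort(np.random.default_rng(0).uniform(0, 1e6, size=10_000))
idx = build_learned_index(keys)
print(lookup(keys, keys[1234], idx))  # -> 1234
```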

At a 200-person company, you can kind of brute-force it. You know everybody. You've just crossed Dunbar's number, so you kind of know everything that's going on, and you have a good feel for things. But like you said, I think using other people's technologies that are driven by AI, for the things that I talked about, will probably apply to a 200-person company.

With your jet company, you did a project, and EBITDA went up 5%, and that was a big win. That was just one business problem you were working on. You weren't working on where they buy jet fuel, or where they print – nothing like that. So presumably, over the long haul, the technology could be applied in that organization in a number of different ways. If we have a $70 trillion economy in the world, what percentage improvement do you think we're looking at – 5% seems easy – how much could we grow that economy, just by the efficiencies that machine learning can provide?

Wow. The way to do that is to look at an individual company, and then sort of extrapolate. The way I look at it is shareholder value, which is made up of revenue, margins and capital efficiency. I think that revenue growth could take off – it could probably double from what it is now. On margins, it will have a dramatic impact. If you look at all of the different things you could do within the company, and you had fully deployed learning algorithms, and gotten away from making decisions on yardsticks and averages, a typical company could, I'll say, double its margins.

But the home run is in capital efficiency, which not too many people pay attention to, and is one of the key drivers of return on invested capital, which is the driver of general value. This is where you can reduce things 30%, things like that, and get rid of warehouses of stuff. That allows you to be a lot more innovative, because then you don’t have obsolescence. You don’t have to push products that don’t work. You can develop more innovative products. There are a lot of good benefits. Then, you start compounding that year over year, and pretty soon, you’ve made a big difference.
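For readers who don't track the finance side: return on invested capital, which Pratt calls out as a key value driver, is conventionally defined (a standard definition, not something spelled out in the interview) as:

```latex
\text{ROIC} \;=\; \frac{\text{net operating profit after tax (NOPAT)}}{\text{invested capital}}
```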

Right, because doubling margins alone doubles the value of all of the companies, right?

It would, if you projected it out over time. Yes. All else being equal.

Which it seldom is. It’s funny, you mentioned Amazon earlier. I just assumed they had a truck with a bunch of stuff on it, that kept circling my house, because it’s like every time I want something, they’re just there, knocking on the door. I thought it was just me!

Yeah. Amazon Prime Now came out – was it last year? – in the Bay Area. My daughter ordered a pint of ice cream and a tiara. An hour later, a guy is standing at the front door with a pint of ice cream and a tiara. It's like, wow!

What a brave new world, that has such wonders in it!

Exactly!

As we’re closing up on time here, there are a number of people that are concerned about this technology. Not in the killer robot scenario. They’re concerned about automation; they’re concerned about – you know it all. Would you say that all of this technology and all of this growth, and all of that, is good for workers and jobs? Or it’s bad, or it’s disruptive in the short term, not in the long term? How do you size that up for somebody who is concerned about their job?

Moving sort of big picture to small picture: first of all, this is necessary for society, unless we stop having babies. We need to do this, because we have finite resources, and we need to figure out how to do more with less. I think the impact on jobs will be profound. I think it will make a lot of jobs a lot better. In AI, we say it's augment, amplify and automate. Right now, the things we're doing at XOJET really help make the people in revenue management a lot more powerful, and, I think, enjoy their jobs a lot more, doing a lot less routine research and grunt work. So, they actually become more powerful; it's like they have superpowers.

I think that there will also be a lot of automation. There are some tasks that AI will just automate, and just do, without human interaction. But a lot of decisions – in fact, most decisions – are better if they're made with an algorithm and a human, to bring out the best of both. I do think there's going to be a lot of dislocation. I think it's going to be very similar to what happened in the automotive industry, and you're going to have pockets of dislocation that are going to cause issues. Obviously, the one that's talked about the most is the driverless car. If you look at all of the truck drivers – I think probably within a decade, for most cross-country trucks, there's going to be some person sitting in their house, in their pajamas, with nine screens in front of them, and they're going to be driving nine trucks simultaneously, just monitoring them. And that's the number one job of adult males in the U.S. So, we're going to have a lot of displacement. I think we need to take that very seriously, and get ahead of it this time, as opposed to chasing it. But I think overall, this is also going to create a lot more jobs, because it's going to make more successful companies. Successful companies hire people and expand, and I think there are going to be better jobs.

You're saying it all eventually comes out in the wash; that we're going to have more, better jobs, and a bigger economy, and that's broadly good for everyone, but there are going to be bumps in the road along the way. Is that what I'm getting from you?

Yes. I think it will actually be a net positive – a significant net positive. But there is a little bit of what economists would call “creative destruction.” As you go from agricultural, to industrial, to knowledge workers, toward sort of an analytics-driven economy, there are always massive disruptions. I think one of the things that we really need to focus on is education, and also trade schools. There is going to be a much larger need for plumbers and carpenters and those kinds of things. Also, if I were to recommend what someone should study in school, I would say study mathematics. That's going to be the core of the breakthroughs in the future.

That's interesting. Mark Cuban was asked that question also. He says the first trillionaires are going to be in AI. And he said philosophy, because in the end, what you're going to need is what people know how to do. Only people can impute value, and only people can do all of that.

Wow! I would also say behavioral economics; understanding what humans are good at doing, and what humans are not good at doing. We’re big fans of Kahneman and Tversky, and more recently, Thaler. When it comes down to how humans make decisions, and understanding what skills humans have, and what skills algorithms have, it’s very important to understand that, and to optimize that over time.

All right. That sounds like a good place to leave it. I want to thank you so much for a wide-ranging show, with a lot of practical stuff, and a lot of excitement about the future. Thanks for being on the show.

My pleasure. I enjoyed it. Thanks, Byron.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.