Episode 71: A Conversation with Paul Daugherty

In this episode Byron and Paul talk about AI, transfer learning, consciousness and Paul's book "Human + Machine: Reimagining Work in the Age of AI."


Guest

Paul Daugherty holds a degree in computer engineering from the University of Michigan, and is currently the Chief Technology and Innovation Officer at Accenture.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm. Today my guest is Paul Daugherty. He is the Chief Technology and Innovation Officer at Accenture. He holds a computer engineering degree from the University of Michigan. Welcome to the show Paul.

Paul Daugherty: It's great to be here, Byron.

Looking at your dates on LinkedIn, it looks like you went to work for Accenture right out of college and that was a quarter of a century or more ago. Having seen the company grow… What has that journey been like?

Thanks for dating me. Yeah, it's actually been 32 years, so I guess I'm going on a third of a century. I joined Accenture back in 1986, and the company's evolved in many ways since then. It's been an amazing journey because the world has changed so much since then, and a lot of what's fueled the change in the world around us has been what's happened with technology. [In] 1986 the PC was brand new, and we went from that to networking and client-server and the Internet, cloud computing, mobility, the internet of things, artificial intelligence and the things we're working on today. So it's been a really amazing journey, fueled by the way the world's changed and enabled by all this amazing technology.

So let's talk about that, specifically artificial intelligence. I always like to get our bearings by asking you to define either artificial intelligence or if you're really feeling bold, define intelligence.

I'll start with artificial intelligence, which we define as technology that can sense, think, act and learn. [It's] systems that can do that, so sense: like vision in a self-driving car; think: making decisions on what the car does next; act: actually steering the car; and learn: continuously improving behavior. So that's the working definition that we use for artificial intelligence, and I describe it more simply to people sometimes as fundamentally technology that has more human-like capability, approximating the things that we're used to assuming and thinking only humans can do: speech, vision, predictive capability and some things like that.

So that's the way I define artificial intelligence. Intelligence I would define differently, and more broadly. I'm not an expert in neuroscience or cognitive science or anything, but I define intelligence generally as the ability to both reason and comprehend, and then extrapolate and generalize across many different domains of knowledge. And that's what differentiates human intelligence from artificial intelligence, which is something we can get a lot more into. Because we call this body of work artificial intelligence, both the word artificial and the word intelligence, I think, lead to misleading perceptions of what we're really doing.

So, expand that a little bit. You said that's the way you think human intelligence is different from artificial; put a little flesh on those bones. In exactly what way do you think it's different?

Well, you know, the techniques we're really using today for artificial intelligence are generally from the branch of AI around machine learning: machine learning, deep learning, neural nets, etc. And it's a technology that's very good at using and recognizing patterns in data to learn from observed behavior, so to speak. It's not necessarily intelligence in a broad sense; it's the ability to learn from specific inputs. And you can think about that almost as idiot-savant-like capability.

So yes, I can use that to develop AlphaGo to beat the world's Go master, but then that same program wouldn't know how to generalize and play me in tic-tac-toe. And that ability, the intelligence to generalize and extrapolate rather than interpolate, is what differentiates human intelligence. The thing that would bridge that gap would be artificial general intelligence, which we can get into a little bit, but we're not at that point of having artificial general intelligence. We're at a point of artificial intelligence that can mimic very specific, very specialized, very narrow human capabilities, but it's not yet anywhere close to human-level intelligence.

Do you think that the narrow AI we have is on an evolutionary pathway to an AGI, or is an AGI like, “Oh no, no, we haven't started building that; that's going to be something different. You can't just study a bunch of data about the past, make predictions about the future, and have an emergent general intelligence somehow come out of that”? So do you think we're 1% of the way down a path, or do you think it's a different path to get to AGI?

It's a nonlinear and, I think, a very different path to get to artificial general intelligence. One of the things you have to deal with here is that there really is no generally accepted definition even of artificial general intelligence. Some would say it's just the ability to do some simple cross-domain knowledge or understanding, or the ability to do more generalized, broad-domain search, and they'd say that's artificial general intelligence. I would say that's not; it's just better search technology, or still narrow forms of AI.

And the techniques we have today aren't the ones that are going to evolve and give us what I would define as true AGI, which is real cross-domain, human-like ability to contextualize, generalize, extrapolate, etc. So we need breakthroughs in a lot of new and different domains and technologies to get us to AGI. It's not going to be all machine learning-based technology. We need contextual reasoning, symbolic reasoning and other types of knowledge representation technologies beyond the learning-based technologies we have today to truly get us to AGI. Those breakthroughs are still ahead of us, which is why I believe we're quite a distance away from true AGI capability, if defined in that way.

So, when you talk about extrapolating and generalizing and all of that, broadly speaking you're talking about transfer learning, correct?

Yeah, transfer learning, but in a broad and expansive sense, rather than transfer learning as a machine learning technique to map narrowly from domain to domain.
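
To make that narrow, technical sense concrete: in machine learning, transfer learning typically means reusing a model trained on one domain as the starting point for a related one. A minimal sketch in Python (PyTorch and torchvision assumed; the model choice and the two-class target task are illustrative, not anything discussed in the interview):

```python
# A minimal sketch of transfer learning in the narrow, technical sense:
# reuse a network pretrained on one domain (ImageNet) as the starting
# point for another. Model choice and the 2-class target task are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on the source domain (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final classifier for the new target domain,
# e.g. a hypothetical two-class cats-vs-dogs task.
model.fc = nn.Linear(model.fc.in_features, 2)

# Train just the new head on the target-domain data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```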

Right, so I guess you can train a child [by] showing them five drawings of a cat, and then they'll recognize cats, and when they see a Manx they'll even say, “Oh, that's a cat without a tail,” even though they'd never been taught that there was a cat without a tail. How do you think it is that we're so good at that and machines are so bad at it? What do you think we're doing that makes it all come effortlessly to us? It's just how we're wired, but we can't seem to replicate that in a machine; like you said, everything we do is incredibly narrow.

Yeah, it's a great question, and it's an area that's of great interest to me, but I would say I don't have the answer to that question. Cognitive science and neuroscience have always been interest areas of mine, but I'm not the deepest expert in those areas. The thing that got me hooked on AI, though, was many years ago, back in college. [In] 1984, I was at the University of Michigan and took a course with Douglas Hofstadter, who you may know of, who wrote the book Gödel, Escher, Bach: An Eternal Golden Braid. He was an early pioneer in cognitive science, and that got me hooked on this whole idea of how the brain works and how you can start to do some things in software leveraging the way the brain works. But fundamentally, we don't know enough about how some of these things work.

And we certainly are far from being able to embody that in software that we can then generalize, so I think we're still a ways off from that. I think the interesting development going forward is going to be how advances in neuroscience, in areas like cognitive neural prosthetics, enhancing the brain's capability, compare and maybe compete with artificial intelligence in the form of silicon-based techniques. As we improve our brain's biological capability, what's also happening with artificial intelligence and the techniques that lead us to AGI? It's going to be interesting to see both continue to evolve, because we're going to continue to advance our own human capability very rapidly, in addition to what we're doing in silicon and computing with artificial intelligence.

And the other thing we seem to do a really good job at is pattern matching. You can lie on a hillside with somebody, look up at a cloud and say, “Oh look, there's a horse,” and they say, “Oh yeah, I see that,” even though it's the vaguest of horses, right? So we seem to have very fluid pattern matching, and yet our machine instantiations of it seem very rigid. Do you think we're on an evolutionary path to get past that, or do we need a fundamental breakthrough there?

From what I see, I think the pattern matching will continue to evolve very rapidly, and already machine learning and deep learning techniques are better at many types of pattern matching than people are. They can go across a broad data expanse that's incomprehensible and hard for us to interpret as humans, and detect the patterns in it.

For example, [in] work we're doing on anti-money laundering in banks, you can take petabytes of information and transactions, run algorithms, and quickly surface correlations and patterns that may represent bad behavior, criminal behavior, in the transactions. [It] would be really, really difficult for a human to comprehend and understand those patterns, but very possible for machines and algorithms to do. So I think we're already at a point where, in many ways, the pattern matching of machines is superior to what we can do as humans.
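
To illustrate the kind of pattern detection Paul describes, here is a toy sketch that flags anomalous transactions for human review. The features, scales and detector settings are illustrative assumptions, not any bank's actual model:

```python
# A toy sketch of anti-money-laundering pattern detection: an
# unsupervised detector flags anomalous transactions at a scale no
# human could review line by line. All features and numbers here are
# made up for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical transaction features: amount, hour of day, accounts touched.
normal = rng.normal(loc=[100, 13, 2], scale=[50, 4, 1], size=(10_000, 3))
suspicious = rng.normal(loc=[9_500, 3, 12], scale=[500, 1, 3], size=(10, 3))
transactions = np.vstack([normal, suspicious])

# Fit an unsupervised anomaly detector over all transactions.
detector = IsolationForest(contamination=0.001, random_state=0)
flags = detector.fit_predict(transactions)  # -1 marks outliers

# In the human + machine spirit, flagged items go to an investigator.
print(f"{(flags == -1).sum()} transactions flagged for human review")
```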

So one last philosophical question: Do you think that human consciousness, our ability [to] actually experience the world—we feel warmth, we don't measure temperature—we have a first-person experience of the world, do you think that is a key element of our generalized intelligence? Or do you think that we'll be able to, maybe in 100 years, build an AGI, but it won't necessarily need consciousness to be as smart and versatile as we are?

Yeah, so it's a fascinating question, something I'm doing some work on myself, and we're doing some work in our research labs around it too, just to understand the dimensions of what that question even is. But in the short term, the foreseeable future, looking out 10-20 years and beyond, I don't see a vein of research that's leading us to an understanding of consciousness and its basic elements. That is something we're still trying to understand from a neuroscience perspective, much less model precisely from a software perspective, so I think we're quite a ways off from having anything that approximates any form of consciousness in a silicon-based, algorithmic form.

Well, you know, it has been called the last great scientific question that we know neither how to ask scientifically nor what the answer will look like. I just had a book come out on the topic last month, about whether machines can become conscious, and I distill down eight different schools of thought about the origins of consciousness and try to speculate whether a machine could… whether that could be represented in a machine. But it's just... it's either humbling or embarrassing, I don't know which, that we understand our most basic experience of the world so poorly.

I think it's actually exciting that we don't understand it. So, maybe just flipping the lens a little bit: what makes life dynamic and interesting and very experiential is the way we understand, interact with and interpret the world. I think consciousness is at the heart of what makes us human, and the fact that it's a bit of a mystery is, I think, okay and not really a surprise. But it's going to be interesting to watch the science of this move rapidly and see if we can distill some of the components of what makes up consciousness down to something we can do more algorithmically. Again, though, I think we're a ways away from that truly happening.

You had a book come out recently, which I'm holding here in my hand, called Human + Machine: Reimagining Work in the Age of AI. Can you talk about what question you were trying to answer with this book? Why did you write it?

I think it's a good companion piece to your book, perhaps. The reason we wrote the book is that when we stepped back two years ago, we were already doing a lot of work with artificial intelligence, and had a lot of research and real activity going on with it, but we didn't think there was a good book out there that gave guidance to business executives, business managers, and people in business and IT professions on what to do with AI: how do you apply it to your business, what's the roadmap to getting business value out of it, what's the way to do it right, building a foundation to really improve your business and create growth, profitability, competitiveness [or] whatever your business wants, while avoiding what could be some of the unique downsides or risks that can come about with artificial intelligence.

So that's why we wrote Human + Machine, and the other key part of Human + Machine that we quickly identified through the research and activity that we did is the plus sign in it. We quickly concluded that it's really about the human plus the machine, and how we bring individual human capability together with AI software technology to create new ways of working. That ‘human + machine’ idea is really the central idea that carries through the book.

Isn't that what Kasparov concluded about chess, that it was going to be an awesome chess program paired with an awesome player or players that would, in the end, be the very best?

Yeah, we see that in profession after profession; that's the case. As part of the process of writing the book, we researched over 1,500 organizations and talked to thousands of executives as well as workers at all different levels, from low-skilled to more highly skilled professions, looking at the way work was being done and what organizations were doing. The conclusion, time and again, is that the really interesting applications, the ones driving differentiated business results, were those that paired the two in the way you just described that Kasparov concluded with chess.

An example from Harvard is the medical research they did on breast cancer tumor detection, where they found the algorithms were about 92% accurate and human diagnosis, a human doctor, at this point is about 96% accurate, but pair the two together and it's about 99.5% accurate. So the best results come from the human plus the machine working together making the decision. We see that replicated in many, many examples, and that's the heart of this human + machine perspective and a lot of the central ideas of the book.

So, walk me through some of your findings. If I'm an enterprise today, and I've heard about this AI thing and want to know how I can apply the technology to my business, what's the first question I ask, or the first thing I assess?

I think the first thing you assess is where you apply it in the business, and that's an important decision. What problem can it solve, or what part of the business have you not been able to improve in the way you wanted to, or what's really critical to your differentiation? Then look at how you can take on that problem and come up with a different way of approaching it. We call that 'reimagination,' and that's another word in the title of the book: 'reimagining work.' We believe this is fundamentally different from the prior generations of technology.

We've been automating and reengineering business for a while, but that's about flow charts and static, sequential processes and incremental improvements in the way we work. Reimagining is about how you fundamentally create breakthrough new approaches you couldn't have created before. An example is the life sciences industry: with pharmaceutical companies, we're looking at and working with deep learning models as a different way of doing the science of identifying new therapeutic treatments.

So you can use deep learning to identify disease characteristics, match them against compound characteristics, and better detect the patterns, as we were talking about earlier, that lead to new treatments for disease. That's finding drugs faster, using a very different approach: injecting AI into what was previously more of a pure science-based approach and fusing the two together. That's changing the R&D process in that industry, and those are the kinds of results that we see making real differences. That's the first step that we talked about.

And you know, I think there are three completely different strata of applying this technology. You have a very few very large companies, with an enormous amount of technical talent, that are able to do primary kinds of things in the field. Then you have a different-sized enterprise, a smaller one, which is able to use the platforms that are out there, the half dozen or dozen platforms, to create their own projects; again, they've got coding horsepower and all of that. And then you have a smaller enterprise, which needs to wait until that technology's instantiated in some tool that they use, some piece of software or hardware. If I'm a 100-person company or a 500-person company, where in that should I be thinking?

Yeah, I think you really have to think about two things, which [are] the algorithms and the data. For the algorithms and the tools you use, a lot of that's being democratized already, and we think that will continue more and more as AI evolves. So, for example, take Google: look at Google’s Vision API, a very powerful capability that's available to anybody who knows a little bit of Python, to use [for] powerful image recognition through a simple API and develop powerful new capabilities for their business. That's an example of what we call the democratization of AI. We see that happening across many, many different platforms and categories of application software.
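
As a concrete example of that democratization, a few lines of Python are enough to label an image with Google's Vision API. This sketch assumes the google-cloud-vision client library is installed and credentials are configured; the image path is a placeholder:

```python
# A minimal sketch of the democratized capability Paul cites: asking
# the Google Cloud Vision API to label an image. Assumes the
# google-cloud-vision package is installed and application credentials
# are configured; "storefront.jpg" is a placeholder path.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("storefront.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Ask the pretrained model what it sees in the image.
response = client.label_detection(image=image)
for label in response.label_annotations:
    print(f"{label.description}: {label.score:.2f}")
```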

What that means, to answer your question, is this isn't just going to be a tool for the big companies. Small companies, and even entrepreneurs starting companies, already have and increasingly will have very powerful tools available to solve their problems. And we see that type of innovation already: a lot of really interesting companies being formed, new startups that are using AI to disrupt individual industry segments or bring new products to market, leveraging the capability you get from the more powerful platforms.

The second thing, though, is the data, and that's what's going to become a real competitive differentiator for a lot of companies, given that with the current learning-based techniques, data is really the fuel. Data is the fuel for the AI engines we're developing, so access to the right amounts of data is going to be a real competitive differentiator. And at the start, I think large companies in many cases have an advantage, because they have vast data sets to use to train their algorithms.

For example, in a consumer-based industry like retail, tremendous information [is] available on existing consumers' buying patterns and trends and tendencies from their current systems, and they can use that to develop new AI-based products and services, better recommendations, etc. But as time goes on, there's going to [be] more IoT out in the world, and we'll have more common data sets available on some of the platforms. That will democratize those benefits for smaller companies as well, so it'll evolve as we go, but it's not as simple as saying the big companies are going to have the advantage going forward. We think there'll actually be a lot of creation and spread of entrepreneurial small companies, new companies being formed, able to leverage the powerful AI capabilities that exist.

So continuing through the thesis of your book, after that first step, what do you suggest companies do next, what comes after that initial assessment?

A big thing that you need to do is look at your workforce and prepare it. We think this is a really critical issue. We devote a number of chapters in the book just to preparing people in your workforce for AI, and there are two parts of it that we talk about. One is preparing the people who are going to do AI: you need to recruit the right people, and you need to think about the new jobs and skills that you need to develop AI within your company, whether you're a large or small organization.

And then there's a bigger category of people… how do you prepare all the people in your organization who need to use AI in different ways, the people whose jobs are going to change as a result of AI? That's a big issue, that's the majority of roles in most companies. So in our book, we're talking about both of those.

But I think the harder thing for many organizations is that latter piece: how do I change the roles of my customer success or customer care organization, my R&D organization, my back-office finance organization, etc., to take advantage of the new AI capabilities that we have? To answer that question, in the book we've defined six new categories of jobs that we believe are the new jobs of the AI future we're moving into, jobs that really don't exist today and that define the new ways we need to develop and train people to make sure we have the right workforce for this new age.

So it's about these new jobs, training people differently for them, and restructuring work, as we say in the book title, 'reimagining work' around these new jobs that we're starting to see created. I can talk about some examples of that if it's helpful.

Yeah let's do that.

Okay, so one example: there's one set of jobs that's about the people who are needed to help AI operate in the right way and more effectively. We call these trainers, explainers and sustainers; they all happen to rhyme, which is convenient for memory. Trainers, explainers and sustainers. These are new human roles, not necessarily technology roles, but roles where people are needed to manage algorithms. An example of a new trainer role that we're hiring into my company, Accenture, is behavioral and personality trainers for chatbots and virtual agents: behavioral training for the chatbots.

[The new trainer role is] not technically programming them; it's deciding, how do I want my persona represented through a chatbot or virtual agent to my consumers? Do you want it to be a little snarky, with an attitude, a bit of an edge to it, or do you want it to be more conservative? Many, many questions like that. These are skills that involve sociology, understanding of your customers, linguistics, many different dimensions. We have people with poetry backgrounds who tend to be good at this type of work, and it's a new job. It's not tagging data, which is the basic form of training for AI; it's the behavioral training that is going to represent the brand of your company on the front lines.

We have a phrase we use in the book and talk about with our clients a lot, which is that AI is becoming your brand. AI in the form of virtual agents, chatbots, the way you respond to customers: that is your company's brand, and you have to think very carefully about how that brand is being manifested to your consumers. The trainer job is one category of jobs.

Another category of jobs we talk about is the explainers. A good example of the need for this job is what we just saw with Uber, with the tragic incident in Tempe where an Uber car killed a pedestrian. Look at Uber's response to that, which I think was a very good response overall: it was about explaining what happened, doing the root-cause diagnostic, understanding what the problem was.

If you followed the recent articles and commentary, what happened was that the algorithm classified the pedestrian as a false positive and didn't react in time to slow down, because the algorithm was tuned improperly. We need new roles like explainers and sustainers, human roles that understand the implications of the algorithms and adjust them dynamically, so that you avoid incidents like that, and when you do have an incident, you respond very quickly and improve as a result of it. Those are the explainer and sustainer roles we're talking about.

So we believe these aren't just individual jobs, one or two here and there in organizations. These represent hundreds of thousands, millions of jobs. That's the scale across the economy around the world, and these are the jobs of the future as you look at AI.

And I'm with you with this ‘human +’ model for a lot of things, but in some cases it will just be an AI doing something, and in other cases AI's going to create hitherto unimaginable jobs, like the ones you're describing. How do you think that's all going to net out over the next decade or two? Do you think we're going to have a big turnover in the kinds of jobs, or is that overstated in the media? Do you think there are going to be transitional pains, or is this like every other innovation that's come before in terms of its impacts: we adjust, it morphs into a new thing, and then it happens again and again and again...?

It's a great question. I think, frankly, that's the biggest question our generation faces. Our view, through our research, through our experience working with thousands of companies, and through what we talk about in the book, is that we're cautiously optimistic on this front. We believe the issue isn't jobs: based on our experience and a lot of the independent research that's out there, we'll be able to continue to create lots of jobs.

In the US alone, as a data point, there are 6.5 million unemployed people and roughly a little over 6 million open jobs, close to the same number, and we created 170,000 jobs last month. According to the jobs report, we're creating about that number every month, and we already have pretty strong employment. So the issue isn't the jobs per se; it's how we get the people who lack the right skills [reskilled] to do these jobs. That's why we're not fully optimistic, but we're cautiously optimistic.

We believe the jobs will be there, but we're not on the right course to make sure all the people are prepared. So there will be more displacement, as you said. In the short-to-medium term, certain occupations will be at risk of more complete automation, and the people in those occupations are added to the unemployed. How do we make sure we're proactively reskilling those people in the right way? I'll come back to that in a minute, but that is what we believe is the biggest issue and one big reason we wrote the book. We're donating all the proceeds of the book to non-profits focused on mid-career reskilling, because we believe that's the area that's not getting the right focus and right investment at this stage.

There's a lot of focus, not enough, yes, but at least a lot of attention, on K-12 and higher ed and community college and apprenticeships, and lots of different models, which is great. But we need to think about the people in the middle of their careers who are displaced, whose jobs are automated. How do we use technology and think of creative new programs to get those people productive? We don't have all the answers, but in the book we lay out the direction we need to take on those issues, and again, we're donating the proceeds to organizations dedicated to solving that problem.

This sort of mid-career disruption, though, is nothing new. When the assembly line came out, everybody who was crafting things one at a time... the assembly line had to look like an incredibly frightening kind of artificial intelligence to a craftsperson. When, in a very short period of time, we replaced all animal power with steam power, that was a huge change in the professions of a lot of people, and so forth, and there wasn't this [worry about] how we were going to help people re-skill.

Now we're in an age where it's, in a sense, never been easier, right? Most people have access to the internet, and therefore to any number of free online resources for learning new skills. So why, in an era where we actually have more connectivity, do you think we're having, in your view, more problems getting the right skills to the right people?

Yeah, another very good question. And just to your point on the assembly line age, there's a great clip, and I'd encourage you and your listeners to go look at it: just search 'Charlie Chaplin, Modern Times factory' and you'll see a clip... [where he] literally gets sucked into the gears of the assembly line. So this fear, to your point, has always been there: the Luddites, and back at the turn of the 20th century, this fear of… it's a basic human instinct to fear new technology and the implications it's going to have, that it's going to put us out of work or have adverse consequences.

The reason we need to consider it more this time than the other times is the pace with which it's happening, because we have this exponential, combinatorial revolution in technology. We believe we're in an era now where AI's creating more change in the next 5 years than we've seen in the last 30. It's going to disrupt things more than the PC and the internet itself. So it's just the pace with which this is coming, which means that people aren't going to have a generation to prepare themselves; they only have a few years, and in that environment, how do we provide better support?

So the answer is: there are a couple of things we need to do. One is that businesses, we believe, and my company believes, need to take more accountability for retraining and ensuring the relevance of the skills of our people. At Accenture, we invest $1 billion a year in training, and our goal is to keep all of our people relevant and skilled in the technologies that are coming. Others, like AT&T, have made similarly large commitments to the same goal. We're seeing more and more organizations do that, but not enough are doing it yet.

For our book we did the research we talked about, and two thirds of the executives told us that they believed their organizations and their workforces are not ready for AI. Two thirds say their workforce is not ready! Yet only 3% say they're going to increase learning programs for their employees as a result. That's an unacceptable gap.

If two thirds believe their workforce isn't ready, more than 3% need to be investing in the learning to help prepare their people. If you take the view that you're just going to replace your people with others who can do it, we believe you're going to fail, because you're not going to find those people; in many cases, business is moving so fast that they don't exist.

It's about using your current human capital in the most effective way you can and creating those learning platforms. We also need better community-based and public-private sector collaboration on these things, because we're in a different economy than we were during the other transitions you talked about. To take the U.S. right now, and [there are] similar numbers in other countries, 50% of the American workforce is 1099, contingent, contractor labor.

It's the “gig” economy, and those people work for themselves, and in that environment the business isn't going to reskill them. We need other mechanisms to reskill those people as their gig economy roles become less relevant and AI and other forms of automation come online. That's where we need more public-private sector collaboration: [with] the kinds of organizations we're working with as we donate the proceeds of this book, and we're working actively with governments, with communities, and with educators on other solutions to that problem as well.

Well, I do want to dive a little deeper into this statement you made, because I don't see it. I may be shortsighted, but I'm optimistic about this technology, and I believe that in the end AI makes people smarter, and there's no downside to empowering everybody with more intelligence.

When you say 'we're going to see more change in the next 5 years than in the last 30,' presuming employment [is what] you're talking about: we began this conversation talking about 30 years ago, and you listed out all the stuff that's changed in the 30 years since you joined the company. Put some meat on that.

I want to know what you think in 5 short years from now is going to be as disruptive as the last 30 years of technology that we have experienced. So give me some solid things that will happen in 5 years that you think are going to collectively outweigh the last 30?

I think we're going to see it in different parts of business, in different parts of the consumer experience, and in how we use technology. One thing to think about is the smartphone itself. The iPhone is only 11 years old now; think about how dependent we've become on that technology in those 11 years, and how many business models have been reshaped around the smartphone. That's just 11 years. And if you think about the next generation of interfaces that we're developing, we have the Alexas, the Cortanas, the Google Voice, etc., whose capabilities are moving incredibly fast, and we're increasingly relying on these new voice-operated agents and different technologies to do more and more of what we could do.

So if we look even 5 years out, I think we're going to start seeing a bigger shift where the mode of interaction isn't going to be us typing with fingers on big keyboards or thumbs on our smartphone keyboards, which is a primitive interaction, but having much more human-like interaction with our technology. That's going to transform the way we do our jobs and work as well.

In call centers, for example, we'll have people using bots that sometimes answer calls directly, but also advise agents on how to better answer calls: wingmen to help them do their jobs more effectively. In maintenance jobs, we're already seeing the change in manufacturing. People who do maintenance on things like jet engines or wind turbines, rather than just going out with a manual checklist and checking the physical equipment, are using AI-powered digital-twin models to make dynamic decisions in real time, fueled by internet-of-things information feeds. Artificial intelligence can model what might happen with that equipment in the future and push big frontline decisions out to individual technicians, rather than having them made in the middle of an organization.
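
A toy sketch of that predictive-maintenance pattern: a feed of sensor readings scored against a stand-in for the digital twin's learned model, with the decision pushed to the technician. The sensor names, thresholds and scoring rule are illustrative assumptions, not any real system:

```python
# A toy sketch of IoT-fed predictive maintenance: score each sensor
# reading against a model of the equipment and push the decision to
# the frontline technician. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class TurbineReading:
    vibration_mm_s: float   # vibration velocity
    bearing_temp_c: float   # bearing temperature

def failure_risk(reading: TurbineReading) -> float:
    """Stand-in for the digital twin's learned model: returns a 0..1
    risk that the turbine needs service soon."""
    risk = 0.0
    risk += max(0.0, (reading.vibration_mm_s - 4.5) / 10.0)
    risk += max(0.0, (reading.bearing_temp_c - 80.0) / 40.0)
    return min(1.0, risk)

# Simulated feed; in practice this would stream from the equipment.
feed = [TurbineReading(3.1, 72.0), TurbineReading(7.8, 95.0)]
for i, reading in enumerate(feed):
    risk = failure_risk(reading)
    if risk > 0.5:
        print(f"Turbine {i}: risk {risk:.2f}, dispatch technician")
```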

So, dramatic changes in jobs and the way work happens: we're starting to see that, and it'll happen more broadly as we go through the next several years. That's what's going to create a lot of these disruptive impacts, and it means we're going to start seeing more of these job impacts than we collectively have over the last several years.

So, like every technology, AI is inherently neutral; it's how it's applied. And again, I'm an optimist about it: I think that, on net, there are more people in this world who want to build than destroy, and so, on net, technology is applied to good more often than not.

But do you worry, and I'm already thinking back to your earlier comments about anti-money laundering and the ability to go through so much data, about the privacy implications of that? Once every conversation can be understood, and our cameras can read lips as well as a human, and every email can be read, and all of that, do you worry that that same technology will be used to curtail individual liberty and freedom?

In the past, we've always had some amount of anonymity just by virtue of the fact that we all live in an incredibly large ocean of data that nobody could make sense of, and that same tool that can spot the money laundering can also spot political sentiment. Do you worry about that application of the technology, and how do you think we mitigate against it?

Well, to be clear on where you started the question, I'm very much an optimist on the power of the technology and where we're going with it, and on how we're improving human capability with technology. I talk in the book about [how] it gives us superpowers: AI's giving people superpowers in the way we work and live, and we're going to solve societal problems around agriculture, sustainability, better health outcomes and reduced disease that we couldn't have imagined solving. So [I'm] clearly an optimist about it. My only point on the jobs and skills is that we have to make sure we're bringing everyone along in an inclusive way to benefit from the changes.

In terms of the privacy thing and some of the risks around AI, yes, again, there are some real issues that we need to think about more carefully than we've needed to with prior technology. We call the area, broadly, responsible AI. Privacy is one element of it, about how you're respecting the data, and we talk in a couple of chapters in the book about a new obligation of companies to have very specific codes of responsible AI and how they're applied in technology. Bias in data is another big issue. Are the data and algorithms you're using biased at the core? Because all data has some bias in it, are you scaling unforeseen or unanticipated consequences of that bias?

There have been many examples of companies getting burned by that in the way they've applied AI. There's a transparency and explainability obligation that is especially important with the GDPR laws going into effect in Europe, and a few other things that we group into this responsible AI category. I think they're all solvable, so I'm an optimist on them, but they're only solvable if leaders in organizations give proper attention to putting the right guard rails in place, understanding the issues, and dealing with them properly.

And we think what's going to happen is that responsible AI is going to become a differentiator for the competitiveness of companies. Those that follow responsible policies are going to have more trust from their consumers, and that trust is a currently underutilized value that's going to have huge value going forward. Think about the AI algorithms we're talking about that monitor personal healthcare in real time, using your personal genomic information to offer you better medical insights. You're going to have to have immense trust in whoever you're giving that data to, in order to get the AI-powered services you want back. We're going to do that with organizations we trust, and those with demonstrably better responsible AI practices will, we think, have a competitive advantage. We've seen in the research we talk about in the book that there's evidence that companies taking that approach do have a higher return as they apply these types of values. So this issue of trust, and applying data in a way that instills more trust, is, I think, a basis for competitive differentiation among companies going forward.

It's interesting that you're saying the market's going to solve for it. So you don't really see a role for government legislation of responsible AI?

Well, I was at the White House meeting on AI and industry last week, where the White House convened a set of leaders. I'm spending more time in both houses of Congress, with both sides of the aisle, and I was in Canada last week, as in other countries, talking about the same issue. I do believe the public sector has a role to play. I believe we need our public sector, our politicians, our policymakers, the agencies and such, more educated on AI so they can guide more effectively.

So I do believe there's a role for government. I believe it's too early to regulate AI; I don't think we'd know how to define it or regulate it right now in a way that wouldn't impede progress. But I do believe there's a role that government can play in setting direction in a safe way.

A great example is what the Food and Drug Administration (FDA) in the U.S. did a little while ago, where they were in essence in a similar position. They can't regulate AI as such, but they recognize AI is a very important tool for the food and drug industries, the industries they regulate, so they provided a framework and guidelines for organizations on how to use artificial intelligence and the things they should be thinking about. That was empowering for industry, because rather than fearing a response from the FDA, they now have guidelines on how the FDA thinks they should be using artificial intelligence. That kind of guidance and cooperation between the public and private sectors, we think, is really instrumental in making sure we're not slowing down the progress of applying AI to business and solving these types of problems.

You made a statement earlier, [and] I wrote it down because I wanted to come back to it. You said, quote, 'all data has bias.' I'm curious if you would explain that, because in theory, if the data is true, the data is 100% a reflection of reality. I could see saying that how we interact with the data is biased, but how do you think data itself, which is presumably truth, is inherently biased?

Yeah... I’m a believer in being data-based in how we ground decisions and think about things, and I use the phrase ‘data is ground truth for how we think about things.’ But at the same time, when you use data in that way, you need to realize and understand the context of any data you have. If you've got a set of consumer data, it has a certain demographic representation of the consumers in whatever part of the business you collected it from, and in the way you've modeled and collected that data, etc. So if you take that data and apply it to a whole different population, jurisdiction or demographic set [of] customers, you may tailor your services in a way that disadvantages certain sets of customers, just from taking the data from one place to another. That's the kind of thing I'm talking about when I say all data is biased.

All data represents context, and if you apply it to a different context, it's going to reflect the nature of the context in which it was developed. There's also the human bias we all have: we all have our own biases individually, and that's reflected in the data we have, too. So as we apply AI, we need to make sure we're also using AI to test for bias. We're doing some very interesting research using things like adversarial neural nets to test the veracity and objectivity of what's coming out of the algorithms we're producing. I think that's a very interesting area for the future.
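
A far simpler illustration than the adversarial-net research Paul mentions, but in the same spirit: a basic check for the context shift he describes, flagging features whose deployment population drifts far from the training data. The column names and distributions below are made up:

```python
# A minimal sketch of the context problem: training data carries the
# demographic mix of where it was collected, and applying a model to a
# different population can skew results. This crude drift check is a
# stand-in for more sophisticated bias testing; all data is synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical training data collected from one region's customers.
train = pd.DataFrame({
    "age": rng.normal(34, 6, 5_000),
    "income_k": rng.normal(85, 20, 5_000),
})
# The new population the model is about to be applied to.
deploy = pd.DataFrame({
    "age": rng.normal(52, 9, 5_000),
    "income_k": rng.normal(48, 15, 5_000),
})

# Flag features whose deployment mean sits far outside the training
# distribution before trusting the model in the new context.
for col in train.columns:
    shift = abs(deploy[col].mean() - train[col].mean()) / train[col].std()
    if shift > 1.0:
        print(f"{col}: {shift:.1f} sigma shift, retrain or reweight")
```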

So here you are, writing books and advising presidents and traveling the world and all of that, and it turns out you also have a day job, a very demanding one it would seem. Talk a little bit about what you do at Accenture: what you're working on, what excites you, what your role is, and all of that.

My title and role is Chief Technology and Innovation Officer at Accenture, and for those listening who don't know us, we're a $35 billion revenue company with about 440,000 employees around the world, a big organization serving 75% of the Fortune 500. That's the work we do. My job is to look at how we continually change our company and keep it relevant with the new technologies that are coming.

The challenge for us is this: if we're $35 billion in revenue and growing at close to 10% a year, which is roughly the range we've been growing at (a little less), we're adding about 2 or 3 billion dollars of revenue a year, and generally the new work we're doing is on technology that wasn't really at scale 5 or 7 years ago. So we're creating billions of dollars of new business every year based on new technology, and the ability to innovate and stay on top of the emerging technology is really what I focus on. It's been areas like cloud computing, the internet of things, blockchain, virtual reality and artificial intelligence.

AI's probably our biggest single focus right now. I call it the alpha trend, bigger than the other trends, but there are many other things we have our eye on. We're also doing work in areas like quantum computing, biologic computing, biotech, etc., that we believe are becoming mainstream and will impact many industries. Those are the types of things I focus on in my job: how do we rotate our business to these newer technologies? At this point, about half of our revenue is in technologies we define as that new category, like the ones I described, so we're rotating fast to all these new areas.

Well, wonderful! It has been an exciting hour; we have covered a lot of ground, and we could have had two shows. The book again is Human + Machine: Reimagining Work in the Age of AI. I assume people can get it wherever fine books are sold. If people want to keep up with you, Paul, do you blog? What is your social media profile and all of that?

You'll find me very active on LinkedIn and on Twitter. Twitter is @pauldaugh, and on LinkedIn you'll find me under Paul Daugherty. I also have a website where we're keeping up with fresh, new ideas on the book, which you can find if you search for 'Paul Daugherty Human + Machine.' Those are all good ways to track me, and the Accenture website has lots of information on all of these topics as well. And my first action this fall is to dive more into your book as well. So I look forward to...

Thank you, it tackles a lot of the same kinds of things it seems like you're thinking about, so, I'll be curious to hear what you have to say. Thanks a bunch for your time, Paul.

Thank you, Byron.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.