Episode 97: A Conversation with Alexandra Levit

Byron speaks to futurist and author Alexandra Levit about the nature of intelligence and her new book 'Humanity Works'



A former nationally syndicated columnist for the Wall Street Journal and writer for the New York Times, Fast Company, and Forbes, Alexandra has authored several books, including the international bestseller They Don't Teach Corporate in College.


Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Alexandra Levit, she is a futurist, a managing partner at People Results and the author of the new book, Humanity Works. She holds a degree in psychology and communications from Northwestern University. Welcome to the show Alexandra.

Alexandra Levit: Thanks for having me Byron, it's great to be here.

I always like to start with the same question. What exactly is intelligence and so by extension, what is artificial intelligence?

I don't have the scientific definition on hand, but I believe intelligence is understanding how to solve problems. It is possessing a degree of knowledge in an area. So we can have what we consider to be traditional intelligence, which is more in the realm of knowing facts and having a particular area of expertise. We can also have emotional intelligence, which reflects your ability to understand people's motivations and emotions and to interact with them in a way that is going to form productive relationships. So there are different kinds of human intelligence, but at the end of the day, I think it's really about having the ability to figure things out, to respond accordingly, and to get a hold of a body of knowledge or sensations or feelings and use them to your advantage.

Now when we look at artificial intelligence, we are referring to not the human ability to do this, but a machine’s ability to do this, and the reason it's called artificial is because it's not human. It's something that we’ve built. It's something that we've created and the machine then has the ability to at least somewhat act autonomously—meaning that we might program it initially but then it can make decisions, it can complete tasks based on what we have given as general guidelines or parameters.

I hope one of the reasons this discussion is interesting to your listeners or readers is that probably everyone who is listening or reading has their own opinion about what artificial intelligence is. In fact, we could probably spend the whole hour talking about that, because obviously there are different definitions out there, but that's my take on it.

Absolutely. And to be clear, there was no consensus definition of what intelligence is to begin with, even before all this came along. I wholeheartedly agree, and I don't think of the question as an ‘angels dancing on the head of a pin’ type thing, because in the end, as we get closer to talking about your book, I think we're really interested in how humans and machines are different. So when you were talking about human intelligence, you used words like ‘understand.’ Can a computer ever understand anything?

I think so. You look at some of the early experiments that were done around chess, with IBM programming Deep Blue to play chess games, and you see that the algorithm was able to understand the game of chess and was able to determine what moves to make in what sequence. That required a grasp of the game and being able to really manipulate the rules. So it wasn't just that the machine was completing a set task, or a set series of tasks in fact. And that, I think, is what makes artificial intelligence different from just ‘we're going to program a machine to do X, Y and Z.’

Artificial intelligence requires a degree of autonomy, and that's something we're seeing today when we program algorithms that we have not seen in the past. And Byron, of course you know this—we’ve been working with computers for a really long time and they've been helping us out with things for a really long time. But it has required direct human intervention at every single stage: if the machine is going to do one thing, you've got to tell it what to do next. And that's where the whole field of programming arose. It's just, OK, we're going to write this code and the machine is going to do exactly that.

Artificial intelligence is different because the machines almost start to think for themselves and build upon their primary knowledge. My favorite example of this lately—I love to use this example because I think most people know it—is Gmail. Most people have Gmail, and most people have noticed in the last couple of months that the artificial intelligence behind Gmail is really learning from your experiences writing e-mails, archiving things, adding attachments and putting things on your calendar. It starts to anticipate what you're going to do based on what you've done in the past. And so I've got my Gmail finishing sentences for me in e-mail in the correct way. I've got it asking me where I want to file something, and which folder is correct. I've got it saying, you know, you mentioned the word attachment but there isn't one. Do you want me to attach something? And it's not offering to attach just anything; the algorithm has guessed the correct attachment. And every time I use it, it gets smarter.

And to me that's one of the best, most salient examples of artificial intelligence: it's learning from its experience working with me. It's not just doing exactly what I told Gmail to do. And I find it fascinating, and I love sharing it because I feel like virtually everyone has Gmail now and has had experiences over the last couple of months where they're like, “Wow, how did it know that?” And this is AI.

But still, the computer is just a giant clockwork. I mean, you just wind it up and the AI just does its thing: if the e-mail copy contains “attachment,” then scan the text for a file to suggest attaching. I mean, there's nothing particularly… the computer doesn't know anything, any more than a compass knows which way is north. I guess if you think the compass knows how to find north, then maybe the computer knows something. But I mean, it's as dead and lifeless as a wind-up clock, isn't it?

Well, I think you bring up a really good point in it being dead and lifeless. I think that's a different thing than it knowing. I think it can ‘know’ things just based on other things that have happened. This is what I call—and again, this is not official—the ‘assimilation of information.’ It has the ability to determine what happened in the past and to know what might happen in the future given that. And so that is a form of knowing. And it is a form of being able to do something different from what you've been specifically programmed to do.

I think specificity is a really important part of this. And the other thing I would push back on, since we're going there, is that where I talk about the difference between humans and machines in my book, it has a lot to do with the human traits that machines, as of now and probably for a great deal of the future, do not have: things like empathy, judgment, creativity and interpersonal sensitivity. These are the things that make us different. And it's because, until machines develop consciousness, they're not going to have anything like a moral compass or ethics, or really even the ability to determine if something subjective is appropriate or any good. Even if they're intelligent, they're going to be focused on things like the bottom line.

I’ve been using this example a ton lately because, again, it's one that everybody's familiar with: when United Airlines pulled that guy off the plane at O'Hare, because the algorithm that was governing the flight attendants’ schedule told the staff in no uncertain terms that these flight attendants had to get to their destination or else it was going to cost United Airlines a lot of money. And so what we saw happen is that the United staff just sat passively by and said, “Well, the computer tells us we've got to move these people,” and nobody stopped to pay attention to the nuances: how it's going to feel to that passenger if we pull him off this plane against his will, what might happen from a reputational standpoint if it gets caught on YouTube that we forcefully removed someone from a plane, and the fact that people have a sort of repugnance toward this type of behavior.

And this is, I think, really where it's important that we keep in mind the difference. I refer to this (and I didn't make up the term) as ‘human in the loop’: wherever a machine is inserted into a process, there has to be a human at every step of the way checking the process. Just like the government, maybe it's not the best example these days, but our Constitution is supposed to be written so the different branches of government check each other.

And that's where I think we need humans not only to build and program a machine, but to figure out how it's going to be deployed; to oversee and manage it to make sure it's serving its intended function; to fix it when it's broken; and then to figure out what it's going to do next. Even in the simplest applications of automation, you still need that human in the loop to be overseeing things. And it's because humans have that moral compass, that judgment, that you don't see machines have, because they're programmed to focus on the bottom line. And even if they're smart, that's what they're going to do. They're not going to ask, ‘Well, what's going to happen? How are people going to react? Is this going to motivate people?’ They don't know the answers to that.

That, I think, is really important. We have to be careful not to automate large swaths of our employee population, because without the ‘humans in the loop,’ bad things are going to happen, and we're gonna see a lot more incidents like that United Airlines example. For people who aren't sure how that ended up: it ended up very badly. United Airlines took a huge hit to its reputation and ended up having to pay that dude a lot more money than it would have cost if those flight attendants had just taken another flight. So, lesson learned.

I hope other companies don't go through the same thing but I suspect that's going to be a painful learning process for companies realizing machines aren't perfect, and they're never going to fully replace human beings. You've got to have your humans checking the power of those machines.

But what you just said was in the end, it cost United a lot of money, right?


And so shouldn't they have done it for that reason?

Well, I mean...

So you're saying you're better at figuring out the bottom line than the computer, because you're saying, “In the end it comes down to the bottom line; they shouldn't have done it because they had to pay a lot of money out otherwise.”

Yes and no. I like to use that example because I talk to leaders and I talk to companies, and I feel like that's something that speaks to them. They want to know why they should or shouldn't do something based on how much money it's going to cost. Now, an individual, in fact myself and probably you, Byron, has a different point of view on that: you shouldn't do it because it's morally wrong, and it doesn't matter about the bottom line. And also, from a reputational standpoint, it's going to be hugely damaging. That does have dollar amounts associated with it, but mostly, you want to look like a good company that does right by its customers. So there's a variety of reasons they shouldn't have done it, but which one you value depends on your position.

If you were that flight attendant who got that seat, you might have had the best customer service in the world. Let me ask you this: I want to go back to this ‘knowing’ thing, because I think it really matters. Because if the computer can know something, that implies there is a knower, something that ‘knows.’ And I think back to Weizenbaum and the whole story about ELIZA back in the 1960s.

So for the listener: there was this guy named Weizenbaum in the ‘60s who made a chatbot, really, called ELIZA. And you could tell it your problems, and in a really rudimentary way it would kind of draw you out: “Why do you feel bad? What did your mother do to make you…” You know, it was very simple, but Weizenbaum saw people pouring their hearts out to it. And this really bothered him, and he ended up unplugging ELIZA.

Now, understand, those people knew it was a program, and yet they had this kind of emotional attachment to it. And Weizenbaum said, ‘When the computer says I understand, it's just a lie; there is no I there, and there is nothing that understands anything.’ I guess that's what I'm trying to get back to, because I think only people can understand things. And I'm kind of curious how, without just playing semantic gymnastics, you can say a computer can understand something?

Well, personally, I think this is a fantastic discussion. I'm enjoying it a great deal, actually. I think it depends on what type of knowledge you expect it to understand. The example that you use has to do with complex human emotions. So the fact is, yes, the way that we have programmed machines today, with the abilities that they have, they're not going to understand an emotional state that's that specific. Actually, I might go back on that and say that we have seen lately the rise of what's known as affective computing—where machines can in fact recognize and respond to human emotions that they might hear, and be able to pinpoint where that person is coming from and what they might do next.

So this is in its very infant state. I wouldn't say this is being used in a great number of companies, but it is starting to happen. Overall, though, I would agree with you, Byron. If someone is pouring their heart out as if the machine were a therapist, the machine's not going to be able to really comprehend anything like that. I think, again, it goes back to the issue of consciousness: how would they be able to be autonomous enough, and have enough experience with the human condition and how emotions work, to be able to effectively engage in that?

And I think it's the same today as it was with ELIZA. I don't think we've made enough strides here where that would even be ethical. One thing I am hearing about, though, is kind of similar… I hate the idea of a machine serving as a therapist or a confidant, because I have my own sort of moral problem with that, being a human.

But I am hearing about things like salary robots, where, if you want a raise at a company, instead of going in and talking to your boss, you might, for example, go in and present your business case to a machine or an algorithm, talking into a microphone or even facing a robot, and discuss why you think you should get that money. And not only will the robot examine the business case, but it will also pay attention to your tone of voice: how confident you sound, how frustrated you sound, how likely you are to walk if you don't get that money.

You can think about all sorts of interesting business applications for this kind of thing. Productivity wearables are another one: you've got a device that pays attention to how much you're talking with other people, how much you're smiling, how much you're not working at your computer, whether you're disengaged, and, when you're disengaged, what you're doing. And we're starting to see smart machines make recommendations for behavior based on what they're observing.

But still, I think what you're talking about is a very complex series of human traits that are needed to really effectively manage that situation. And I've got another example of what you're talking about: what Japan tried to do with its nursing shortage. What happened was that Japan realized, "Oh God, we're training nurses but we don't have enough to meet the demand of our aging population," which is, by the way, a problem most developed countries are having now. And so their answer was not to train more human nurses but to build a robot nurse, which they called RoBear: a six-foot-tall white bear. The idea behind RoBear was that it would replace human nurses. Well, they realized the hard way that it isn't as easy as they thought, because human nursing is one of those professions that requires a lot of complex emotional engagement.

So, as an example, nurses might have to walk into a crisis scenario and immediately determine which specialists to bring in, based on who they've worked with and what they've seen in the past, to solve a difficult medical emergency. They have to be able to sit down with a patient's relative who just received a difficult diagnosis, talk them through it with some empathy, and make them feel better. They have to be able to look into a patient's eyes and ascertain the level of pain that person is in, and to what degree their medication needs to be raised. These are really complicated human tasks. And so, unfortunately for Japan and the hospitals there, they realized, "Oh, all RoBear can really do is move people in and out of beds and serve a meal. We need people to do all these other steps."

I think there are a lot of jobs like that, where we don't really realize that you can't just have a machine therapist. What you really need is someone who's going to understand and empathize for real. It's true, what you said, that people do develop emotional attachments to machines. It happens all the time, but I think there's inevitably some feeling of letdown or disappointment when you really think about it; you're like, "Yeah, that's not gonna really fly." And there have been some interesting movies in the last couple of years about this (Her is one of them) where people fall in love with a machine. And even in those cases, where it's science fiction and the machine does have quite a lot of advanced emotional capabilities, at the end of the day you realize it's a machine.

I think most people are not going to get the type of deep relationship that they want, at least not in the near future. Again, all bets are off once we have the technological singularity, when machines will reach a level of intelligence beyond which we have no idea what happens. We can't imagine what our society will be like at that point. But in the near future, I just don't see it happening where machines can really know anything about human emotion besides, as you said, the basic identification and application of those emotions, which is affective computing.

So you said at the beginning that artificial intelligence is artificial because we built it. And the other school of thought would be: it's artificial because it's fake, you know it's like artificial turf isn't really grass. It's something that just looks like it. You know artificial sweetener isn't sugar, it's just something that happens to taste like it.

Let's say we didn't achieve consciousness; we just got really good at computers reading every muscle, every twitch, reading all your body language and learning from that, and a computer could emulate empathy perfectly. We all know it's just, like I said earlier, a giant clockwork; it doesn't really care about you. That's artificial empathy, artificial in the sense that it is not real.

So would that be okay with you if we made machines that could… the example is always elder care… that could listen to somebody's stories and chuckle and laugh and say, “That was a good one,” and say they loved them and all the rest. Is that okay?

I think it's okay, because in many cases it will achieve the purpose. You gave the example of elder care: people are lonely, so it's going to be better to have that machine or that robot than nothing. And we are going to have a shortage of human elder care workers throughout the developed world. It's just going to happen, because the boomers are a huge generation. I think we'll really see the application of what you're talking about as the millennials age; they are an enormous generation, and the generation just underneath them, Gen Z, is much smaller. So we're going to need those robots to take on some of those tasks.

Now, whether it means that you don't need anything else, like family or social support… what's important to me in this distinction is that there are just some unexpected things that happen with human interaction. I mean, we always talk about humans being kind of unpredictable: you just never really know how someone's going to react in a situation. And there's some novelty in that. I think even as we get frustrated by our loved ones, our friends and family sometimes, we're used to that. We've been programmed to handle it over hundreds of thousands of years of evolution.

But I think that's where we're going to see machines not being as good with the spontaneous side of things: you'll be able to completely predict how they're going to react and what they're going to do. And I could see that getting stale, to be honest with you. Also, there will probably be no conflict in those types of situations, because why would there be? That machine is going to be able to pretty much accurately assess what you're going to do and what you need in most situations, and it will be programmed to be responsive to that.

I think there’s a great example of this. You're probably familiar with the show Black Mirror; I think it was in the second season. There was a machine that was developed where, if your loved one died, you could have them regenerated as a robot, and that robot knew everything they had ever posted online and formulated, really pinpointed, that personality based on what it could find out. It follows the experience of one woman who lost the love of her life and tried to bring him back in robot form, and the robot looked exactly like him, talked exactly like him and had most stuff down, just like you're talking about. So 90% of what it said was exactly what this dude would have said if he were alive.

But there was something missing, and some of it was intangible and some of it was tangible, in the sense that he'd had kind of a quirky sense of humor where he'd say something unexpected, and the robot didn't quite have that down. In the end she just got frustrated, because the robot was so perfect. He behaved perfectly. He didn't engage in things that were irrational, and that's just innately not the way relationships are. So I think there will be a lot that can be served by these robots. But again, until they become exactly like humans, I think there's gonna be something missing, and most people will recognize that.

You know, you could be right. I have two problems with it, though. One is that once you use them for interaction (from an elder care robot you can just as easily get a kindergarten teacher robot), I worry about the corrosive effect of raising people, children even, on artificial, faked emotions. There's nothing true about any of that. It's all just a big lie.

And I worry, secondly, that when that robot breaks, you just throw it away. We have clawed our way to something called human rights, and we say, “There are certain things you don't do to a human.” And if you start making things that sure seem human, that sure look human, that sure act human, but you just throw them away, push them out of the car, whatever you have to do, because they don't have any moral value, then I wonder if that has a corrosive effect on human rights?

I notice that with the device on my desk (I can't say its name right now because it will wake up): I cut that thing off when it starts droning on about the capital of Bolivia or something more than I want to know. I just tell it to stop, and I wonder if enough of that makes me do that to people, because it sounds like a person, you know?

Yeah, I think you're right. I know that I keep bringing up science fiction, but there's a show called Humans where the servants are robots that have started to develop a lot of really complex emotions, and this exact question comes up. The company that makes them keeps replacing the models, just like they do with cars, with new models that are even better.

And what happens when you've got something that's pretty intelligent and has relationships with human beings, and you say, "Oh, we're actually going to replace that one, we're going to retire that one for something else"? It does get really dicey, and I think this is a legitimate problem, Byron, that we are going to have to face. And yes, I think you're absolutely right that it will have a corrosive effect on human rights, because where do we draw the line, and how do we even tell sometimes? We're all going to be augmented by biomechanical entities; we're going to have artificial hands and limbs and organs. At some point, what makes us human, even from just a physiological standpoint?

I think it's just going to get really weird when machines start to become close to us, and I think it's interesting in terms of things like childcare. I don't really know; I'm not afraid to say when I don't know what's going to happen. I don't know how children would react to eventually finding out, "Oh, you know, my caregiver isn't a human being." It's kind of like how humans develop relationships with animals; it's almost like robots are a different species, and you can still love them. I don't know if I fully buy into this ‘fake’ argument.

Well I think about

It’s an interesting one; it's just different. I don't know if it's ‘fake’ per se; it just depends on your point of view. I think that’s why I love it. This is a great conversation. I hope your listeners are enjoying it as much as I am, because it's really fun.

I think about this case in Japan, though, where they made a robot and they were just seeing if it could navigate through a mall with a lot of people. And what they found is that kids abused it. They would get in its way, and when it tried to move, they would jump in its way again, and eventually they would hit it with their water bottles. They had to change the programming so that if the robot saw a bunch of little people with no big people, in other words a bunch of kids with no adults, it ran off towards a big person, because it knew it was going to get abused.

Later, when they interviewed those kids and asked, "Did you think what you were doing was upsetting the robot?" 70% of them said, “Yes, I think it was causing distress, because it sure looked like it.” And I guess that's what makes me wonder if we're sending a mixed message that it's OK to torment something… because if your car is stuck in the mud, you floor it to try to get out, even if that's not good for the engine, and you don't feel morally compromised by this, right? I mean, you'll tear up a set of tires or whatever you have to do to get out of the mud. You don't think anything of it…

It's so interesting.

The minute you start saying, on the one hand, you can tear this thing up and do whatever you want to it, and on the other hand, it sure looks and acts like a human… I don't know. Now, you've mentioned the singularity a few times. Are you a ‘singularian’? Do you believe that we will reach this moment?

Yup, I sure do.

So it seems to me the core assumption of the singularity is that people are machines: your brain is a machine, your mind is a machine, consciousness is a mechanistic process. And because of all of that, we'll be able to replicate it all in machines. So do you explicitly say people are machines?

I think you do have a point about that. I do think that our brain is a machine. Now, we can get into a spiritual discussion as to whether there is more to a human being than…

But that's really the central question, right?

Yeah, but I don't think so, because when I'm referring to the technological singularity, at least from my perspective, it's that technology will become so advanced that it will transform society in a way that we can’t anticipate. I'm not suggesting that human beings won't still have souls, that they won't still live on in some way after death, and that this will make them different still. I think it has to do with development, and a pace of innovation and change that will happen at a more exponential rate. For me, these questions are very separate.

Well I think that

I will share with the audience that I believe in a soul, and I believe that there is more to our existence than just a brain that's a machine that dies and you're done. Some people believe it's just that, and that's cool. But I'm not one of them, actually.

So, in a Kurzweilian sense, though, the thesis is that we reach this point where the computer becomes equal in computational power to a brain, and then, you know, ten years later it's a thousand times that, and ten years after that, a million times. And at that point, you can upload your consciousness, you can be in the machine. The machine will effectively have consciousness.

So you've mentioned human consciousness a few times. I wrote a whole book about whether computers can become conscious, so I'm deeply interested in the topic. Do you believe that computers can achieve consciousness? Can they experience the world? To define our terms very carefully: a computer right now can measure a temperature, but it can't feel warmth. And whatever that difference is, that's consciousness—a subjective experience of the universe. Do you believe computers will be able to achieve that?

I think the odds are in its favor that that will happen. We can't really predict exactly what that will mean, or whether they'll go beyond consciousness to something else. It could be that this proves the existence of a greater universe outside of our immediate human field of perception, or disproves it. And again, this is getting a little bit more into the spiritual side of things, but I like to look at our current human existence as being in a dark room with a single flame that's lit. That's what we can see, and that's what we can perceive. If we were to turn on the lights in the room, we would see everything else; we would see the entirety of our existence: what it means, how we fit in, and that there's a lot more going on than just what we can see, or even what we could predict through science.

You mentioned Kurzweil; science there is predicting that, yes, exponentially, we are going to see tech advances get to a level where machines are far more intelligent than humans in the traditional intelligence sense, getting back to our original question. But are they ever going to become totally human? That answer we can't know, because we don't know exactly what it means to be a being in this universe, and whether humans are just computers or whether there's something more. But it could be that machines eventually prove or disprove this. It wouldn't surprise me.

So, as I said in your introduction, you recently had a book come out (though you probably wrote it quite a while ago) called Humanity Works. Tell me why you wrote it, what you're trying to say, and what you learned doing it.

The reason, Byron, that I wrote Humanity Works is that I got started making what we call in the futurist space ‘foresight observations’ about where the workforce was going. I got started around 2003 in the HR space, advising young professionals on what they needed to do to be successful in business. Based on my age, I'm on the tail end of a generation called Generation X, which is very smart and small, and the generation after that is called the Millennial Generation, or Generation Y, and they are enormous. So the people who were just a little bit younger than me were coming into the workforce, and they were behaving a lot differently. Companies were ignoring this because they'd always assumed that 20-somethings were just going to be a particular breed.

And I started getting asked how Millennials were going to impact the workforce as they got older. I don't want to brag, but I had my finger on the pulse of this very early, and so I started getting asked about other things: how I thought the workforce was going to develop in terms of contract workers and the fact that more people are not working full time for organizations. People were demanding customized careers. Instead of going from point A to point B because your boss did, you go from point A to point B, or really do anything you want, and seek out learning agility and the development of cross-functional expertise.

I started getting asked how Millennials would become leaders and what type of leaders they would be, and speculating that they would be more transactional as opposed to hierarchical. Eventually I realized that companies really need to start thinking about this stuff, because what happened with the Millennials is that nobody really thought about it or cared until it was an issue that was on fire, because their whole workforce was rebelling, quitting, and refusing to do what 20-somethings were supposed to do. I see a lot of these ‘future of work’ trends starting to percolate up through the market and become important, and we're doing the same thing we did with the Millennials, which is ignore them.

So my goal in writing Humanity Works is not only to explore the trends that are happening but to ask what you need to be doing tomorrow to prepare your workforce for a world where you will have, for example, smart machines: machines that need to be inserted into processes. You will have human beings working in a variety of structures, not just the typical 9-to-5 we've always seen. You're going to have people working remotely and virtually, supplemented by technologies like artificial intelligence, virtual and augmented reality, and telepresence.

And as we've talked about over the last hour, these are complex issues when machines don't just do basic tasks but get involved in larger processes. So the purpose of Humanity Works is really to educate leaders and say, "Look, you don't have to overhaul your entire operation here. But here are a couple of things you need to think about and start doing in order to recognize that these trends are going to transform the work world in a way we have not seen. I think that if you get in on the early side of some of these trends, you will be prepared. But if you decide to be reactive and ignore it until it's on fire, well, I guess that's your prerogative as a leader."

But the goal with Humanity Works is really to say “Here are the trends, here's what you do.” And also the title comes from the idea that what we've been talking about all along, is that you will never fully replace your human workforce. Here are the traits that your human workforce has. Here's why they're important. And here are the areas where you need to make sure you've got your humans into the loop.

So from what I'm hearing, some of the book is about a demographic change, a societal change that's happening as a new generation comes up, and some of it is about an external technological change. Is it equal parts about both of those?

I think that's a good observation, Byron. With the structure of the book, I start off with what I call the ‘space shuttle view.’ You look down at the Earth and say, "All right, how are we changing with respect to who's available to work and why?" That's the demographic shift happening in both developed and developing countries, where there are simply going to be more people from countries like India and China who are qualified to work in different professions, and therefore places like the US and UK are going to have to import qualified talent from those countries—meaning you're going to have people who work remotely, and you'll have global teams. That's just the way it is.

And you're going to have things like the baby boomers, a very large generation entering retirement, who are not retiring in traditional ways. They still want to work, but our setup here is not built to sustain that kind of model: once you retire, you're out, and you don't do anything else except maybe go down to Florida and play mahjong and golf. So we look at the demographic shifts, and then we move into the changing structures of work in general, so that's things like virtual work, remote work, flex time, and contract workers. Then we look at leaders: what leaders are going to need to develop in their workforces and in themselves, and ways that companies will need to change in order to take advantage of some of these trends and focus on the things that are important.

And then finally we talk about the individual. What do individuals need to do in order to be prepared to be gainfully employed for the next…? For a lot of people working today, it's going to be multiple decades, and a lot of things are going to change. One thing I feel really strongly about—and I know we're coming near the end, so I'm thrilled to be able to end on this note, because it's probably the most important thing I could say related to Humanity Works—is this: automation is not going to take people's jobs. It will take parts of most people's jobs. The latest research from McKinsey, I think, says that 60% of all occupations will be affected by automation, but only about 30% of the tasks within those occupations can be automated—meaning there's a whole lot left for humans to do. So that's not my concern.

There's a lot of handwringing: "Oh, humans are going to be displaced, there will be mass unemployment." My concern, Byron, is this: I think the workforce is changing very significantly, and take this for what it's worth, because I know your audience is mostly professionals, and so is mine. So I'm not talking about people with basic manufacturing jobs. I'm talking about knowledge workers for the most part.

A lot of knowledge workers are not going to be used to this model of having to work for yourself, ‘eat what you kill,’ and make your own destiny. And that's where I have the biggest concern. Not that you won't have work, but that the work you have might not be fulfilling, or it might be too stressful, because you have to continuously sell yourself, manage your own time, and develop very short-term relationships that are still fruitful, even though you don't have two years of sitting next to someone to get to know them and become friends. We've been in the corporate structure we're currently in for 150 years. This is very, very different, and I think a lot of people are going to have a tough, tough time with it.

I agree with almost all of that, I really do. I don't think we're going to have any unemployment from this. Artificial intelligence and automation are kind of like a collective memory for the planet. We are all able to learn from the data everybody else generates, and if that's a bad thing, you kind of have to argue that ignorance is good: ‘It would be better if we didn't have more intelligence.’

And the number one question I get when I speak is: what should I study to be relevant in the future? I say it doesn't really matter. If I went back to high school, there's only one class I could have taken then that I would still be using today, and that's typing. I would have never guessed that back then. And I agree even further that there are three big skills—and one is the ability to teach yourself new things.

And one is learning agility.

Correct. Another is evaluating the quality of your own work, and another is forming and unforming ad hoc teams, and you just touched on those. So I'm completely in agreement with you. But I want to understand the last thing you said more, because I don't usually hear this: that people are going to have a really tough time. I am super bullish on humanity and people's ability to adapt and learn.

What I do today I didn't learn in school. I just kind of learned it as I went along. So why is it going to be so difficult, in your mind? Why won't we just roll with the punches? That's what humans are really good at, and you know we'll handle this like we handle every other change.

Yeah, I think we will. I think you're right. But I don't think it's going to be without its bumps, and I think certain people are going to be better suited to it than others. For example, now we see a lot of people who jumped out of the corporate world to do their own thing, like you and me, and have done really well at it. But there's an equal number of people who are like: ‘I might not love working in traditional business, but it's more comfortable for me because I like being told what I need to do. I like to have my mission within the context of a larger mission. I like to be taken care of with respect to things like benefits and extra-curriculars and volunteer work.’ It's a set structure that's very predictable, and I think many humans would prefer that. It just depends on your personality.

But I think what we're gonna see now is a whole lot of people thrust into a much more uncertain, evolving world where they've got to take a lot more initiative, and people are just not used to it. It's not that they won't eventually get it; I hope that everyone eventually does. But I think we're in for a period of tumult, while people who have been doing this, like you and me, are going to do great because we're used to it.

Other people are going to prefer full-time employment that just doesn't exist, because companies are trying to get smart, too: why should I pay all this overhead for a full-time person when I don't need one, and frankly the full-time person might not be as qualified or targeted to solve a particular business problem as other people? So you're not going to have the opportunity to get those types of jobs. I think it's going to be tough for some people. Some people will do great with it.

Last question: it sounds like, overall though, you're optimistic about the future. Is that the case?

That is absolutely the case, Byron, and I feel like you are, too. I feel like we both have an understanding of the value that humans bring to the table. People were freaked out about the Industrial Revolution; people were freaked out about cars. Every time there's a major development, people think, "Oh, human society is coming to an end," but you're absolutely right that it's only improved and augmented our society. It hasn't taken away the role that we can play, and I actually see this as a huge opportunity for humans to do more meaningful work and to take out some of the drudgery from the things that we do. The key is just gonna be creativity, which we naturally have as humans, but like any other skill it's going to require some honing.

We all know, especially those of us who work in tech, that not all humans are created equal with respect to interpersonal skills. Those are going to need to be honed; just because you're human doesn't mean you're great at that. So we have to focus on the things that matter and really place ourselves in positions to follow the market—exactly what you said about learning agility—to teach ourselves new skills as they become apparent. I mean, I never thought in a million years—I was a psych major in college—that I would be taking data analytics courses. I am, today, because it's relevant to what I discuss, and who knew? Just like you said with the typing.

I think we just need to be prepared, and leaders have to be prepared to help their workforces develop these skills, even if it's not something they need immediately for the jobs being done today. You have to have foresight and recognize what's coming, and it will be in your best interest to ensure that people hone the skills they need.

Well that's a wonderful place to leave it. I want to thank you so much for your time. The book is called Humanity Works and I assume it's available where fine books are sold, as they say.

Absolutely. And thank you, Byron. It's been a really fun conversation and I hope everyone enjoyed it.