Episode 94: A Conversation with Amy Webb

Byron speaks with fellow futurist and author Amy Webb on the nature of artificial intelligence and the morality and ethics tied to its study.

:: ::


Amy Webb is a quantitative futurist. She is a professor of strategic foresight at the NYU Stern School of Business and the Founder of the Future Today Institute, a leading foresight and strategy firm that helps leaders and their organizations prepare for complex futures. Amy was named to the Thinkers50 Radar list of the 30 management thinkers most likely to shape the future of how organizations are managed and led, and she won the prestigious 2017 Thinkers50 RADAR Award.

She is the author of three books, including The Big Nine: How The Tech Titans and Their Thinking Machines Could Warp Humanity (PublicAffairs/Hachette, March 5, 2019), which is a call to arms about the broken nature of artificial intelligence and the powerful corporations that are turning the human-machine relationship on its head. Her previous book, The Signals Are Talking: Why Today’s Fringe Is Tomorrow’s Mainstream (PublicAffairs/Hachette, December 2016), explains Amy’s forecasting methodology and how any organization can identify risk and opportunity before disruption hits.


Byron Reese: This is Voices in AI, brought to you by Gigaom, and I’m Byron Reese. Today, I’m so excited. My guest is Amy Webb. She is a quantitative futurist. She is the founder and CEO of the Future Today Institute. She’s a professor of strategic foresight at NYU’s Stern School of Business.

She’s a co-founder of Spark Camp. She holds a BS in game theory and economics, and an MS in journalism from Columbia, and she’s a Nieman Fellow at Harvard University. And, as if all of that weren’t enough, she’s the author of a new book called The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity. Welcome to the show, Amy.

Amy Webb: Hello. Thank you for having me.

So, I always like to start off with definitions. Maybe that can be tedious, but start me off: how do you define intelligence?

Well, I think intelligence in different contexts means different things. As it relates to artificial intelligence, I think, generally, what we’re trying to describe is the ability of a system to make decisions and choices that mirror or come close to the way that we make decisions and choices. So, when the term artificial intelligence was originally coined at Dartmouth in the 1950s by Marvin Minsky and John McCarthy, the thinking back then was to try to build a machine that came close to human cognition and whose intelligence could be measured against our own. More than six decades later, I think it’s clear that, while we’re able to build incredibly powerful machines, there’s a lot we don’t understand about our own intelligence and our own cognition. And I think, if Minsky and McCarthy were around today and still working very intently, they would say that artificial intelligence was probably the wrong term to use.

Actually, McCarthy did say that pretty quickly after he coined the term. He said he thought it set the bar too high. But I guess the question I really want to ask is, in what sense is it artificial? Do you think AI is actually intelligent, or do you think it’s something that mimics intelligence the way that, say, artificial turf mimics grass?

Well, it’s certainly been built to mimic what we think we know about human intelligence, and that extends to the hardware architecture. The current deep neural net approach is a layered approach to machine learning, built to physically mimic what we think the synapses in our own brains are doing as they transfer and parse and make sense of information. That being said, I also think that intelligence is probably the wrong term, because it’s fairly loaded, and when we talk about intelligence, I think we’re trying to quantify in some way our own cognitive power. I don’t know if that’s the right term for AI, because I think AI inherently is different. Just because we’ve built it to mimic what we think we do doesn’t mean that from here on out it continues to behave in that manner.
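The layered architecture Webb describes can be sketched in a few lines of Python. This is a minimal, illustrative forward pass, with made-up layer sizes, random weights, and a tanh nonlinearity standing in loosely for a neuron's firing threshold; it is not any particular framework or production system:

```python
import math
import random

def dense_layer(inputs, weights, biases):
    # Each output unit sums its weighted inputs, then applies a nonlinearity,
    # loosely analogous to a neuron firing once its inputs cross a threshold.
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def init_layer(n_in, n_out, rng):
    # Random starting weights; training would adjust these from data.
    weights = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)]
    biases = [0.0] * n_out
    return weights, biases

rng = random.Random(0)
w1, b1 = init_layer(3, 4, rng)   # input layer -> hidden layer
w2, b2 = init_layer(4, 2, rng)   # hidden layer -> output layer

x = [0.5, -0.2, 0.1]             # an arbitrary 3-feature input
hidden = dense_layer(x, w1, b1)  # information flows layer by layer
output = dense_layer(hidden, w2, b2)
```

The point of the sketch is only the layering: information passes through successive transformations, each one a crude analogue of synapses relaying signals.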

Yeah, it’s funny. I’ve always thought all of the analogies between how AIs function and how the brain functions are actually just kind of techno-marketing. The brain is still largely a black box, right? We don’t know even how a thought is encoded or anything like that, and so I’ve often suspected that machine intelligence really doesn’t have anything in common with human intelligence. Would you agree, disagree, defer?

Yeah, I mean, we’re talking in broad strokes and generalities. I think there are certainly some areas of the AI ecosystem that do a fairly good job of mimicking what we do. Reinforcement learning and hierarchical reinforcement learning, for example, are present in dogs and children and adults. So basically, you’ve got a toddler, say, and you’re trying to teach the toddler correct and incorrect, right? The correct word for your mom is ‘mommy.’ The correct term for the color of the sky is ‘blue,’ and so on. And through praise or correction, the child learns over time. We are reinforcing the correct answer, and we are, hopefully in a very gentle, loving way, guiding the child away from the wrong answer. The same is true of how some AI systems learn. In fact, for anybody who’s been following AlphaGo and all of the work that the DeepMind team has been doing, a lot of it was reinforcement learning and then self-improvement.
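The praise-and-correction loop Webb describes is, at its core, reward-driven learning. Here is a minimal sketch in Python of an agent learning the ‘correct’ label through repeated trial and feedback; the labels, rewards, learning rate, and exploration rate are all illustrative choices, not DeepMind’s actual method:

```python
import random

def train_label_learner(candidates, correct, episodes=500,
                        lr=0.1, eps=0.1, seed=0):
    """Learn which label earns reward through trial, feedback, and updates."""
    rng = random.Random(seed)
    values = {c: 0.0 for c in candidates}  # estimated reward for each label
    for _ in range(episodes):
        # Mostly exploit the best-valued label; occasionally explore others.
        if rng.random() < eps:
            choice = rng.choice(candidates)
        else:
            choice = max(values, key=values.get)
        # Praise (+1) for the right answer, gentle correction (-1) otherwise.
        reward = 1.0 if choice == correct else -1.0
        # Nudge the estimate toward the observed reward.
        values[choice] += lr * (reward - values[choice])
    return values

# Teach the agent that the sky's color is "blue".
values = train_label_learner(["green", "red", "blue"], correct="blue")
best = max(values, key=values.get)
```

After a few hundred episodes the agent’s value estimate for “blue” climbs toward the +1 reward while the wrong answers drift negative, which is the same reinforce-the-correct, discourage-the-incorrect dynamic as the toddler example.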

So, I’ll get on to AI here in a minute, but I always think it’s really fascinating what we’re trying to do with these machines, so let me ask you one more. We experience the world. We have consciousness. We can feel warmth, whereas right now a computer can only measure temperature, and those are very different things. We have a first-person experience of the world. Do you think, (A), machines will be able to achieve consciousness, and (B), do you think that is necessary for a machine to be truly intelligent, that it has an experience of the world?

Well, a machine doesn’t have to achieve consciousness to have agency, and we’ve already started to see some examples of that in systems learning through reinforcement learning. We’ve seen systems throw out our human strategy, create their own strategy, and move forward in that way. I would argue that a machine doesn’t have to develop consciousness as we experience it in order to have a profound impact on the future of humanity. Coming back to that question about consciousness over and over again is fascinating and interesting to explore, but as we start to hit some of these critical inflection points, I don’t think it’s a practical question to be asking.

Now, that being said, I’m a huge fan of Westworld, which I think did a terrific job, even in this past season, of exploring themes related to consciousness, agency, free will, and AI. And even if we at some point build machines capable of making fully autonomous decisions that might be indistinguishable from our own, our human DNA will still be present in all of the different devices and machines, whether or not they are anthropomorphized. We are still a part of them, and that will continue onward.

So I’m curious, because if machines did develop consciousness, even a very rudimentary one, it changes everything, right? They immediately acquire rights the minute they can suffer. Why are you so confident that that is so far off?

Well, for one thing, we haven’t seen evidence of that yet. For another thing, I think, again, that we can’t conflate our human consciousness with an AI consciousness, if that ever happens. I think they’re fundamentally different. You could argue that robots and computer systems are already suffering quite a bit. I mean, think of anybody in your life who’s not super tech-savvy and never runs updates on their systems, causing their CPUs to heat up. Or there are plenty of really fantastic videos on the web of the Boston Dynamics robots literally hurting themselves, right? Causing the plastic and the materials to chip away as they work ruthlessly to achieve their stated goal, like opening up a door.

The question, I don’t think, is really about consciousness. It’s about harm. So how do we define harm, at what point do we decide what the parameters of harm are, and how do we quantify it? Because you could argue that right now robots and machines are suffering.

But is that really true? I mean, that’s an incredibly provocative statement. If they’re suffering, they’re experiencing something, because you say we’re “literally hurting them.” I can see where you might say we’re literally damaging them, but for them to hurt, they have to have a self that experiences that pain. So do you think that’s happening?

Again, I would say that it’s not a productive path right now to conflate our human consciousness with machine consciousness, because even if something like that does exist at some point, which I think is fairly far off in the distance if it ever actually happens, they’re going to be fundamentally different things. That means we’re imposing our interpretation of suffering and harm, which, by the way, is going to vary drastically from person to person, and I’ll give a very clear example. I broke both of my ankles falling off of a stage a couple of months ago. And if you’re a fan of the Foo Fighters or of Nirvana, you might remember Dave Grohl also falling off of a stage and breaking his leg; it was in Sweden a couple of years ago. Dave Grohl got back up and finished the show while a medic wrapped and splinted his leg. So I thought, ‘Dave Grohl can get up and keep rocking; I can stand up and continue to do my thing,’ which, as it turns out, maybe wasn’t the best move. I wound up having to have ankle surgery. Broken bones, really bad stuff.

I was in a lot of pain. But the amount of pain that I experienced, the way it interrupted my life, and the ways in which I chose to deal with it are, I’m sure, quite different from other people’s. In my case, I took no medication. I have a fairly high pain tolerance threshold, and I chose to power through. Somebody else might’ve looked at me and my circumstances and said that I was suffering gravely. I would say that I was grossly inconvenienced, but that my cognition was fine. My day-to-day was okay; I was kind of in a grumpy mood. Do you know what I mean? The problem when we think about consciousness and machines, and suffering and machines, is that we have to somehow get to an agreed-upon definition of what suffering is, and I quite honestly do not think we could possibly come to a description granular enough to satisfy everybody and make that definition meaningful.

Fair enough. I’ll just ask you one more question along these lines, and then we can move past it. We have laws against animal cruelty, not because we feel we understand how animals suffer, but because we believe that they can suffer. We don’t believe bacteria suffer, so we don’t outlaw antibiotics. But we believe dogs suffer, and therefore we outlaw cruelty to them. If there were even the possibility that machines could experience pain, though, aren’t we at some level morally culpable if we just say, “Well, I can’t really quantify it; therefore, I’m not going to put it in the moral equation. I’m just going to keep plowing through regardless. Even though they feel pain differently than I do, I’m just going to discount that to zero”?

Yeah, again, I think what you’re really getting at here is regulation. And the problem with defining some of these things is that, even culture to culture, religion to religion, everybody has a different understanding of what animal cruelty is, and that understanding isn’t static; our ideas shift and change as time wears on. So the problem with imposing a rigid structure on a system that is rapidly evolving is that, (A), we’re not going to be able to find a common definition, and (B), we wouldn’t be able to enforce anything. We know, for example, that there are going to be enormous fleets of robots in retail operations doing last-mile logistics, all kinds of stuff, all over the world. Those machines are basically going to work without a break, and the only time they’re going to be taken offline is to recharge or if they go out of service for some reason. So let’s say, which, again, I don’t think will happen anytime in the near term, that a system like this at some point develops what somebody defines as human-level cognition, such that it senses what we would have taught it to sense: pain, pleasure, and so on. Then you would have to quantify what that suffering looks like in order to figure out at what point the suffering is too much. Because you could walk up to anybody on the street, and I’m sure that, day to day or throughout the month, they are being forced to do things or put in situations where they feel that, to some degree, they are suffering.

These are complicated questions, and again, they’re fun questions. This is a weird, fun conversation; I haven’t had one like it yet, so I love doing this, and it’s good for us to continue talking about. But there are also some real, serious, pressing issues that should be top of mind and that are going to take some time to sort through.

Fair enough. So let’s launch into that. For the listeners who aren’t familiar with The Big Nine, can you talk about why you wrote the book, what questions you’re asking, and how you’re answering them?

So I’m a quantitative futurist. I run a small but mighty independent research and advisory firm, and we advise Fortune 100 companies, parts of the Federal Government, parts of the Pentagon, and lots of organizations all around the world. My role is to gather data, model that data, look for patterns and emerging trends, and try to calculate in some way the trajectory of change, so that we can eventually model plausible scenarios, not to tell fantastical stories about the future, but rather to try to reduce uncertainty. Because the challenge for many leaders, and for everyday people, really, is that we’re living through a period of tremendous uncertainty. People are very anxious, and so they’re starting to make bad decisions. If there’s a way to narrow that gap, that’s what a futurist does. We don’t make predictions. We make connections.

And in my role doing this over the past, I don’t know, 15 years, we have seen a transition. Suddenly, there is a lot of capital flooding into AI. We’ve started to see commercial viability and the research and products that have resulted, and we’ve started to see some serious geopolitical shifts. And in the middle of all of this, I kept coming back to the same nine companies over and over and over again. While the AI ecosystem is enormous and sprawling, at some point, most startups, most money, most people, all roads lead back to these nine. Three are in China. Six are in the United States.

The three that are in China are not household names in most parts of the world, because you would never have an opportunity to use them. They are Baidu, Alibaba, and Tencent. Baidu is kind of like Google in that it’s an enormous search giant that also has many other tendrils; for example, just like Google, Baidu has a self-driving car unit, and in fact they’re now working fairly quickly on fleets of autonomous last-mile logistics vehicles. Alibaba is similar to Amazon in that it’s an enormous e-commerce platform and payments gateway, and it’s also becoming an enormous search engine, and it does many other things. Tencent is one part social network, one part gaming and esports behemoth, and it’s also in payments and has a health division. These are Chinese companies that are technically independent, but because they are domiciled in China, they must function in lockstep with Beijing. And as it happens, the president of China, Xi Jinping, is incredibly smart and gifted at governing and has been in the process of consolidating power. Artificial intelligence is one way he is doing that now, and it is part of China’s roadmap to create, I think, a new geopolitical and geo-economic world order with China at the helm. So that’s China.

In the United States, our big tech companies, because we’re a free-market economy and these are publicly traded companies, are really driven by the commercial marketplace. That’s Google, Microsoft, Amazon, Facebook, IBM, and Apple; the acronym works out to the G-MAFIA. And these companies unfortunately have to prioritize speed over safety, because they have a fiduciary responsibility to their shareholders, and their shareholders demand returns. There’s a lot of promise, and there’s a lot of money being thrown in. So this rush to commercialization is what’s driving progress and change, even if it means some negative consequences in the longer term, which means that AI is on two different developmental tracks. Huawei, Nvidia, and hundreds of other companies are doing really great, interesting, important work, but they’re doing it in a narrower way.

And at some point, it’s the big nine’s datasets, their custom silicon, their frameworks, their patents. They also have the best talent, because they have the best partnerships with universities. And in the case of China, they’ve got part of China’s sovereign wealth fund helping to fund advancements. It’s these companies that are really driving the future of artificial intelligence.

Okay, and the problem with that, you say, is that this could warp humanity. What do you see there?

I actually believe that these nine companies, or at least our part of the big nine, the six companies based in the United States, are our best hope for the future, because most business systems and services, and our Federal Government, all depend on these companies. The problem is that there’s no incentive to build AI and treat it like a public good. And so, with the exception of Facebook, I don’t think any of these companies have ill will or are intentionally making decisions that they know will harm us. I think we’re just stuck in a bad situation. So because I’m concerned about our current path, I think the best way forward is to encourage collaboration and to avoid traditional regulation, because I don’t think regulation is going to work in a case where we’ve got so many different moving pieces and parts. One of the things that I recommend in the book is a global body called GAIA, the Global Alliance on Intelligence Augmentation, which would be something brand new and would function in a similar vein to the IAEA.

I’m not conflating AI with nuclear weapons, but it would be nice to have an international body whose job it was to set global norms, standards, and guardrails and to enforce them; to perform audits; to do penetration and risk testing; to clean up the bias that we know exists in many of the biggest training datasets; and to come up with compelling reasons for these companies to collaborate and make their systems interoperable. Under the present circumstances, there’s no economic incentive, and even social pressure apparently isn’t enough to move the needle on any of this. And as we are talking today, China is going in a totally different direction, one which could prove to be an existential threat for all of us in the farther future.

What do you mean an existential threat?

So China is in the process of building, and has been for years, strong economic ties with emerging markets, and we have failed to look at China as a military, economic, and diplomatic pacing threat. As a result, China has made inroads throughout Latin America, Africa, and parts of Europe and has started to gain fresh new alliances, and to some extent, they’re friendly. However, China has been accumulating debt obligations from these countries in exchange for building roads and bridges and helping shore up and bolster some of these emerging markets.

The concern that I have is that, at some point, it wouldn’t be difficult for China to lock everybody else out of its system. China has something called the Belt and Road Initiative, which is an economic stimulus and incentive program along the old Silk Road route. They’ve been building bridges and roads, but they are also deploying 5G and telecommunications infrastructure, and there are 58 pilot countries that are part of this already.

So in the somewhat near future, China is going to have a significant amount of influence, a significant amount, with many, many countries all around the world; countries which I think could wind up locked behind a bamboo curtain at some point. You might argue that our economy is too big and we’re too powerful ever to have a real problem with any of this. However, China has scale on its side. And at some point, if they’re able to create a large enough market, with enough people and enough devices buying stuff, that cuts us out of the equation, and that could vastly change our way of life, how we do business, potentially even where we’re able to travel. On that happy note…

Well, there you go. That’s all fascinating. And of course, the book goes on to talk about social credit, the military use of these technologies, the biases that you mentioned earlier, and all of the rest. Time doesn’t permit us to get into those, but I would encourage people to check it out; the book is called The Big Nine by Amy Webb. Amy, if people want to keep up with you, what’s the best way to do that?

Yeah, so I’m all over the usual social channels, @amywebb, or futuristamywebb, or amywebbfuturist, depending on where you are. And I should also, I guess, end on this: all of my research, all of our toolkits, everything that I do with the Future Today Institute is open source. If you’re interested in this stuff, you can go to our website, futuretodayinstitute.com, and access our annual report on emerging tech trends and the tools that we use to do our foresight work. All of it is freely available, and it’s all there for you to take and use.

Well thank you so much for being on the show. If you want to pick up the conversation where we left off, we’d love to have you back. Thank you.


All righty, right on time; you’ve got one minute left to get to your next thing. Thank you for your time, Amy.

Yeah, thank you.