In this episode, Byron and Nicholas talk about AI, humanity, social credit, and information bubbles.
Guest
Nicholas Thompson is the editor in chief of WIRED magazine, a contributing editor at CBS, and a co-founder of The Atavist. He previously worked at The New Yorker and is the author of a Cold War-era biography.
Transcript
Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Nicholas Thompson. He is the editor in chief of WIRED magazine. He's also a contributing editor at CBS, which means you've probably seen him on the air talking about tech stories and trends. He also co-founded The Atavist, a digital magazine publishing platform. Prior to WIRED he was a senior editor at The New Yorker and editor of NewYorker.com. He also published a book called The Hawk and the Dove, which is about the history of the Cold War. Welcome to the show, Nicholas.
Nicholas Thompson: Thanks Byron. How you doing?
I'm doing great. So… artificial intelligence, what's that all about?
(Laughs) It's one of the most important things happening in technology right now.
So do you think it really is intelligent, or is it just faking it? What is it like from your viewpoint? Is it actually smart or not?
Oh, I think it's definitely smart. I think that artificial intelligence, if you define it as machines making independent decisions, is very smart right now and soon to get even smarter.
Well, it always sounds like I'm just doing what they call semantic gymnastics or something. But does the machine actually make a decision, or does it "decide" no more than your clock decides to advance the minute hand one minute? The computer is as deterministic as that clock. It doesn't really decide anything; it's just a giant clockwork, isn't it?
Right. I mean that gets you into about 19 layers of a really complicated discussion. I would say yes, in a way it is like a clock. But in other ways, machines are making decisions that are totally independent of the instructions or the data that were initially fed to them; they're finding patterns that humans won't see and couldn't have coded in. So in that way it becomes quite different from a clock.
I'm intrigued by that. I mean, the compass points to the north. It doesn't know which way north is; that would be giving it too much credit. But it does something that we can't do: it finds magnetic north. So is the compass intelligent, by the way you see the world?
Is the compass intelligent, by the way I see the world? Well, one of the issues here is that "artificial intelligence" uses two words that have very complicated meanings, and their definitions evolve as we learn more about artificial intelligence. Not only that, but the definition of artificial intelligence and the way it's used change constantly, both as the technology evolves and learns to do new things, and as the term develops its brand value. So back to your initial question, "Is a compass that points to the north intelligent?" It is intelligent in the sense that it's adding information to our world, but it's not doing anything independent of the person who created it, who built the tools and who imagined what it would do. You build a compass, you know that it's going to point north; you put the pieces inside of it, and you know it will do that. It's not breaking outside of the box of the initial rules that were given to it, and the premise of artificial intelligence is that it is breaking out of that box.
So I'd like to understand that a little more. Like, if I buy a Nest learning thermostat, and over time I'm like, "oh, I'm too hot, I'm too cold, I'm too cold," and it "figures it out," how is it breaking out of what it knows?
Well, what would be interesting about a Nest thermostat (I don't know the details of how a Nest thermostat works, but) is that it's looking at all the patterns of when you turn on your heat and when you don't. If you program a Nest thermostat and you say, please make the house hotter between 6 in the morning and 10 at night, that's relatively simple. If you just install a Nest thermostat and it watches you and follows your patterns and then reaches the same conclusion, it's ended up at the same output, but it's done it in a different way, which is more intelligent, right?
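A minimal sketch of that distinction, assuming a toy thermostat that does nothing more than count the hours at which its owner manually flips the heat on and off (this is purely illustrative, not how a Nest device actually works; the class and data below are hypothetical):

```python
from collections import Counter

class LearningThermostat:
    """Toy model: infer a daily heating schedule from manual adjustments."""

    def __init__(self):
        self.on_hours = Counter()   # hours of day when the owner turned heat on
        self.off_hours = Counter()  # hours of day when the owner turned heat off

    def observe(self, hour, turned_on):
        """Record a manual adjustment at a given hour of the day (0-23)."""
        (self.on_hours if turned_on else self.off_hours)[hour] += 1

    def learned_schedule(self):
        """Return the most habitual (on_hour, off_hour), or None if no data yet."""
        if not self.on_hours or not self.off_hours:
            return None
        on_hour, _ = self.on_hours.most_common(1)[0]
        off_hour, _ = self.off_hours.most_common(1)[0]
        return on_hour, off_hour

thermostat = LearningThermostat()
for _ in range(7):                          # a week of the same habit
    thermostat.observe(6, turned_on=True)   # heat on around 6 a.m.
    thermostat.observe(22, turned_on=False) # heat off around 10 p.m.
print(thermostat.learned_schedule())        # (6, 22): the schedule was never programmed in
```

The programmed version would simply hard-code the 6-to-10 window; the learned version arrives at the same output by watching behavior.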
Well, that's really the question, isn't it? The reason I dwell on these things is not to count angels dancing on the head of a pin; to me this speaks to the ultimate limit of what this technology can do. Like, if it is just a giant clockwork, then you have to come to the question, "Is that what we are? Are we just a giant clockwork?" If we're not and it is, then there are limits to what it can do. If we are and it is, or we're not and it's not, then maybe someday it can do everything we can do. Do you think that someday it can do everything we can do?
Yes. I thought this might be where you were going, and this is where it gets so interesting. That was where, in my initial answer, I was starting to head in this direction. My instinct is that we are like a giant clock: an extremely complex clock, a clock that's built on rules that we don't understand and won't understand for a long time, a clock built on rules that defy the way we normally program rules into clocks and calculators, but that essentially we are reducible to some form of math, and with infinite wisdom we could conclude that there isn't a special, spiritual, unknowable element in the box...
Let me pause right there. Let's put a pin in that word "spiritual" for a minute, but I want to draw attention to this: when I asked you if AI is just a clockwork, you said, "No, it's more than that," and when I ask you if a human is a clockwork, you say, "Yeah, I think so."
Well, that's because I was taking your definition of clock, right? So I think what you said a minute ago is really where it's at, which is: either we are clocks and the machines are clocks, or we are clocks and they're not, or they are and we're not, or neither of us is; there are four possibilities there. My instinct is that if we're going to define it that way, I'm going to define clocks in an incredibly broad sense, meaning mathematical reasoning, including mathematics we don't understand today, and I'll make the argument that both humans and the machines we're creating are clocks.
If we're thinking of clocks in a much narrower sense, which is just a set of simple instructions, input/output, then machines can go beyond that and humans can go beyond that too. But no matter how we define the clocks, I'm putting the humans and the machines in the same category. So depending on what your base definitions are, either humans and machines are both in category A or they're both not; there isn't any fundamental difference between the humans and the machines.
So you made that distinction: either that, or you have to appeal to some kind of spiritual explanation for why humans are different. But are those really the only two choices we're left with? For instance, could the clock analogy be based on a reductionist view of physics that says you can break everything down into progressively smaller pieces?
We can understand what that clock does; it's really just physics, the spring is just physics, and we can get it down to its basic components. We can understand it from top to bottom. But is it possible that humans are a quantum phenomenon, or that we are some kind of strong emergence we don't yet understand, and that we really are different from machines? Can you remain true to scientific values but still say A and B are different, that people and computers are different?
I think you can certainly make that argument. There are all kinds of brilliant people who have made that argument over time. My instinct (and this isn't something I've written about, argued about at length, or hold an opinion on that couldn't be changed by a smart person with new information in a long conversation) is that what makes up human reasoning can be recreated in machines. Possibly not right now, possibly because there are principles in how the human mind works that we don't understand, but as we continue on this journey of understanding artificial intelligence, understanding where machines can go, we will get closer to being able to match the way our minds work and the way machines work. Not match exactly, but that's one of the things that interests me about artificial intelligence.
As we try to understand and think about what the machines do, we also gain an element of self-knowledge, and it creates an opportunity, an exercise, to think through how our minds work and what it means to be a person.
Okay well I'm only going to ask two more questions along this line. Then we can move on to more practical nuts and bolts. I wrote a book where I ask my reader: “You're one of three things. Pick one: you're either a machine or you're an animal, which is kind of a machine that has a life, or you're a human which is an animal with something extra, the ‘player to be named’ kind of thing.”
I've found that of my 80 or 90 guests on this show, all but three or four say "the machine," and more than that, they're almost like, "of course." I mean, it's an implied "duh." But the interesting thing is, when I run a quiz on my website, I find 85% of people say that they're human, that they're not a machine.
And furthermore, when I was writing this book and I talked about people who think they're machines, my editor wrote in the margin, "Does anybody really think that? Come on." So why is there such a divide between people who are in AI and the whole rest of the country about this really basic question? Is it self-selection, that you have to think that to get into AI, or what? Because it's not a little thing: 85% of people say they're human, and virtually nobody on this show says that.
I would say there are two hypotheses. One is self-selection, or even natural selection, right? If you don't believe that, and if you believe that there's something fundamentally different about people than about machines, maybe you stop your work before you reach a certain point. Most of the AI experts you have on the show are at the top of their field; maybe you don't reach the top of your field if you don't have that belief. So it is self-selection.
And then another hypothesis would be that the people you have on this show have a much deeper understanding of what machines do, and therefore are willing to... it's not that they're ascribing lesser value to humans, they're just ascribing higher value to machines. But let me turn the question around and ask you: now that you've had this conversation with 80 or 90 people and written a book, how has your view on this question changed?
I'm perplexed, frankly, by the overwhelming number of people who come on the show who think that, because of my rationalistic, scientific way of looking at this. My logic goes like this: we have brains we don't understand; we don't know what a thought is, we really don't. We have minds, and a mind I'm going to define as all the stuff the brain does that seems like overreach for an organ: creativity, a sense of humor. Your liver isn't creative and doesn't have a sense of humor; neither does your stomach.
So we have these minds we don't even understand, and then we have something called consciousness: we experience the world. A computer can measure temperature, I get that, but we feel warmth; that's the idea that matter can have a first-person experience of the universe. So the chain goes: we have these brains we don't understand, we have these minds we don't understand, we have consciousness, which may be integral to intelligence, and ergo we can build all that.
See, that's the part that just floors me: everybody admits steps one, two and three of that logical chain (we don't understand the brain, we don't understand the mind, we don't understand consciousness), but then, "we can build it. I know we can build it." I think that's an article of faith in the end, and I find it perplexing, I really do. That's why I'm looking for the person who will say, "Yeah, there's a lot about us we don't understand, and maybe we can't build it, maybe it's impossible, maybe it's like going back in time. It just can't be done."
I wonder, and I'm not going to be the person to make that argument, but I wonder whether there is an interesting side argument to that, which is this: I think that with infinite knowledge and a lot of evolution and a lot of work, you could probably have a machine replicate a human mind, or experience what the human mind experiences, whatever that is. But we'll never do that, because there isn't a use case for it. So what we'll actually be doing, as AI improves, as technology moves forward, and as the relationship between humans and machines evolves, is not building a replica; we'll just be advancing machines that can do some of the things a human mind can do, but do them better.
So out of the billion parts of what a human mind does, we'll be taking the little parts and advancing them beyond human capability in specific machines, and I don't know whether, eventually, over time, we'll ever really recreate a human mind.
Well, I agree with you that 99% of all the money spent in AI is spent on what you're just talking about: let's spot spam better, let's route traffic better. But I can list people who are working on general AI. DeepMind presumably is, OpenAI presumably is, the Human Brain Project in Europe, which has billions in funding; Carnegie Mellon, maybe. Look, I think there are people for whom that would be the ultimate achievement: to build a general intelligence, like in The Hitchhiker's Guide to the Galaxy, one that'll tell us the answer to every question we could ever ask. You know, 42.
That's right. So what I wonder is whether that general intelligence, when DeepMind says, "Hey, we've done it, we can answer every question a human can," will actually experience emotion the way humans can, whether it can feel temperature the way a human can, whether it can write music, or whether there will be other parts of the human experience, of the human mind, that will always elude DeepMind or whatever that next AGI is.
Well, the interesting thing to me is our DNA. I'm going to get the numbers a little off here, but the right order of magnitude is something like 600 MB. We share half of that with the banana, and we share 99% with the chimp, and we're smart in a way maybe a chimp isn't. So if you take that 1%, that's 6 MB that gives us general intelligence; 6 MB of original source code somehow gives us that...
Whoever wrote that was good at writing code.
You know and in theory it's got a bunch of junk DNA in it right?
So really it's like 3 megs, 2 megs, 1 meg?
Well, it's funny you say that. Stephen Wolfram thinks the whole universe is built from a calculation, and the whole program that built everything was maybe 30 lines long. That would be all you would need: iterate it enough times and it gives you the kind of complexity we have. Well, thank you for that whole chat. So now let's begin.
That's wonderful.
Well, let's talk about what we know how to do. So, narrow AI: WIRED just came out with an issue dedicated to AI, so give me a synopsis. What's the state of the art right now?
Well, I mean, the state of the art is that AI has been built into... it's becoming more and more just the way computers work. So a lot of what's happening in computing is happening through AI. On my particular interest in AI, we have two things. One is a WIRED special issue on AI, which has a series of terrific essays on how AI can work: AI and common sense, AI bias, whether AI is going to be something that everybody uses or whether it will centralize power with large companies. These are all really important questions, and those essays are in the AI issue.
Then I wrote a story about AI and foreign policy. The question there is that, over time, the rise of technology is correlated (I don't know whether it's causal) with a decline in democracy, which is the opposite of what I certainly would have predicted 20 years ago, and of what most of us would have predicted 20 years ago, when it seemed like post-Cold War technology would lead to an increase in democracy and a decline in authoritarianism.
And so the question that interests me the most right now is whether AI accentuates that trend and whether in particular China's focus on AI makes it more likely that the future we have is one that on balance favors authoritarianism or democracy. So that is the question that interests me the most right now.
You mentioned China in passing. Are you thinking about the use of AI in things like social credit?
Yeah. So I'm thinking about it in a couple of ways. One is, if you look at the advances in AI in China, there does seem to be an emphasis on technology that can be used for control and surveillance. So, for example, the social credit system. We've recently seen patents and technology on lip reading in crowds, on identifying people from their gait. Obviously the image recognition technology that China uses is very effective.
And of course there are also all kinds of uses of AI aimed simply at building the economy and improving the lives of Chinese citizens. But I do think that the use of AI in China for purposes of surveillance and limiting unrest is a profoundly important thing to watch.
So talk a little bit about social credit for the listener who may not be that familiar with it or only familiar with the term?
Yes. The idea of a social credit system (it doesn't cover everybody in China yet, but it's being rolled out and it seems like it will continue to be) is this: think about your FICO score, which in the United States is a measure of your creditworthiness, and it's very useful for determining whether you can get a loan. Then imagine taking all the factors that go into your FICO score, right? How many credit card accounts have you opened? Do you pay them back? Have you defaulted? Do you have black marks? And then imagine adding every other factor you could to them. Imagine adding: do you pay your parking tickets on time? Have you gotten speeding tickets? Have you given money to charity?
And you can imagine a state setting up a value system and then grading its citizens in every one of those categories. I sometimes joke that it's a FICO score adjusted for your political beliefs. And not only that (this is the part that's most interesting to some and most troubling to others): your score gets adjusted by the people you are connected to. So imagine a FICO score adjusted for your politics and then adjusted for the scores of all the people you're friends with on Facebook or that you follow on Twitter, so that you have all kinds of incentives to be even better.
So in China a score like this is, to some degree, being set up for every citizen (obviously not covering everything; there's lots of data the government can't get), and it will have an effect on how a person moves through society. And there are all kinds of good things that can come of this: you can have a more efficient economy, you can have a benevolent government, a good set of incentives for people to treat each other properly. There are all kinds of potential benefits, but the disadvantage, of course, is that you increase the capacity for state surveillance and control.
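As a purely hypothetical illustration of the network-adjustment idea described above (this is not a description of any real scoring system; the weights and numbers are invented), a toy score that blends a person's base score with the average score of their connections might look like this:

```python
def adjusted_scores(base_scores, connections, weight=0.3):
    """Toy model: blend each person's base score with the average score
    of the people they are connected to."""
    adjusted = {}
    for person, base in base_scores.items():
        friends = connections.get(person, [])
        if friends:
            friend_avg = sum(base_scores[f] for f in friends) / len(friends)
        else:
            friend_avg = base  # no connections: score unchanged
        adjusted[person] = round((1 - weight) * base + weight * friend_avg)
    return adjusted

scores = {"alice": 720, "bob": 540, "carol": 680}
graph = {"alice": ["bob"], "bob": ["alice", "carol"], "carol": ["bob"]}
print(adjusted_scores(scores, graph))
# {'alice': 666, 'bob': 588, 'carol': 638}: alice and carol are pulled down by bob
```

The point of the sketch is the incentive structure it creates: your number moves with the numbers of the people you associate with.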
Give some examples of the uses. What would a bad score mean to me in practical life?
A bad score could mean that you can't rent a bicycle. For example, if you have a bad score, we can make it hard for you to rent a bicycle, or we can make it harder for you to go through a checkout line, or, I don't know, the taxes you pay can increase, right? You can imagine all kinds of levers a government could have over its citizens in the way it uses a bad score. Maybe when you and I both commit the same crime and you have a better score than I do, I am sentenced for longer, or my bail is set higher.
Is your concern broadly that that is going to happen in parts of the world? Are you concerned that some semblance of that could happen in the US where we're broadcasting from?
My concern is about that system as it develops; we're still in the early days in China. It's not my only concern, but one concern I have is that, yes, China will export the system. We've recently seen that Venezuela has purchased Chinese technology and brought in Chinese engineers to help construct a national identity card that will have elements of this system built into it. And so you can imagine China effectively exporting the system to help authoritarian governments maintain control of their citizens.
We've also seen China, which has developed very strong surveillance technology, selling it to Zimbabwe, as an example: a country where the government would like to be able to better surveil its citizens, so it has purchased Chinese technology to do it. So one concern I have is that this technology will be shaped in China in ways that differ from the values I have, and it will then be exported. There's a really interesting question embedded in this, which is: how confident am I, or how confident is any critic, that China exporting the system will make the world worse, however you want to broadly define that, given how much prosperity, wealth and happiness China has created in its own country over the last several decades? You can get pretty deep into that subset of this particular debate.
What is your takeaway from all of that? Do you just say, "Let's keep an eye on this," or "Let's make sure we pass legislation here," or "Let's make sure that the behavior it's shaping is consistent with our values"? Are you urging people to do anything?
Yeah, okay. So I would urge a couple of things. One, I would very strongly urge people to think about this issue, debate this issue, study this issue, hold conferences about this issue, discuss this issue. I think there's not enough thought in the world right now being given to: is technology leading to authoritarianism? What kind of technology can better counter this trend? To what degree do we care about this trend?
I think, in general, the magnitude of the importance of this question, to whether we have a functioning, prosperous, free society in 25, 30, 50 years, is underestimated. So I very strongly, completely, confidently argue that we should discuss this more, but that's easy. So the next question is: okay, what do I recommend for the technology companies? To the technology companies, particularly American technology companies, I would argue: you need to be thinking very carefully about whether your technology has led to declines in democracy and whether you can counter that.
For example, to the social media platforms: are there ways you have constructed your algorithms that are breaking democratic societies down, pushing us into different filter bubbles that make our system function less well? Are there ways that your company functions that have led to increased income inequality, which I think has also contributed to the breakdown in democracy? To them, I could probably give some specific recommendations.
And then the third level would be: okay, so what about the U.S. government, or other liberal democratic governments? There I would say, well, I think you should be pushing AI as a national priority, as China is doing. I would recommend opening data sets to help researchers. I would recommend figuring out ways to help society cope with artificial intelligence. I would think about how it will be integrated into the military.
I think there is a whole series of things that liberal democracies should be doing with technology to help make sure that, as we go through this sort of Cambrian explosion of technologies, they're being built and designed in ways that are good for liberal democratic values, good for our economy, good for maintaining social cohesion in these countries. Then there's a last policy element, and this one is specific, important, and something to deal with right now: we are entering a moment where the U.S. and China are being driven further apart. There may be a temporary relaxation as we're recording this, but in general it seems like the US tech sector and the Chinese tech sector are diverging, and my fear is that this divergence makes the problem I'm describing worse.
You can make a counterargument to that: it's only by holding a tough line on China that you'll be able to change the arc of Chinese technology. But my general view is that the best thing that could happen right now would be more integration of the U.S. technology sector and the Chinese technology sector: the two countries building their tech stacks closer and closer together, sharing research, allowing people to go back and forth, allowing the big U.S. technology companies to do more work within China, within the limits that they set up based on their own values. So the fourth level would be: I do think there should be more integration between the U.S. and Chinese technology leaders.
You fundamentally believe that liberal democracies are more successful over time: financially, in the happiness of their people, in people's desire to live there. Would they inherently be more successful as countries?
Yeah, I'll say two things to that. Number one, the evidence certainly points to that. If you look over the last couple of hundred years, authoritarianism doesn't work, and the more democratic you are, the more it tends to lead to economic prosperity. It tends to lead to... however you measure human happiness, whether you have more longevity, all the things you can throw in the bucket of human happiness, it does seem that liberal democracy pushes in that direction. So yes, I do think as a general principle that's true.
But if you look at the last little period of time, that's not necessarily as true. It seems possible that in Asia, particularly in China but in other countries in Asia as well (you could look at Singapore, for example), authoritarianism in some form also leads to faster-moving economies and, in fact, perhaps by some measures, to the support of the people in those countries.
Sometimes, when I have conversations about this topic, people say, "Well, you don't really need to worry about that; the U.S. system will win out, right?" It will win out because this is where all the smart people want to go, and not only that, it may be that China's economy is growing much faster right now, but that's a blip, and because of the profound inefficiencies of a centralized economy where the state picks the winners rather than the market, over time the U.S. model will win out and the best technology will be developed here; you don't really need to worry. Embedded in your question is one of the most interesting counterarguments to my particular concern. But back to your question: I do think this is a moment to be worried about it.
Well, you're undoubtedly right that the technologies we're building to try to find cures for cancer are the same technologies that look for patterns in people's behavior. So it's like the technology will be built, and the question is simply what we will use it for, right?
Yeah. One of the things I think is that in 15 years, smart historians who study the way society is organized and the way society functions will look back at 2016 to 2021 and say, "There were some really interesting choices and decisions made about the way AI developed and the way the machines were set up that profoundly affected the world we live in in 2030": choices that made it so that the most powerful systems in the world are tools of surveillance, or so that the most powerful computers in the world are tools for creating economic opportunity.
We don't know exactly what the choices are and exactly how to guide them, but yes I absolutely think that the premise of your question is correct: that the technology is advancing and that it can be used for all kinds of things and that certain things we do right now will determine the degree to which it is used for curing cancer, for surveillance, for creating economic opportunity and so on.
Think about how easy it would be, in the US, for the government to already do things like this: I drive through a toll road and it takes pictures of my license plate, like, every day, right? And that system could be looking to see if I'm a deadbeat dad, or whether I have any outstanding warrants, or any number of things, and it doesn't. Do you think that in this country we have... you never want to say "it couldn't happen here," but is it unlikely to happen here, do you think?
It's less likely here than elsewhere, and it also depends on what you define as "it." I wouldn't be so certain that the camera on the street scanning your license plate isn't putting it into a database that then checks whether you're a deadbeat dad, or that it won't be doing that in a little while. This is a very interesting moment in surveillance in the United States, because the capacity for surveillance is increasing exponentially, every year.
The number of places where we are photographed, the number of places where we are recorded, the capacity of the machines to look at those recordings and images and parse information from them, the amount of data collected on us: all of it is increasing. So the capacity for surveillance is massively increasing, but at the same time U.S. awareness of this is also increasing, and the commitment by some of the companies that play a really important role in this stack to protecting us from this kind of surveillance is increasing.
Apple is a very good example. The company believes deeply in making sure that, through the way it develops its products, it limits surveillance, and it does this both because there's a marketing advantage in doing so (the main competitor to the Apple phone is the Android phone, and this is a good way to market a difference between them; it lets you sort of slag Android), and also because of a philosophical belief held by the people at the very top of the company that if you are being surveilled, if you are being watched, you are not able to think your most independent thoughts, you are not able to be your most creative, you are not able to be your true self.
And so they play a very important role in limiting that surveillance, and that's why you see the security on Apple products increasing, the encryption of data increasing, and the resistance of the company to allowing law enforcement, even with warrants, to have an effective backdoor into its products.
And so you have this moment in surveillance in the United States where there is both much more surveillance and there are also ways to ‘opt out’ of it. To the original question: will we end up at a moment where the AI combined with surveillance technology leads to more and more government control of citizens? We could head in that direction.
We certainly won't go there as quickly as China has, or the way Venezuela appears to be heading, and we won't because of our core libertarian values, because of the beliefs of lots of people in power, and because of our technology companies. But there's no question that what you just asked is one of the most important things to watch in tech over the next couple of years. And it's also not an easy thing, right? Imagine a perfectly benevolent government acting with full knowledge of its citizens, in the national interest, with no possibility of abuse. In that hypothetical scenario, which could never exist, you would want it to have more powers of surveillance, right? You would want it to be able to track terrorists. You would want it to identify people before they go in and shoot up a movie theater.
The question is whether you trust the government enough to allow it to have all that information and all that capability, or where you put limits, so that the capacity of law enforcement and of government to help make society function is maximized while the limiting of individual freedom is minimized. So that's the debate and some of the tradeoffs embedded in it.
It's interesting. Orwell wrote an essay about weapons and totalitarianism. He said that when the government and the people have the same weapons, you have freedom, and when the government has vastly better weapons, you don't anymore. It sounds like you're applying a variant of that to technology?
Yeah. I think there's a separate second variable, which is that I trust certain governments more than other governments. So the balance between the technology the government controls and the technology the citizens control is one factor to weigh, and the other factor is the interests of the state and how much you can trust those interests to align with the interests of the people.
Right. But our privacy has always rested on the simple fact that there are too many of us to watch, right? You can't listen to every phone conversation and follow every person. With AI, suddenly you can, our government can, and that does fundamentally shift the power. You mentioned in passing the military use of AI. I assume your position is against that, but if it is, make the argument, please.
No, quite the contrary: my position is in favor of it. My position is that if I were the Secretary of Defense of the United States, I would be trying to integrate AI as effectively as possible into my weapon systems, right? The United States military uses AI in all sorts of ways. No one has perfect insight into it, but it seems like it's been used initially for maintenance: using systems to track wear and tear and figure out what parts of which ships or airplanes need to be repaired. That's a relatively simple use. You can imagine other uses they're working on, such as image recognition for drone targeting. You can imagine war planning. You can imagine AI-powered fighter jets. There are all kinds of uses of AI in the military, and my position as a United States citizen is that the military should absolutely be working on this as much as possible.
You get into hairy questions when you go to the next level, which is: should machines be able to make targeting decisions? Here's an interesting hypothetical, and I don't know the framework for it: if a machine had better targeting capability and better image recognition than humans, should it be able to make a kill decision? Should a drone that sees Mullah Omar's truck, and believes it to a certain confidence level, be allowed to fire the missiles, or would a human have to weigh in on that? My instinct, of course, for a kill decision or a violence decision is that you certainly need a human in there.
But then let's take the alternative. Let's say that AI is much, much better at image recognition and makes much quicker decisions. What about missile defense? Should you allow an AI missile defense system to shoot down a warhead coming at this country? In that case, if it's the more efficient system (we're not there yet), I would certainly say yes. So that gets into a very interesting debate: once we have these AI capabilities, which we don't yet, how should we allow them to be deployed, and where should we limit them?
I tend to agree. We have, in a sense, had AI weapon systems for a hundred years, like landmines, right? The programming is very simple: if the weight is greater than 50, explode. Simple. And if somebody said, "Hey, we're going to make a better one that has a camera and can tell if somebody's wearing a uniform, and only then will it blow up," and then one that can sniff for gunpowder and make sure they're carrying a weapon, and then and only then will it explode, you would say yes all the way up that tree, right? And right now we drop bombs and they blow up everything, and if somebody said, "Well, we can have an AI bomb that blows up fewer of the things we don't want to blow up," you'd say yes all the way up that tree.
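The escalating-landmine thought experiment boils down to adding conditions to a trigger rule. A minimal sketch of that progression, with entirely hypothetical sensors and thresholds:

```python
def simple_mine(weight_kg):
    """The hundred-year-old version: one rule, no discrimination."""
    return weight_kg > 50

def discriminating_mine(weight_kg, wearing_uniform, carrying_weapon):
    """Hypothetical escalation: each added check narrows when the device fires."""
    return weight_kg > 50 and wearing_uniform and carrying_weapon

print(simple_mine(60))  # True: triggers on anyone heavy enough
print(discriminating_mine(60, wearing_uniform=False, carrying_weapon=False))  # False: holds fire
```

Each added condition can only shrink the set of cases in which the device fires, which is the sense in which saying "yes all the way up that tree" reduces unintended harm.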
So my question is, why in Silicon Valley is there such a reaction, all these stories of companies that don't want their technology used by the military? They know all that same logic. They know 14 other countries are working on AI-enabled weapons. What do you think their reasoning is? How do they counter what I just said about the landmine?
The specific example where this debate came out most fully is Google and Project Maven, which is image recognition technology. I'm not exactly sure what it's going to be used for; I don't know whether it's been publicly announced, I just haven't followed closely enough. But let's assume that the military wanted to use Google technology to make drones able to more accurately identify targets. Many Google engineers said, "We don't want that. We don't want our technology being used for weapons of war." The counterargument would be: yes, it's being used for weapons of war, but for weapons of war that are likely to reduce civilian casualties. If you have a drone that is more accurately able to identify a terrorist target, you are able to fire on him when he's not at a wedding, right?
That would be the counterargument: there's less collateral damage. And then of course you have the counterargument to that, which would be: okay, well, once you make a weapon that has less collateral damage, you have reduced the moral barriers to its use, and therefore you lead to more use of it. And you can continue this back and forth for a while.
Your question is: why did so many Google engineers not want to do this? I would say, well, Google management probably did want to do it. A lot of people at the company probably did, both for business reasons and because of a sense that it would reduce collateral damage, and out of just a general sense of nationalism: as a United States citizen, you can make an argument that you want the US military to be stronger. That would, in general, probably be the side I would take, depending on what exactly the facts were.
But I think the argument that most of the engineers who threatened to walk out made was just: "We understand that there's a military, we understand the United States has one. We don't have a way of opting out of that; we're not going to not pay our taxes and go to jail. But we don't want to play any role in it." I can respect that decision, though I think if I were a Google engineer, I would not have made the same one.
So I'm an optimist. I really think that in the end AI is the technology that makes people effectively smarter. It allows us to learn from the past, because it records data about the past, analyzes it, and projects into the future. And I think it's inherently good to make people smarter. (If you don't believe that, you kind of have to argue for ignorance: "if people were less smart, we'd all be better off." And I don't believe that.)
What we've been doing in this conversation is exploring some of the tradeoffs. How do you net it all out, though, when you think about AI and you think about the risk to civil liberty, the lowering of the political cost of armed conflict, and the numbing of... maybe a washing of one's hands of the decision to kill other people? We've talked about all this. When you weigh it against all the good things this technology is going to do, do you still come away positive about the future, or do you think it can't be known what's going to happen, or are you more concerned than not? How do you net it all out?
Yeah, I net it all out as positive, right? As a general principle, I feel like, however we measure human happiness, human success, our ability to live on the planet, AI in general will be a good thing. Obviously, it will cause lots of economic disruption. It will lead to all kinds of complicated moral issues. It will change geopolitics. But in general, I feel like one should be optimistic about what it will do. I also think the effect that AI will have on society can be much better if it develops in certain ways.
One of the things I think about, and this is something you probably have a view on, is whether AI is ultimately a centralizing technology or a decentralizing technology. Will AI be something that makes it easier for me to start a company, develop an interesting company, hire a lot of people and grow the economy, or something that helps nonprofits and civil society and small organizations? Or is it going to be a centralizing technology, something that will just accrue more power to large corporations, powerful governments, and institutions that have massive compute power and access to internet data, and that has sort of centralizing network effects? As the AI gets better, it allows the large central organizations to collect more data, which allows them to get even better AI. And that is something to worry about.
Okay, let me turn the question around slightly: which column do you put the Internet in?
For a long time, a decentralizing force, right? It certainly reduces the startup costs of creating new businesses, of communicating, of reaching new customers. And yet one of the most frustrating things about the way the internet has developed is the way it has entrenched monopolies and aggregated power in a relatively small number of companies. One of the interesting things is that among companies that mostly deal with bits you have a lot of competition problems: there is no second search engine, and Facebook controls all the growing social networks, while among the companies that deal with atoms there's lots of competition. Theoretically you might have thought that, because startup costs are so low in the world of bits, you'd have more competition and more disruption, not the other way around. So, net-net, is the Internet centralizing or decentralizing? I would say probably net-net it's been decentralizing, but there are very unnerving centralized elements of it.
You know, I'm not sure I agree with that. Well, I mean, obviously only in degree, but if I think back to the '70s, you had your choice of three channels, and the news that you watched at 5 o'clock on one of those three channels, that was the world. You had one or two local newspapers, and there was no way for any dissenting view to get out of that. There were people who xeroxed manifestos and mailed them around, and you might read some libertarian whatever, but that was really it. Now that doesn't exist anymore, right?
I work in media. I know that so well. That was a huge centralizing force in media and information, 100%.
And so, I think that dwarfs everything else...
So you're saying that the decentralizing elements of media are so substantial that they outweigh the centralizing network effects of search, of social media, etcetera?
People get worried about this whole "everybody's in their information bubble." You can look at that two ways: you can either say, "yeah," or you can say, "no, we were all in one big bubble before." It was all the same bubble, but that didn't make it any more or less true. Now people can migrate to a bubble of their choice, one that sounds more plausible and credible to them, and I think it's hard to paint that as a bad thing.
You've disagreed with me in degree and slightly turned it to another thing, where I'll slightly disagree in degree: if you're talking about the information economy, there's no question that there's been a massive broadening; it's much easier to access information. But I would also say that one of the failures of the internet has been that it has not done nearly a good enough job at this decentralization. Things can be going in a direction that is good in one way, but there can also be a massive failure in the other direction, right? The fact that filter bubbles are such a fact of life, in a world where people have access to all of the world's information with a simple search, is crazy, and it's a systems and design failure, I think.
I have this mantra that I try not to read anything I agree with. I'm serious: I consider that a waste of my time. If I read it, I'd just be like, "Yep, that's exactly what I thought." And I have no trouble finding a world of stuff I disagree with. So in the end it's people's choice more than the technology, right?
Well, it's a guided choice, right? Technology, based on its defaults and the way the algorithms work, guides you one way or the other, and without question the principal algorithmic guides to the internet are doing the opposite of what you are doing with your own free choice.
Do you think the anonymous internet is inherently a bad idea? Back in the day, the idea was that you were always dealing with a person and you would know their name, and behavior is a lot different when there is no anonymity. Or do you think the anonymity of the Internet is essential?
Both. I think that there has to be a place for anonymity. There has to be a place for exploring identities that you're not comfortable exploring in real life. But one way I like to look at this question is: if you take all the social platforms and you rank them by how much anonymity is allowed, you'll find that the more anonymity is allowed, the worse the conversation is, in general.
LinkedIn is basically not at all anonymous, because not only do you have to use your real name, you tie it to your professional identity and to all the people who are tied to you. It's very hard to have a LinkedIn account with any real network that isn't a genuine person tied to a genuine account. And the conversations on LinkedIn are pretty civil, pretty informative, pretty great.
Next in the queue is Facebook, which has always had... or maybe it's not Facebook, maybe it's Instagram. But as you go down toward maximum anonymity, which would be maybe Twitter or Reddit, where it's incredibly easy to just create a new account and where there are certain advantages to not having an honest account, you'll see that the percentage of content that is hate speech or violent or cruel goes up considerably.
So I don't think there's any doubt that allowing lots of anonymity allows a lot of these problems to fester. Of course there are tradeoffs, right? The more anonymity you allow, the more free speech you allow, and free speech is one of the great principles of this country and we should support it. But I think, in general, the less anonymity you have, the higher the quality of the conversation.
Well, I see we're coming up on our hour here, so...
Can we do another four hours? I really like talking to you.
Yeah it's a lot of fun. Would you like to come back for an encore? You could be my first return guest.
I’ll do an encore. This is fun. We got into some pretty heavy stuff.
All right. Well, how can people follow you?
I'm on all kinds of social platforms. I'm Nicholas Thompson, I'm on LinkedIn, and then most importantly they should read WIRED and subscribe to www.WIRED.com.
Thanks a bunch. And we'll talk to you again.
Thanks so much. That's really fun. Catch you later. Bye.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.