In this episode, Byron and Tim discuss autonomous vehicles, capitalism, the Internet, and the economy.
Guest
Tim O'Reilly is the founder of O'Reilly Media. He popularized the terms open source and Web 2.0.
Transcript
Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today our guest is Tim O'Reilly. He is, of course, the founder and CEO of O'Reilly Media, Inc. In addition to his role at O'Reilly, he is a partner at an early-stage venture firm, O'Reilly AlphaTech Ventures, and he is on the board of Maker Media, which was spun out from O'Reilly back in 2012. He's on the board of Code for America, PeerJ, Civis Analytics, and POPVOX. He is the person who popularized the terms “open source” and “web 2.0.” He holds an undergraduate degree from Harvard in the classics. Welcome to the show, Tim.
Tim O'Reilly: Hi, thanks very much, I'm glad to be on it. I should add one other thing to my bio, which is that I'm also the author of a forthcoming book about technology and the economy, called WTF: What's the Future and Why It's Up to Us, which in a lot of ways is a memoir of what I've learned from studying computer platforms over the last 30 years, and reflections on the lessons of technology platforms for the broader economy, and the choices that we have to make as a society.
Well I'll start there. What is the future then? If you know, I want to know that right away.
Well, the point is not that there is one future. There are many possible futures, and we actually have a great role in choosing among them. There's a very scary narrative in which technology is seen as an inevitability. For example, “technology wants to eliminate jobs, that's what it's for.” And I go through, for example, looking at the algorithms at Google, at Facebook, and the like, and say, “Okay, what you really learn when you study them is that all of these algorithms have a fitness function that they're being managed towards,” and this doesn't actually change in the world of AI. AI is simply a set of new techniques that are still trying to work towards human goals. The thing we have to be afraid of is not AI becoming independent and going after its own goals. It's what I refer to as “the Mickey and the broomsticks problem”: we're creating these machines, we're turning them loose, and we're telling them to do the wrong things. They do exactly what we tell them to do, but we haven't thought through the consequences, and a lot of what's happening in the world today is the result of bad instructions to the machines that we have built.
In a lot of ways, our financial markets are a lot like Google and Facebook: they are increasingly automated, but they also have a fitness function. If you look at Google, their fitness function on both the search and the advertising side is relevance. If you look at Facebook, loosely it could be described as engagement. For the last 40 years, we have increasingly been managing our economy around “make money for the stock market,” and we've seen, as a result, the hollowing out of the economy. And to apply this very concretely to AI, I'll bring up a conversation I had with an AI pioneer recently, where he told me he was investing in a company that would, by his estimate, get rid of 30% of call center jobs. And I said, “Have you used a call center? Were you happy with the service? Why are you talking about using AI to get rid of these jobs, rather than to make the service better?”
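To make the “fitness function” idea concrete, here is a minimal sketch, in Python, of how the same ranking machinery produces very different feeds depending on the metric it is told to optimize. The posts and their scores are invented purely for illustration:

```python
# A minimal sketch of the "fitness function" idea: identical ranking code,
# different instructions to the machine. All post data here is hypothetical.

def rank(items, fitness):
    """Order candidate items by whatever fitness function is plugged in."""
    return sorted(items, key=fitness, reverse=True)

posts = [
    {"title": "In-depth explainer", "relevance": 0.90, "engagement": 0.30},
    {"title": "Outrage bait",       "relevance": 0.20, "engagement": 0.95},
]

# Optimizing for relevance surfaces the explainer...
print([p["title"] for p in rank(posts, lambda p: p["relevance"])])
# ...while optimizing for engagement surfaces the outrage bait.
print([p["title"] for p in rank(posts, lambda p: p["engagement"])])
```

The code is the same either way; only the fitness function, the instruction we give the machine, changes the outcome.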
You know, I wrote a piece—actually I wrote it after the book, so it's not in the book—[that’s] an analysis of Amazon. In the same three years in which they added 45,000 robots to their warehouses, they added hundreds of thousands of human workers. The reason is that they're saying, “Oh, our master design pattern isn't ‘cut costs and reap greater profits,’ it's ‘keep upping the ante, keep doing more.’” I actually started off the article by talking about my broken tea kettle and how I got a new one the same day, so I could have my tea the next morning with no interruption. It used to be that Amazon would give you free 2-day shipping, then it was free 1-day shipping, and now in many cases it's free same-day shipping. This is why they have this incredible, fanatical customer focus, and they're using the technology to actually do more. My case has been that if we shift the fitness function from efficiency and shareholder value through driving increased profits to instead actually creating value in society—which is something that we can quite easily do—we're going to have a very different economy and a very, very different political conversation than we're having right now.
You used the phrase a minute ago, that the technology we've created has “hollowed out the economy.” What does that mean?
Well, one of the things that I look at in the book is my history of exposure to various kinds of technology platforms. So, for example, look at the platform wars of my youth—which are now ancient history for many people—the battle between Microsoft and the rest of the world, and the rise of the PC. Microsoft was originally this huge value creator, and the personal computer was this hugely explosive ecosystem; it was the internet of its day, much smaller in scale obviously. But [it was] the democratization of computing, where you went from thousands of computers being sold to millions of computers being sold, and there was this huge ecosystem of software companies. Microsoft became the platform over time, first with DOS and then with Windows, took control of the industry, and proceeded to squeeze everybody else out. What happened was, everybody went away to the internet, where there was still money to be made, and we built this next generation of platforms that we see today.
Now we're watching the same story be replayed, as these internet platforms increasingly compete with their ecosystems and take more and more of the value for themselves. We see this in the broader economy as well: think of finance as a platform technology for society as a whole, and you now have an industry that makes up 3% of the economy taking 25% of all corporate profits. So, platform after platform makes this mistake of basically killing the goose that was laying the golden eggs. Whereas what you really want to see in a platform is consistent value creation for the ecosystem of the platform. In the book, I try to draw this lesson for why, for example, Uber and Lyft have to keep thinking about the drivers, not just the customers. We've had this idea that as long as you take care of the customers, you're good. But you have to take care of your entire ecosystem, and I think that understanding that, and really looking at the lessons of technology platforms, actually gives us a fair amount of guidance for how to think about the economy as a whole.
So, I'm trying to follow this. Are you advocating policy or are you advocating just a shift in how we view things?
Both. On the one hand, I'm advocating self-interest on the part of platform providers and changing the narrative about technology. And when I say changing the narrative, I don't mean “PR spin” changing the narrative. I mean changing the narrative around what people understand as the levers of advantage. So, for example, when I worked on open source, it was: you used to think that the levers of competitive advantage came from proprietary software and hanging onto it. I have news for you, the levers of competitive advantage have changed. Software is becoming a commodity [and] people are giving it away for free. Originally I was just saying, “something else is going to become valuable.” Eventually, I was able to say, “and now I know what it is, it's big data and collective intelligence”—that was the web 2.0 storyline. So changing the narrative, in that case, was saying, “This is how you do it right.” And I think that's what I'm trying to do again. We've had this narrative that said, “use technology to disrupt, use technology to do things more efficiently than the competition, effectively blow them out of the water, and then take over the market.” [But] that doesn't actually take you where you need to go.
I try to tell a story which I have centered, in a lot of ways, around Uber and Lyft as model companies. I look at what I call the “next economy companies.” They are platforms—but they're real-world platforms, not just software platforms. Things like Uber and Airbnb are managing real-world services, access to devices, and so on. And if you tease that apart into a business model, you say, “Oh, it's a matching marketplace.” What's really interesting about on-demand is that it's a marketplace of customers and a marketplace of drivers as suppliers, and they're kept algorithmically in balance. So you have this really interesting algorithmic system, which eventually could be AI, [but] right now it's a bunch of smart statistics, routing algorithms, and logistics math. The fundamental, really interesting thing is that the Uber or Lyft driver is what I'd call an augmented worker, cognitively augmented. People usually wouldn't think of it as cognitive augmentation, but of course it is, in the same way that a bulldozer operator is a physically augmented ditch digger, or the crane operator in a port, loading containers, is a physically augmented longshoreman. Here's somebody who’s doing something that was previously impossible. Before, with a taxi cab, you had to stand on the street, and they'd come by, or they wouldn't come by, and you were out of luck unless that happened, or you could call and maybe they could dispatch someone. But [then] there was this magic where we now have this new capability of GPS, and it's been around for quite a while, but eventually, through a series of circumstances, Travis Kalanick and Garrett Camp figured out that “Wow, we could actually summon the car to any arbitrary location, because both the driver and the passenger know where they are. They [both] have this new sense, right?” So that's the first level of cognitive augmentation.
Then Sidecar and Lyft figured out the other piece of the equation, because Uber was just black cars. They figured out that in order to have enough drivers to really fill out the marketplace—to serve more than a small segment of well-off people—you'd get ordinary people to supply their own cars. And you could do that because those drivers are cognitively augmented. It used to be that you had to be a professional driver, because [when] somebody says, “I want to go to such and such an address,” you'd need to really know the city. You [would] need to have a lot of experience to know the best routes. Well, guess what, with apps [like] Google Maps and Waze, anybody can do it. So I started looking at that and [saw that] we have a marketplace of small businesses managed by algorithms to help them match up with customers. The job of the platform is to augment those businesses to help them be successful.
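The “kept algorithmically in balance” idea can be sketched as a simple feedback loop: when waiting riders outnumber available drivers, a surge-style price multiplier rises to pull more drivers onto the road, and it falls again when supply outruns demand. This toy version, with invented step sizes, bounds, and scenario data, only gestures at what is, as Tim says, really a bunch of smart statistics, routing, and logistics math:

```python
# A toy sketch of a supply/demand feedback loop for a matching marketplace.
# Step sizes, bounds, and the scenario data are all hypothetical.

def update_multiplier(multiplier, riders_waiting, drivers_available):
    """Nudge a surge-style price multiplier toward supply/demand balance."""
    if riders_waiting > drivers_available:
        multiplier *= 1.10   # scarce supply: raise price to attract drivers
    elif drivers_available > riders_waiting:
        multiplier *= 0.95   # surplus supply: lower price to attract riders
    return max(1.0, min(multiplier, 5.0))  # keep the multiplier in sane bounds

m = 1.0
for riders, drivers in [(120, 60), (110, 80), (90, 95), (70, 100)]:
    m = update_multiplier(m, riders, drivers)
    print(f"riders={riders:3d} drivers={drivers:3d} -> multiplier={m:.2f}")
```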
Then I said, “Okay, how well are Uber and Lyft living up to this?” They’re treating their drivers as commodities. They start talking about, “Well, once we have self-driving cars, we'll be able to get rid of the drivers.” Yeah, maybe—and they've started to change their tune a little bit—but I said, “Look, Uber, you actually had a much more interesting narrative when you had a day when you delivered flu shots.” Because the general law—which I originally explored with Clay Christensen back in 2004, when he called it “the law of conservation of attractive profits,” and that's what helped me get from open source to web 2.0—[is that] when one thing becomes a commodity, something else becomes valuable. So if self-driving cars commoditize driving, you have to ask yourself what becomes valuable. And I think it's going to be new kinds of augmentation for humans, new kinds of services that you'll put on top of driving. Both John Zimmer and Logan Green over at Lyft, and some people at Uber, are starting to talk about this and understand it.
I was in San Quentin giving a talk to members of this group called “The Last Mile,” where they basically train inmates in computer science, and this guy [said], “I have an idea for a startup I want to do when I get out.” He [said], “I used to work in Fisherman's Wharf, and all these people taking Uber and Lyft can get anywhere, but they don't know what to go see. I think it'd be really great to have a tour guide service.” And you go, “Yeah, a tour guide service on top of Uber and Lyft.” That's just a simple example, but this guy's an inmate who came up with it. If you understand these dynamics—that we're going to have a marketplace and you want to enable new kinds of services on your platform—you're going to think very differently than if you think, “Well, this is an extractive platform, where basically I'm going to take all the value for the platform.” Because what's going to happen ultimately, if you do that, is someone else is going to come along and create value for the entire ecosystem.
That's fascinating, so what's the name of the book again?
It's called WTF: What's the Future and Why It's Up to Us.
And it's already out?
It's coming out at the beginning of October, from Harper Business. You can already order it on Amazon.
You know, the pause in my voice is, I'm thinking about nine different ways to take all that and run with it, and I was just checking for available domain names for the tour guide thing. So let's talk about artificial intelligence, and we'll get back to this topic, I think, through this portal that we're about to start going down. First question, define either artificial intelligence or intelligence, take your choice.
That's a tough one because I don't know that we really understand intelligence well enough. If we could define it, we'd be a lot better off, but I tend to think of it as the ability to integrate new unknown information and come up with an appropriate response that was not previously available by rote.
And so what would be something that we have today that would qualify or come close to qualifying as that in your mind?
You mean, in terms of machines?
Yes.
Nothing.
And why do you think that?
I'm with Gary Marcus on this. He has talked about how the frontier of AI right now is deep learning, and it's great, but you still have to train it by showing it a gazillion examples of something; after you show it a gazillion examples, it can figure stuff out. That's great, but it can't figure that out without being exposed to those examples. So we're a long way from just flicking the switch and having a machine take in its experience of the world and come to conclusions about it. I mean, we've got labeled training sets for very narrow domains, and yes, there [have been] some amazing breakthroughs. But there's the famous line from Yann LeCun, who's the head of Facebook AI Research: “Look, the goal is unsupervised learning. If unsupervised learning is the cake, we know how to make the icing, we know how to make the cherry, but we have no idea how to make the cake.” So I think AI is amazing, just like big data was amazing before it. There are remarkable things happening in the field, but they can't be oversold.
Even if we look at autonomous vehicles, what they're really good at is doing something that's well known, better than any human can do it, and that includes recognizing and responding to certain kinds of known hazards. But they're still very limited in dealing with completely new situations. Now, there are certainly very powerful advantages; for example, one of the reasons why self-driving cars can get better is that you have one accident, and every machine can learn from it. But the biggest problem I see in AI is that it is still very limited and we tell it what to do. The problem is that we tell it, very often, to do the wrong things, and that's where the real risks come in. I think Elon Musk took a little bit too much heat. He's sort of mixing things when he uses these apocalyptic metaphors about rogue AI. When you press him, he says, “Well, it's like some AI doing stock market trading at the behest of a human trader that starts a war in order to pump up those stocks.” That's totally credible, but that's not a problem with the AI, that's a problem with the human direction of the AI.
We're so far from the problem of the AI waking up and saying, “Oh, I think I'm going to go start a war.” Or look at what is a real risk: we have autonomous weapons—say, weaponized drones—that are allowed to make their own decisions about who to shoot, and there are some real risks there, but we built these things; we told them that it was okay to do this. I don't think we're in a place where, Terminator-style, the machine one day wakes up and decides, “I'm going to get rid of the humans.” That's not the fear. The fear is we're creating new technologies that give more power to people who are going to use it unwisely.
Very often we're setting things up, and we don't understand the consequences yet. I mentioned the Mickey and the broomsticks problem. We don't quite know what we're asking for, and I see a lot of the problems in our economy falling into that category. We said, “We have this theory: make money for shareholders, and it'll be great for the economy.” So globalization went unchecked. Technological unemployment went unchecked. Then we suddenly wake up, and we have this populist revolt, because there's a whole lot of people left behind who said, “What about us? You've been managing the economy for the benefit of wealthy people, and we're not so happy now.” There were some untoward effects, and so, again, my goal here with thinking about AI is: how do we make better choices in what we tell it to do? As I say in the book, we're building a race of djinns, the spirits of Arabian mythology who do exactly what you tell them, and if you don't phrase it quite right, they can really screw their masters. And that's the worry; we have to figure out what it is we're really telling them.
Well, but that's not a particularly new problem, right? Technology by its very nature magnifies what humans are already able to do. You can make the same argument [about] television. Yes, now a million people can see it, but what are you going to put on it? What are you going to say? You can say it about the A-bomb [too]. This is a really old problem, and what we generally do, of course, is imperfectly plow our way through. So are you arguing that AI is somehow a different thing than television or the A-bomb?
No, I'm saying that it's the same thing, and everybody should stop pretending that it's fundamentally different. I had this really interesting conversation with Charles Duhigg, who's the author of a book called The Power of Habit, when we were at the Aspen Ideas Festival recently. Charles and I [had] a conversation about my book; it's actually online. But I had been to his talk about The Power of Habit, and he was talking about nail biting. There have been a lot of neurological studies on nail biting and what drives that habit. The point is, it's some stimulus that activates the habit, and people tend to bite their nails when they're tense, because the pain of biting your nails actually drives out the tension.
I was thinking about that, and I think all this fear of AI is nail biting, because there's this deeper tension at work in our society, where we know there's something wrong. We don't really know what it is, and so we're inventing monsters. In the same way, if you look at the history of Japanese horror movies, that was [their way of] dealing with the atom bomb. It was like, “Yes, there's a monster coming, and it's destroying Japanese cities,” you know, Godzilla or whatever. It's all summed up beautifully by this great line from the psychiatrist R.D. Laing, who was very popular back in the '70s. He had this wonderful line in one of his books: “a psychosis is not a disease, it's a cure.” It's somebody's attempt to adapt to something that they can't handle.
So your thesis is we're worried about rogue AI because we're tense about what, exactly?
Well I think we're tense about the state of technology in our lives. You know, there's something awry. We're not comfortable with where the world is going. I was applying it more specifically to the ways that the economy has become radically more unequal, and radically less fair, but that's not the only thing. We're worried about privacy. We're worried about distraction. We're worried about the pace. We're worried about being manipulated and all these fears that we don't quite have a handle on. Imagining the rogue AI is the monster-movie version of all these fears of current technology.
Well, maybe that is the case, but I've actually thought that the concern stemmed more from... It used to be we had technology like a carburetor, and if somebody handed you a carburetor and you fiddled with it, poked at it, and did all this stuff with it, you could more or less figure out what it does. Then you have a whole new kind of technology like a smartphone, and you take the back off of it, and you look at it, and you can't for the life of you figure out how it does what it does. Then you get this newer technology, this artificial intelligence, and you just hear snippets of things, where people will say it made a suggestion, and maybe we can't even figure out why it made that suggestion. That feels like a whole different kind of thing.
I don’t know. Again, I feel like that's somewhat overstated. It’s true, but it's true in a way that many other things are true. There's probably a huge number of things that we have made very, very productive use of that we didn't fully understand. Take drugs: what's the actual pathway by which some drug performs its magic? They're still studying some of these things. They don't really understand it, they just see, “Hey, we did this thing, and it worked.” People were flying before we fully understood the physics of aerodynamics. If we'd fully understood it, we would have done it right from the get-go, right? You learn by doing, and I think we're just in the early stages of that learning by doing. Yes, when AIs come up with unexpected results, people study the hell out of them. If we think about what's happening as a result of the AlphaGo victories, people are understanding the game of Go in new ways, and when it made these unexpected moves that nobody understood how it got to, everybody went, “Wow!” They're understanding the game more deeply. So, again, I don't think it's profoundly different.
[Of] the two big arguments, that's one of them. Algorithms that produce results we can't interpret [are] a real issue; we have to be able to go back and understand what [will happen] when we put in different kinds of data. But the fact is, putting data through a process without fully understanding what goes on inside the black box is something we do in a lot of areas. We have to solve for that. The other fear, of course, is the rogue AI fear, which is the Nick Bostrom “superintelligence” notion: these things will bootstrap themselves, and once they get to a certain kind of activation energy, it's a runaway reaction. Again, I'm with Andrew Ng, who said worrying about that is like worrying about overpopulation on Mars... We're so far from it. You'll notice that the people who express that fear are often...
Coders, or rather, not the coders?
The people who express it are the scientists, the entrepreneurs: it's Bill Gates, it's Elon Musk, it's Stephen Hawking. It's not the people who are deep into AI research; they’re like, “No, we're nowhere near that.”
Well let's talk about that. That's really the interesting question to me, which is why we're not. So you made a really interesting comment at the very beginning of all of that, you said that we're nowhere near being able to flip a switch where the computer takes in the experience of the world. My first question is can a computer actually be sentient? Can it experience the world? Or were you using that metaphorically?
Well I mean there is a really interesting set of questions around whether intelligence needs to be embodied. This has been a subject in science fiction, as well, for decades. Does it need to be embodied? Does it need to have the risk of death? There are certain kinds of things that seem to spark evolutionary leaps. Do I believe that we will never get there? No. I think about the complexity of the human brain, and I say, “Okay, assuming that we got to a machine with equivalent complexity, do I believe it could be inhabited by a spirit that for all intents and purposes was the equivalent of a human?” Absolutely. Do I believe that that thing could evolve faster than humans? Absolutely. But we're just a long way from that, you know. And we don't actually have a theory of how we might get there.
It's worth talking with Gary Marcus on this. He's a cognitive scientist, but also in AI; he just sold his company to Uber. He's like, “Look, we have the illusion that computers can read, but they can't read. You can't give them a body of text and have them understand it and draw conclusions from it. They can't map it back to reality.” So they can do certain things way better than humans can, but we've basically created the illusion that they can do these other things, and they really can't. The thing that's going to be really interesting is how we pair humans and machines, because machines can clearly do some things way better than we can. That's why I think the future is in augmentation of humans, not in machines replacing us.
Do you mean actually augmenting our physical forms, or do you just mean working together?
I think we will eventually get to actually augmenting ourselves. I remember many years ago thinking, “Wouldn't it be awesome to be like a passenger pigeon, or any of these birds that seem to have this magical ability to know where they are at any time?” They can do these 11,000-mile migrations and end up exactly where they wanted to be, across open ocean. How magical is that? And you go, “Oh wait, we can do that now. It's not in our brains; it's in our pockets.” Will that eventually be in our brains? I think it's quite conceivable. There are big breakthroughs happening in brain-machine interfaces. Do I believe that we'll have new senses? Absolutely. Do I know what the interface will be, how much of it will be brain-muscle interfaces, how much will be optical? I have no idea. There are going to be all kinds of interesting breakthroughs there.
The other thing that's super interesting: I just heard a talk by George Church, the famous geneticist, and he was making the case that everybody who's talking about this race with the machine assumes that biology is standing still. He's like, “No, actually, at this point biology is on a faster Moore's-law curve than machines.” We're at the point where we're starting to have amazing [breakthroughs]. I met with somebody else, the guy who basically funded the technology that became Illumina, the gene-sequencing company. He [said], “We're on a path where we'll be able to synthesize a complete human genome for $10,000.” There's some stuff coming out of biotech that is going to be utterly mind-blowing.
Well, let me ask one more question along the same lines, because you mentioned the complexity of the human brain, appealing to that as the reason we don't understand human intelligence better. But of course, the situation is far grimmer than that. Take the nematode worm: its brain has all of 302 neurons, right? And they're the most successful creatures on the planet; ten percent of all animals are nematode worms. But people have spent 20 years in the OpenWorm project trying to model those 302 neurons and make a digital nematode worm, and even after 20 years, there are people in the project who say it may just not be possible. So what do you think's going on? Do you think that perhaps neurons themselves are as complex as supercomputers, and that this really isn't ever going to yield fruit? Certainly, if you can't do the nematode worm after 20 years, what does that tell you about the nature of animal intelligence?
Well, I guess what I would say on that is that the amount of time we have not been able to do something is no indication that we will never be able to do it. I mean, we split the atom, which once seemed unthinkable.
No, but it's obvious that it's harder than we thought it would be. At first glance, you would think, “Oh, 302 neurons, surely we can figure out how those come together to make a worm.”
Yeah, I understand, but again, I think there are step changes in science, with new kinds of technology that let us see more deeply and do things more deeply. Yes, we thought we were there with the complete [human] genome, and suddenly we realized, “Oh no, it's actually gene expression,” and there’s all this much more complicated stuff, and it's going to keep getting harder and more complicated. But we’re also bringing more power to bear on the problem. I think there are step-function changes in our ability to do things and to see things, and I think we can certainly get to a step-function change here. Now, again, you could make the same case about AI: we might one day do that, so, therefore, let's be scared of it now. We do need to be prepared to regulate dangerous technologies. The thing that I worry about is that, when you do that, you have to be worried about the right things, and it's very rare that we have enough foresight to worry about the right things.
Your whole thing with WTF is, we're telling the computers to do the wrong thing. Isn't that the same?
No, because what I'm really saying is, we have to actually think about what it is we want. To me, anticipating and countering the worst fear is not a very productive way to go about life. What you really want to do is to say, “What do we aspire to? Who do we want to be?” So I'm really interested in framing the questions around life, liberty, and the pursuit of happiness for everyone. How does technology help us do that? How does it make a better world, not just for a few, but for everyone? And if that's our goal, how do we think differently about technology? If we think our goal is, “well, let's disrupt,” [then] our goal is for our company to make a boatload of money for ourselves and our investors, and we don't really give a shit about what else happens. That's not where you get the right governance structures. To me, the role of forethought is not to anticipate and correct all the possible wrongs. It's to create boundary conditions and incentives that encourage the right things, and to create feedback loops where you identify things that are going wrong and correct for them. This is obviously true in something like computer security as well: the thought that “we'll build a big wall and then nobody will be able to get through” has clearly failed as a strategy. It's: how do you become more adaptive? What makes systems adaptive? How are we going to live with the changes that we create as a society through technology, and what does that look like? And how are we going to make sure that the things we do don't increase the potential for violence, for inequality? Again, what are our values? Because ultimately, the technology that we build will be shaped by our values, and those are the kinds of choices that I'm concerned about.
But our values really are the net expression of all the individual choices everybody makes, right?
I know, but what we don't understand is the extent to which those choices are framed for us. Again, this to me is one of the lessons that I take from computer platforms. If you think about the internet, and why it became such a roaring success, it was that it had what I call an architecture of participation. It was designed with very simple protocols, with very little central control, and with the equivalent of the golden rule: the Robustness Principle, as Jon Postel expressed it in RFC 761, was “Be rigorous in what you send out, be liberal in what you accept from others.” It's like the frigging golden rule right out of the Bible, and that's kind of what made TCP/IP work. It's like, okay, we're going to ask everybody to send out packets in a very rigorous format, but if you get a bad one, don't go all crazy. Same thing with the web: if you look at the original hypertext design ideas that came out of Xanadu and Ted Nelson, they were all, “This thing has to be rigorously, tightly controlled, and everybody has to do this,” and the web was like, “You just link to somebody else, and if they go away, you just throw up an error message.” They figured out some good heuristics that turned out to be very productive of randomness and innovation within a set of constrained agreements, if you like.
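Postel's principle translates directly into code: emit data in one strict, canonical form, but tolerate reasonable variation when reading, and don't “go all crazy” over a malformed segment. A minimal sketch, using a “name=value” record format invented purely for illustration:

```python
# A minimal illustration of the Robustness Principle: be strict in what
# you send, liberal in what you accept. The record format is invented.

def send(fields: dict) -> str:
    """Strict writer: one canonical form (sorted, lowercase keys)."""
    return ";".join(f"{key.lower()}={value}" for key, value in sorted(fields.items()))

def receive(record: str) -> dict:
    """Liberal reader: tolerate stray whitespace, odd casing, and empty or
    malformed segments, rather than rejecting the whole record."""
    fields = {}
    for part in record.split(";"):
        if "=" not in part:
            continue  # skip junk instead of "going all crazy"
        key, _, value = part.partition("=")
        fields[key.strip().lower()] = value.strip()
    return fields

print(send({"Host": "example.com", "Path": "/"}))         # host=example.com;path=/
print(receive("  HOST = example.com ;; path=/ ; junk "))  # {'host': 'example.com', 'path': '/'}
```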
I think there's something really interesting about communication systems that depend on agreed protocols, and that was what was interesting about Unix and Linux to me, as I originally came across them. They were communications-oriented operating systems, where it was like, “Okay, programs are going to agree to put out data in this format, and to accept it in the same format.” So anybody could write a program that could be the other end of a Unix pipe. TCP/IP [is the] same thing: you could build anything, you just don't build it into the core, you build it out at the endpoints. And so we have to figure out what those principles are for the systems we're building today. How do we figure out the core architectural principles that will allow for continued innovation? That will allow for real competition? And there are a lot of lessons in life.
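That Unix agreement, line-oriented text on standard input and standard output, is what lets any program sit at either end of a pipe. A minimal filter sketch (the filename and the example pipeline are hypothetical):

```python
#!/usr/bin/env python3
# upper.py: a minimal Unix-style filter. Because it reads lines on stdin
# and writes lines on stdout, it composes with any other filter, e.g.:
#   cat access.log | python3 upper.py | sort | uniq -c
import sys

for line in sys.stdin:
    sys.stdout.write(line.upper())
```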
If you build systems where it's really easy for one party to consume all the resources, they'll eventually make everybody else go extinct, and then they'll go extinct themselves. Right? So when I think about how this applies to policy, I say: okay, great, platforms start to trade against their participants. Platforms are a new part of the economy. There's a lot of interesting economic work on this—recent papers from Autor, [and] some other ones out of the OECD on inter-sector productivity gains—showing that superstar firms capture much bigger returns, and that that's a driver of inequality. So you go, “These guys are going to consume their ecosystem.” The new antitrust is not “how do we make sure that these guys are competing with each other”; it's “how do we make sure that if you're a platform, you're not competing with your ecosystem.” This is the Google/Yelp problem. Google used to point to Yelp, and then it was like, “Well, we think we could do a better job by delivering these services ourselves.” That's actually a dangerous path, because effectively, once Google consumes the entire ecosystem, the whole thing goes belly up.
So, switching gears just slightly here: that's the vision you have for how we ought to collectively proceed into this world—that we have a discussion about what our values are, and from that discussion we get best practices that allow us to shape the world in a more deliberate way than what's going on now. I get all that, so that's the idealistic half of it. What do you think's actually going to happen? The real fear people have about AI, of course, is that everybody reads a headline that says, “I don't care what you do for a living, computers are going to be able to do it.” Fear is peddled to people in a way that makes them afraid for their livelihood, their ability to provide and take care of themselves, and so forth, and there are like three completely different narratives that come up around that.
I think that the ultimate constraint on that is not whether or not computers can do everything. It's that if computers can do everything, and we don't find some other way to pay people, [then] there's nobody to buy what the computers produce, because an economy is an ecosystem. And this is what's driving [things]. It's really the reason why we're at the beginning of a big economic counter-swing. It's mostly been focused not on tech but on globalization, but the same thing applies to tech. At some point, you go, “Well, who are the customers?” We kicked the can down the road by saying, “Well, the customers can keep borrowing money,” and so we did it with debt. Then it was, “Well, the customer can have a two-income household.” There were all these things that have been kicking the can down the road over the last 40 years, and we're running out of runway.
It's like we're going down a path where all these people are very unhappy, because this wonderful economy that's made a lot of people really, really rich has made a lot of other people much poorer, and those people are basically getting their revenge. Andy McAfee said to me over breakfast one time, “The people will rise up before the machines do,” and that's what's happening today. The ultimate constraint is the fact that the people who are being put out of work will eventually say, “Screw you.” They're going to take over the government, and they're going to basically start penalizing [the winners], often in a very bad way. So the real risk to me, far greater than the risk of AI taking over, is the risk of populist autocrats who talk the language of “I'm going to look after you and your interests”—which is exactly what Trump is doing, what Maduro is doing—but in practice are looters themselves. That's a pathway that a lot of countries have gone down, and they go down that way until there's a real revolution. I think there's far greater risk of war and revolution than there is of rogue AI taking over.
So you said earlier, and I just want to be clear about this: you believe we're already experiencing technological unemployment? That technology has made people unemployable?
No, not exactly, what I've said is, well first of all—
Let me ask just a straightforward question. Take rank-and-file people in the United States, since you used 40 years ago as an example: are they better off today than they were 40 years ago?
No, what I'm saying is that a lot of them are not, but I'm not saying that that's purely due to technology. It's due to a lot of choices that we've made. In a lot of ways, I think of financial markets as the first rogue AI. We've basically built this giant economic machine and turned it loose, with our blessing, to basically rape the rest of society. It was doing it before, but it's just gotten a lot better at it, partly through technology.
You know, the original definition of technological [unemployment] came from Keynes, and it was the inability of society to adapt to technology quickly enough. In other words, it's a temporary phase. People go, “Oh, technological unemployment, that means everybody gets put out of work.” I've seen this before in my career, with open-source software. Jim Allchin [said that] open source is an intellectual property destroyer. All that people could see was that open-source software was going to destroy the big proprietary software companies and make this super-lucrative industry not lucrative. They didn't understand that it would make possible Google, and Amazon, and the like, and that there would be a future of companies that were just as lucrative, or more lucrative, who leveraged it and found new levers of competitive advantage.
So there were a lot of people who faced technological unemployment in one industry, and we're seeing this today: big enterprise software companies being turfed out by cloud companies; that's technological unemployment, and those people have to adapt. When I look at something like people being put out of work in factories because the factories are more efficient, what I say is, “Okay, what did we miss?” It's not that we put the people out of work; it's that we didn't create new work to make up for the work that was now more efficient. That's where I've gone in my book: I really came to see the problem as the changing values around what businesses are supposed to do. Look around. There's a lot that needs to be done, and if we were doing the things that needed to be done, we'd be putting everybody to work. So the question isn't, “Does technology put people out of work?” It's, “What is causing us to use technology to put people out of work, rather than putting them to work solving new kinds of problems?”
But that all feels just like semantics, I mean what happens of course...
No it isn't at all, literally...
Because what happens is that, as an industry, you can either do it top-down, and say, “You are now going to be a baker—no, we don't need bakers now—”
No, I think that's totally false...
I mean, we have full employment. Unemployment in this country has been 5-10% for the past 200 years, with the exception of the Depression, and that's against a backdrop of a rising standard of living. And that's year after year after year, and somehow—
Actually, the standard of living is not rising anymore for a large number of people. Here's the thing that I would say about this, though. What you have to understand is that the amount of money being invested in the real economy has been going down. A huge amount of the money that “is being made today” is being made through the financial equivalent of fake news. Eighty-five percent of all “investment” is [an] investment in driving stock price, and then profiting from the stock price. Only 15% of corporate profits get reinvested in actually growing businesses, in hiring people. When American Airlines recently tried to pay their people more, it was seen as robbing the shareholders, right? There's an ideology at work that has basically gutted the economy. It's not actually that people are being put out of work by technology; it's that we basically said no. There was a line my mom had once about Bill Gates, before he was a humanitarian, when he was the rapacious capitalist. She said, “My, oh my, he sounds like someone who'd come over to your house for dinner, and then say, ‘Hmm, I think I'm going to have all the mashed potatoes.’” And we have a set of people—Martin Shkreli was a great example recently—who came to the economic dinner party and said, “I think I'm going to have all the mashed potatoes.” That has nothing to do with technology.
The point is that we have crumbling infrastructure; we have people who are being paid less, whose wages have not gone up with productivity; we have companies constantly trying to cut down the amount that they spend on labor and using technology to do that. We've created these incentives within the system. We’ve told CEOs, “Your pay should be tied to stock price.” That was a new innovation, and it was a bad one. We give preferential tax treatment to money made through capital appreciation, and we actually charge more for money that's made through labor. There are all kinds of crazy things that we do, where our economy is organized around the needs of the financial industry and the owners of capital. I really think we'll look back on this period the way we now laugh at the “divine right of kings”: we're living in the age of the “divine right of capital.”
Realistically, we should be applying the technology we have to solve the world's problems. We have massive problems heading our way, and I think climate change is going to be the thing that gets us out of this malaise, in the same way that World War II got us out of the Great Depression, because we're going to go, “Oh shit,” and we're suddenly going to gear up and start working to solve real problems, rather than just having a bunch of fat and happy capitalists taking as much out of the system as they possibly can. Either that, or we're going to end up with some revolutions, and again, I don't think we have to go that way.
I think the two things we need to do are these. One, we need companies to realize that it's in their self-interest to put this technology to work making things better. I see companies like Amazon and Tesla working on “How do we actually do more with this technology?” They're using capital markets correctly, which is to fund impossible futures. How do we start understanding that these companies need to take their workers into account, not just their customers? So [that’s] some fresh business thinking. Two, we need policy innovations, because we have to understand that more of the money has to flow to people, not just to the owners of capital; it doesn't work for our society as a whole if you have a small number of very wealthy people and a large number of people who are hopeless. Europe has historically done a better job of that, Germany in particular. People go, “Well, it's capitalism versus communism,” and no, it’s one version of capitalism, and our version of capitalism right now is not a very good one. Look back on the prosperous era after World War II: full employment was the goal. It wasn't share price appreciation.
So what do you actually think is going to happen? Or are you saying it's indeterminate at this point? If you look 5 or 10 years out, paint me a picture of how you think this is all going to unfold, because you've got a lot of apocalyptic stuff mixed in: climate change, crumbling infrastructure, a hollowed-out economy. Then you have a lot of hopeful-sounding stuff, and I'm going back and forth trying to draw a bead on how all of that nets out.
Yeah, I guess that's the subtitle of my book: it's up to us. We have choices to make, and again, I'm very hopeful, because I look at all the entrepreneurs who are trying to invent better futures, the people who are sitting there wrestling with hard problems. There's a poem I like to cite by [Rainer Maria] Rilke, where he talks about Jacob wrestling with the angel: he didn't think he'd beat the angel, but you become much stronger by the fight. I look at entrepreneurs [like] Jeff Huber at Grail. His wife died of cancer, and he said, “I'm going to make a blood test for early detection of cancer,” and he's raised $100 million for this crazy-ass goal of using technology to tackle a problem that's really, really hard. That's the possibility of the future. Zipline is another one I cite in my [book]. [They use] on-demand drones to deliver blood and medicine in a country with no infrastructure.
It's like, “Wow, we have all this amazing new technology; we can solve a problem in a country where a leading cause of mortality is maternal blood loss, hemorrhage after birth.” And you go, “Wow, we can solve that problem.” All around the world, there are problems to be solved. I rant at the people who say, “Well, we want to build the new high-tech super city.” I go, “Guess what, there are 20 million people, refugees out of Syria, and they need a new city more than some high-tech bros need a new city for them to enjoy.” If you go solve that problem, you'll actually build the city of the future. And so, my role in this industry has always been encouraging people to work on things that are hard, that make the world a better place, and to use the latest technology to do that, and that's a lot of what I'm trying to say here. It's like, “Hey, we have a lot of things to worry about, but we have enormous new powers. Let's put them to work, in the right way, tackling the hard problems.”
Alright, that is a wonderful place to leave it. I want to thank you so much for a fascinating and wide-ranging hour, and I hope sometime we can have you back on the show. Go ahead and, one more time, tell everybody the name of the book.
It's WTF: What's the Future and Why It's Up to Us, by Tim O'Reilly, and it's on Amazon right now.
Alright, thanks a bunch Tim.
Alright, great, bye bye.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.