Episode 112 – A Conversation with David Weinberger

Byron speaks with fellow author and technologist David Weinberger about the nature of intelligence, artificial and otherwise.


Guest

David Weinberger, Ph.D. explores the effect of technology on ideas in his books, classes, and talks. He is a senior researcher at Harvard's Berkman Klein Center for Internet & Society and was co-director of the Harvard Library Innovation Lab and a Journalism Fellow at Harvard's Shorenstein Center. Dr. Weinberger has been a marketing VP and adviser to high-tech companies, an adviser to presidential campaigns, and a Franklin Fellow at the U.S. State Department.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today my guest is David Weinberger. He is the guy that likes to explore the effects of technology on ideas. He's a senior researcher at Harvard's Berkman Klein Center for Internet and Society, and was co-director of the Harvard Library Innovation Lab and a Journalism Fellow at Harvard's Shorenstein Center. Dr. Weinberger has been a marketing VP and adviser to high tech companies, an adviser to presidential campaigns and a Franklin Fellow at the US State Department. Welcome to the show, Dr. Weinberger.

David Weinberger: Hi Byron.

So, when did you first hear about AI?

Well about AI...

Well was it 1956?

That's what I'm thinking.

But you were only six then. So I'm guessing it wasn't then.

Well as soon as the first science fiction robot movies came out, that's probably when I heard about it. Robby the Robot, I think.

There you go. And so I don't know if we called it that colloquially then, but in any case, how do you define it? In fact, let me narrow that question a bit. How do you define intelligence?

Oh jeez, I seriously try not to define it.

But don't you think that's interesting that there's no consensus definition for it? Could you argue therefore that it doesn't really exist? Like if nobody can even agree on what it is, how can we say it's something that's a useful concept at all?

Well, I don't want to measure whether things exist by whether our concepts for them are clear, since most of our concepts, when you look at them long enough, ultimately aren't clear. Words have uses. We seem to have a use for the word ‘intelligence,' as in intelligent as opposed to something else, and it's usually really useful to think about the context in which you use that word rather than another one. Take ‘life,' right? It's a pretty useful term, especially when you're talking about whether something is alive or dead, but you don't have to be able to define life precisely for the term to be useful. Same thing with intelligence, and I think it can often be a mistake to try to define things too precisely.

Well let me ask a slightly different question then. Do you think artificial intelligence is artificial because we made it? Or is it artificial because it's not really intelligence, like artificial turf? It's not really grass. It's just something that can mimic intelligence, or is there a difference?

So it's a good question. I would say I think it's artificial in both ways and there is a difference.

Well tell me what the difference is. How is it only mimicking intelligence, but it isn't actually intelligent itself?

Well, you're gonna be really angry at me: it depends how you define intelligence. To me that's not the burning question at this point, and I'm not sure if or when it ever would be. Generally we ask whether machines are intelligent in everyday conversation, and insofar as we do, it's because we're concerned about whether we need to treat these things the way we treat other human beings, that is, as creatures where we ultimately care about what happens to them. Or we want to know whether they're doing stuff that we do cognitively that's sufficiently advanced that we're curious about whether a machine is doing it. We don't call an abacus intelligent even though we use it for counting. We're a little more tempted to worry about whether machines are intelligent when we can't see how they work.

And I think you hit the nail on the head with your comment about how, in the end, we want to know whether we have to treat them as if they're sentient in the true sense of the word: able to sense things, able to feel things, able to feel pain. How do you think we would know that, if they had a ‘self' that could experience the world, as it were?

So we may get very confused about it. I'm already pretty confused about it. I'll tell you why my suspicion is that they cannot be intelligent in the sense of having an inner life, or of caring about what happens to them, even though I find that sort of question philosophically objectionable, even though I just brought it up. This is not my argument; I don't remember whose it is, though.

Is this the Chinese Room?

I like the Chinese Room. It is not.

Okay. The problem of Mary?

I don't know that one.

Keep going. Which one are you going to use? Let's talk about one of these.

Okay. And so the way that you set this up is you imagine sort of a two-step thing. You imagine that your computer is a series of on/off switches, and we know that it doesn't matter what you make the computer out of. We make them out of silicon generally, for practical reasons, but you could make a series of on/off switches out of anything.

So imagine we have a computer that we've set up so that it absolutely perfectly mimics the states of Ray Kurzweil's brain neurons during the five minutes when he met his future wife and fell in love, absolutely perfectly mimics that, and seems to be able to respond to questions and the like. Now take that out of the ordinary computer. We know we can make a computer out of anything we want, not just silicon. So we're gonna make one out of beer cans: if a beer can is right-side up, that's an on, and if it's upside down, that's an off. And you're gonna hire a billion grad students and give them a series of timed instructions about whether they should turn their beer can right-side up or upside down. What they don't know is that they are absolutely perfectly mimicking what's happening in the computer that we suspect might be Ray Kurzweil's consciousness. So they do that.

And so the first part of this attempts to disprove the possibility that this sort of artificial intelligence has actually achieved consciousness: first of all, it's absurd that a series of beer cans being turned right-side up and upside down could be conscious. The second part says: if we do want to insist that all those beer cans, even though they're distributed around the globe, must be Ray Kurzweil's brain actually experiencing this, then let's just change the convention. Everything is exactly the same, except that instead of saying an upright beer can is an on, we say it's an off. It no longer mimics those states, which means the same phenomenon, the same thing, these beer cans being flipped up and down, both is Ray Kurzweil falling in love and simultaneously, in exactly the same way, is not Ray Kurzweil falling in love. And I'm going to go to Aristotle and say one of the most basic laws is that a thing cannot be and not be at the same time.
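The inversion step can be sketched in a few lines of code. The list-of-cans encoding here is entirely hypothetical, just one way of making the argument concrete:

```python
# The same physical configuration of beer cans, read under two conventions.
# The encoding is made up; the point is that the cans' "meaning" lives in
# the convention an interpreter chooses, not in the cans themselves.
cans = ["up", "down", "down", "up", "up"]  # one physical state of five cans

# Convention A: an upright can counts as "on" (1).
bits_a = [1 if c == "up" else 0 for c in cans]

# Convention B: the identical cans, but upright now counts as "off" (0).
bits_b = [0 if c == "up" else 1 for c in cans]

print(bits_a)  # [1, 0, 0, 1, 1]
print(bits_b)  # [0, 1, 1, 0, 0]
# One and the same physical event yields two contradictory "computations,"
# which is the contradiction the argument pins on Aristotle's law.
```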

If that's the case, how is it that people... Couldn't you have a similarly reductionist view of what goes on in our brain?

You could.

And yet we are conscious.

So we could have that reductionist view. But I think we're only tempted to it because we view the brain as an information processor in the first place. It's not; it's an organ. You can simulate in a computer exactly what's happening with digestion, but there's no sense in which that simulation is digestion, and you can't go back to it and ask "well, but then how does digestion work?" because it's not digestion. The point is that computers are symbol processors, and for a symbol to be a symbol, somebody has to be intending it to be a symbol. Human brains are not symbol processors.

Right. So let me ask a different question. Is the human brain a machine?

Define machine. I think essentially, no.

So I'm going to say: something that's exclusively governed by the laws of physics.

Sure, that means everything is a machine.

Well, right. So I mean if it's governed by the laws of physics, if it's just atoms [that] obey these laws, then why can't you build a machine that duplicates its function?

You could, but that's if you built it out of the same materials as a human body. It's not just brains. Brains are part of the body.

Why would it need to be made out of the same materials?

Because otherwise you are saying that what makes something what it is is simply its form, not its material.

Well, you know this argument though. It says: look, a neuron does something and it is not individually conscious, probably not. And we could model that neuron, we could build a little... I mean, you know the argument. You could make a mechanical neuron, and maybe it's the size of your fist, who knows, but it does exactly what a neuron in the brain does. And if you bolted enough of those together, it would be a brain. Or why wouldn't it be?

Because it's not doing exactly what a neuron is doing. It's a symbolic representation of a neuron. We only say it's doing the same thing because we have intended it to be taken that way. A neuron is organic. It does not come down to being on and off. Being on and off is not the same thing as...

Well, no, I'm not saying you build it out of computers. I'm saying you get one of those little perfume atomizers and you can squirt it with chemicals, and you can put a wire into it and give it an electrical charge, and you can put little glial cells around it that are the size of your thumb. I mean, we built it, and it doesn't have a computerized part in it. It's just a hunk of plastic and neoprene and Legos and all that.

But then it's not the same thing. You're deciding that two things are the same if they have the same function, but for reasons I can't fully justify, I believe that organic material does organic things. Representations of organic material may perform many of the same functions, for example they may transmit electrical and electrochemical signals the way that neurons do, but for it to be a brain, it needs to be a brain.

I don't separate... this idea that things are what they are independent of their material is in some ways a new idea, encouraged by, among other things, the information revolution, where everything became information. It's also a really, really old idea: it's Plato. And I tend towards Aristotle, who thought that the two things, form and matter, are never really separated. So I'm a crude organicist, I guess, when it comes to...

So you can imagine a day we 3D print biological brains that are conscious and smart?

I guess. You know, I'm not sure how practical that is, but my position does not commit me to saying that the only things that can think have to be born of woman and go through the biological process. If we can create it out of the same materials in the same form without that process, then yeah, I guess that may be an open question for me.

Fair enough, so let's talk about emergence, because a lot of people bring up the mind, and I'll just loosely define the mind as everything the brain does that seems to be something more than an organ should be able to do. You know, your heart doesn't feel emotion, probably, even though poetically we say it does, and your liver doesn't have a sense of humor, and all the rest. So the brain does these things, the mind does these things, that appear to be emergent properties. Is that how you think of the mind?

Yes.

And do you believe in strong emergence? Do you believe that if we studied hydrogen and oxygen long enough, we could eventually say, "Oh, I see how you put those two things together and they become water. I get it. I can see it all now. It's simple, it's straightforward." Or is there this notion of strong emergence, that you actually cannot see a connection between the underlying components and what they're able to do, that you can never decipher it no matter how smart you are, that there's somehow a break of some sort between them?

Yeah, I'm not sure. It certainly seems to be the case. We know that we can stimulate the brain physically and cause emergent properties like feelings and memories, so I think we'd have to believe that there is some observable and knowable connection between the two. I don't know. Byron, why do you think I know these things? I don't know this.

Well because you are a senior researcher at Harvard's Berkman Klein Center for Internet and Society, you were co-director of the... and on and on.

You know it doesn't mean that my opinions about the philosophy of artificial intelligence have any weight. Don't get me wrong, I find it fascinating. I wanted to have room to say...

Well, here's what's interesting to me, because where I was going to go is this: we are also conscious; we experience the universe. I don't believe we have a good scientific understanding of how matter can have a first-person experience. They call it the last great scientific question, the one we neither know how to pose nor know what the answer would look like.

And it does seem that I can fall in love, but a rock can't, and a rock is made from the same atoms I'm made out of. That seems very interesting to me. So I was going to ask you, "What is the method by which we can experience the world?" So I will ask you that.

I'm not sure why the answer isn't: we are embodied creatures living in a world with others. Why isn't that an answer to that?

Well, at some point there was only hydrogen and helium, and at some point that matter perceived itself and named itself, the brain named itself, and it began to experience other things. How that could happen seems marvelous and wonderful to me. You sound like you think it's all blasé, like, eh, we're just people, that's what we do.

It's not blasé. I just don't pretend to have an answer to it.

Right, but it doesn't sound like you're wowed by it either.

Well, I'm sorry if I give that impression. I am totally awed by it, in fact. I maybe should say a couple of things. One is that my background is in philosophy from a long, long time ago, but my field of interest, the reason why I care about philosophy, is that I was primarily, along with the history of philosophy, a phenomenologist. The essence of phenomenology for me, Heidegger's version of it, is that you start by looking at human experience and try to understand what it is without initially imposing theories and hypotheses on it; you can do that later. So I don't know how we got here.

I am an atheist. Actually I'm an agnostic, because I don't know; how would I know? How would I possibly know if there is a God or not? How would I know what life was like before there was life, or when we were semi-conscious, or whatever? All that I know is that we are here, that we are aware of a world, that we are very much our bodies, that we know with our hands as well as with our brains; that we are creatures that can only experience the world as creatures within a particular culture and language and history and family and religion and personal background. All of that stuff makes us who we are. That's not an accident, and it doesn't get in the way of our experiencing or knowing the world; it's the condition for knowing the world. How we got here may well be beyond our capacity. It doesn't mean I'm not awed by it. It just means that there is a limit to what at least I personally...

How can a drop of water comprehend the ocean? I'm with you. Tell me about this book, Everyday Chaos. Why did you write it and what is it about?

I can actually tie it back to this discussion, I think, because in one sense it's about changing how we think the future happens. And I think we have been sort of silently led to this by our experience on the Internet, which is basically an experience of chaos, an experience in which things are uncontrolled. I mean, of course everything is somewhat controlled, but in terms of our experience of it, it's unpredictable, it's uncontrolled. Little things can become gigantic things without anybody knowing how; we don't really know how anything on it works... Even if you look at your browsing history for the day, at least for me, I can't remember how I got everywhere I went.

So it's just this chaotic, always-changing environment, which we are succeeding in. We like it, generally. I know there are terrible things about the Internet, but generally we like being on it, we want to be on it. We enjoy this chaos. The second part of what the book is about is machine learning. For me the really fascinating thing about machine learning is that it creates models for itself: it takes in the data, connects up little points, and comes up with a vastly complex model of some domain. We then use that model to run data through and make predictions or whatever, and it works, which is amazing; it actually works.

But in many cases at least, we cannot figure out how it comes to the conclusions it does, and those conclusions are more accurate than the ones we can reach, which is why we often use machine learning. So here we've invented a technology that is too complex for us to understand. And for me, the really amazing thing about this, besides all the surprising and amazing things it's able to do, is the conclusion that at least I come to, and I think many of us are coming to: it turns out our way of thinking about the universe may not be the best way.

You know, we in the West anyway have a long history of believing pretty firmly that it just happens to be the case, or that God willed it to be the case, that humans are the creatures able to understand at least part of the universe; we can understand part of God's creation. For the ancient Greeks, we are the rational animals, and there's no point in being a rational animal if the universe isn't rational. So we've had this really special place, and now we've invented a technology that understands the world fundamentally differently than we do, that we cannot, in some cases anyway, understand how it works, and yet it does work. And that tells us, at least implicitly, that maybe we've been wrong about what we assumed our position in the world to be, as the knowers of the world. Maybe we don't know the world; maybe evolution evolved a brain that was optimized not for truth but for survival. That's a pretty big break in our tradition. So in some sense that's what I mean.

I'll really briefly tell you another way of thinking about what the book's about. We historically have a tendency to understand ourselves, to interpret ourselves and our world, through the latest big technology. That happened with information and computers starting in the 1950s, when we began to actually feel ourselves processing information and to feel information overload, or with the steam engine before that, when we would feel ourselves under pressure. So if machine learning and AI are the next big technology of the age, the Age of AI, and I'm pretty sure that's going to be the case, then can we start to think about what it will be like? Can we see signs already of what it's like to understand ourselves in terms of machine learning? That's what the book is about.

So you wrote a book about this, so help me understand that a little better. We say we understand the world. We understand how to tell a cat from a dog. And then we write a computer program that studies lots of cats and lots of dogs, and we tell it if it's right or wrong, and then it can tell the difference between a cat and a dog. We feed it data and tell it, "yes, you're accurate/no, you're not accurate."

So how is that not still us at the top of the food chain, having just built something? Just like we build a pulley system that can lift more because it's got a bunch of pulleys, we put a lot of pieces together and it can identify cats and dogs faster than we can now?

That's exactly the right question, so thanks. As you well know, traditionally if you wanted to make a program that distinguishes cats from dogs, you would not just feed it data; you would tell it to look for pointy ears and a tail that looks like this, and that dogs tend to have skinnier legs, or whatever it is. You would define the properties of cats and dogs; you would give the computer a model of what a cat and a dog are. Or an easier example: if you were trying to train the computer to recognize airplanes overhead, you would tell it, "this type of airplane has a longer wing and this one has a turned-up whatever." You would describe the model as you understand it. If you wanted to diagnose diseases, you would tell it what we know about diseases and symptoms and medicines and body parts. Or for business, you're basically given a spreadsheet that says here are the things that affect the business, sales, employees, etc., and here are the relationships between them. We've been building this type of conceptual model forever; you program the computer with it and then it can work. With machine learning you don't give it a model; you give it the data and the labels. You're absolutely right: for machine learning you would label the data and say, "Here are 10,000 photos of dogs and 10,000 photos of cats. Go ahead and figure out what you think the model is." And what it comes back with is a model that consists of relationships among lots and lots and lots of pieces of data. Those relationships may connect one point to many other points, and they have different weights.

And so you can get these gigantic, amazingly complex models that a human wouldn't be able to work through, and the model will tell you the likelihood that the next image you feed in is a cat or a dog. And if you did your work, it will very likely get it right. But when you ask it, "Okay, what about this image, which is a series of pixels, tells you that it's a cat or a dog?" it may not be able to tell you. It may not have derived the sorts of generalizations that we use, like cats have pointy ears, or dogs have floppy ears and cats don't. It may not have any of those generalizations. It may just be this gigantic collection of weighted points. That is not how we think about the world. Yet it seems to be more accurate.
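A minimal sketch of that contrast, assuming scikit-learn's LogisticRegression and entirely made-up toy features; a real image classifier learns weights over raw pixels at vastly larger scale:

```python
# Rule-based vs. learned classification, on made-up "cat vs. dog" features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# The traditional approach: we hand the computer the model ourselves.
def classify_by_rules(ear_pointiness: float, leg_skinniness: float) -> str:
    # Hypothetical human-authored rule: pointy ears suggest a cat.
    return "cat" if ear_pointiness > 0.5 else "dog"

# The machine learning approach: we hand over only labeled examples.
rng = np.random.default_rng(0)
X = rng.random((200, 2))           # 200 animals, 2 toy features each
y = (X[:, 0] > 0.5).astype(int)    # synthetic labels: 1 = cat, 0 = dog

model = LogisticRegression().fit(X, y)    # the "model" is learned weights
print(model.coef_, model.intercept_)      # weights, not human-readable rules
print(model.predict_proba([[0.9, 0.2]]))  # probabilities for [dog, cat]
```

The rule function states a generalization a human can read; the fitted model answers only with weights and a probability, which is the asymmetry the conversation is pointing at.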

You had me up until that last sentence. Because let's pretend you're a terrible mechanic, and people bring their cars in, and one particular kind of car makes a strange noise. You're not a very good mechanic, you don't know anything about cars, so you take the first car that comes in, you pop the hood, you take a hammer, and you whack it in some random spot. Nothing happens, and you're like, "oh, I don't know what's wrong with it." The next day another car comes in, you whack it, and nothing happens. Then the third day you whack one in a different place and it fixes it. And then another car comes in and you whack it in the same place and it fixes it, again and again and again. Well, you now know how to fix that car. You don't have any understanding of why it fixes it, or anything like that.

And that, to me, is a lot of how people navigate the world intelligently. So how is what the computer's doing any different from that, as it were? It just tries a bunch of random stuff, and lo and behold, it finds one strange configuration that happens to predict cats and dogs. And again, it can't tell me how, but it's not doing anything particularly mystical or mysterious. It just happened to find a random weighting of pixels and colors and all of these things. I could look at it and say, "oh wow, that's so inscrutable, it must be so intelligent." Or I could say, "No, it just figured out the right place under the hood to whack, and now it can work."

Very fair point. I think it's doing both. In one sense I completely agree with you: much of what we do in life, we do without having an explanation, and if you look hard enough at anything, generally you can't find an explanation. You look in enough detail, you get down to the quantum mechanics of it or whatever, and at some point you say, "Okay, that's it. We don't know." Nevertheless, I think there still is an important difference. When you're whacking the car, you're applying a single cause, and maybe you don't know what it is, you know, bang, that's it, and you fix it and you don't know why it fixed it.

But we generally have the confidence that we can more or less ask questions of our world, especially about stuff that we've built, and expect to get an answer. Sometimes the machines are so complex. Take the Large Hadron Collider that discovered the Higgs boson, which is a huge marvel of engineering and physics. We're pretty confident that the Higgs boson was discovered, but there's no one person who knows all of the Large Hadron Collider and can explain everything about it, the electrical systems and the magnets...

Nevertheless, we have confidence that if we want to know why something went wrong, or how some subsystem works, there are people we can ask. We can interrogate our equipment and get answers. Likewise with the tap of the hammer: you don't know why it worked, but somewhere there is an engineer who will figure it out. So in that case, you're applying a single cause, a little tap, and you don't know why it works, but other people do.

In the case of machine learning, there may not be a single cause. It may not be a single pixel that determines the outcome; it may be some constellation of pixels in weighted relationships. That's all statistical, nothing mystical about it. It's all statistics that gives rise to the probability that, yes, it's a cat, and the machine is right or wrong. Nothing mystical about it, but the constellation of conditions that lead the machine to identify something correctly, probabilistically, may be vast, and not along dimensions that humans use to think about things.

So what is in the end...

I think the important thing for me is that we generally do these things by trying to find some general principles, a general model that applies to lots of particulars. And machine learning does not feel a need to generate those sorts of generalizations, which is a really different way of thinking about the world. I'm sorry I cut you off...

No, no, no... I think of the famous essay, I, Pencil. I don't remember who wrote it, but it's kind of the quintessential paper about the division of labor, and it says nobody knows how to make a pencil. Nobody in the world knows how to make a pencil: nobody knows how to mine the clay in West Virginia and make the yellow paint and the metal thingy and the rubber eraser and put it all together. And yet pencils somehow get made, and they're two cents apiece. So we kind of already function with this kind of collective emergent intelligence as a society, again, in a way that none of us really understands how it all works.

Well, I'm sorry, but I think there's a difference. The pencil is a great example; it's a way simpler example than the Large Hadron Collider, but it's exactly the same point, and it's a really nice example. The difference, however, is that we have full confidence... There is no one person who knows how a pencil is made, but we have full confidence that there are people who know how to make each of the elements of it. We can find the painter and ask, "how do you make the paint?" etc...

With machine learning, we lack that confidence, at least in some instances. For some machine learning we know exactly how it's working, to one degree or another, but for some, as of this point, we don't, and there's nobody to ask. The reason we don't know is that it depends upon such a large collection of small points, connected in neural networks with different weightings, that it doesn't resolve to the sorts of general principles and ideas we use in order to say we understand something. Normally we see the principle that explains why this or that happens, why the paint dries the way that it does. We may not have that in some machine learning models.

So where do you go with all of this? Fundamentally it sounds like you believe in cause and effect, but that it's inscrutable now. So how do you live your life if you can't have any connection between your actions and their results?

Well I do believe in cause and effect, so I do think that there is some connection.

But that it’s not knowable... That we can't really know it. That we're kind of set adrift and that we know that at some level it's all cause and effect, but we can't comprehend it anymore.

Yes. So I think a couple of things. One is that I think, and actually hope, maybe just because it suits my personality, that we are coming to recognize that we too easily assume life is orderly and non-chaotic. If machine learning starts to inform our self-understanding the way the information revolution has, and other revolutions back through history have, then we may be able to pay more attention, and give more value and credence, to the individual pieces we encounter in the world, the things and events we write off as merely accidental. Maybe we will see that they're also worth paying attention to, not merely writing off. I think this has very large implications, which we're already seeing very directly in how businesses think about planning, what we think about strategy, how we see progress, what constitutes progress. All of these things are affected by accepting the chaotic nature of the world in which we have always lived.

So what is the takeaway you hope your reader gets? Because it sounds like it isn't that we are set adrift in a world we cannot possibly understand anymore.

Yeah. Because we will continue to make our way through it, as we always have. I do think that there are some differences in how we will go about planning and trying to thrive.

You know, I'm working on a new book, and it's about waste. The more I study it, I think about World War II: they used to have metal drives because you need metal to make weapons; people would save their bacon grease because you can make explosives from it; they asked people to plant a ‘victory garden' to free up manpower on the farms, because you need people; and they asked you not to drive as much because the war effort needed rubber. So you could tie every one of your actions to, "okay, I see how if I do this or don't do this, it helps the war effort." What's happening as I work on this new book is I'm finding I can't... There are so many unintended consequences of things. Like, did you know that if you download a movie off the Internet, it releases 15 pounds of CO2?

I never heard that.

Yeah, that's something to think about. You think, ‘there's nothing in my experience that suggests streaming a movie is 15 pounds of CO2.' But it takes a measurable amount of electricity, and you can figure out how much that is; and you can figure out how much coal you have to burn to create that electricity, and how much CO2 that releases. So I find we live in a world where it turns out I can no longer trust my intuition about these sorts of things, and it does seem to be an ever more complicated and complex world. So what's your remedy for that, or do you have one?
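A back-of-the-envelope version of that chain: the per-movie energy use and grid intensity below are illustrative assumptions, chosen only so the arithmetic reproduces the 15-pound figure cited above; they are not measured values.

```python
# Back-of-the-envelope CO2 estimate for streaming one movie.
# Both constants are illustrative assumptions, not measurements.
KWH_PER_STREAMED_MOVIE = 7.5  # assumed: device + network + data center energy
LBS_CO2_PER_KWH = 2.0         # assumed: intensity of a coal-heavy grid

co2_lbs = KWH_PER_STREAMED_MOVIE * LBS_CO2_PER_KWH
print(f"~{co2_lbs:.0f} lbs of CO2 per movie")  # ~15 lbs under these assumptions
# The exact inputs matter less than the fact that neither one is
# available to intuition; each has to be looked up and multiplied out.
```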

Well, I think we're inventing new strategies that take account of the fact, and in fact find benefit in the fact, that life is unpredictable. And I think you see these all over the Internet already. I mean things like minimum viable products: instead of trying to anticipate everything your users will want, you launch with the smallest essential features you can, and then you watch what they do, what they complain about, what they're saying. You talk with them, you measure, and then you start adding features.

This is the exact opposite of how we designed products forever. We've always tried to anticipate what people are going to want to do. Well, in a world in which anticipation actually turns out to be in many ways a pretty weak way of proceeding, minimum viable products are a really good alternative, and there are a ton of other things.

I could list some of them quickly that also work because they purposefully refrain from trying to anticipate and predict: open platforms where people can build stuff with your stuff that you didn't anticipate, game mods where PC games allow users to create rules and maps and characters that the game company never thought of, open source, open access, ‘on demand’ everything, Agile development, un-conferences where the attendees make up the agenda when they arrive rather than having the organizers try to anticipate what they're gonna be interested in.

These are all examples in which we are thriving as businesses and as people because we are purposefully making the world more unpredictable, making more things possible. The Internet itself exists in order to make unanticipated things possible. This is a very big change in what has been literally tens of thousands of years in which our basic strategy was to try to anticipate the future and then to prepare for it.

Yeah, what's that quote: "enlightened trial and error outperforms the reasoning of a flawless intellect"?

That's a great quote. I don't know who said it.

I don't remember who did either. But you know, I think you're 100% on. Just in the interest of thinking it through, though, counterexamples abound. Let's start with the iPhone. Like 80% of what's in an iPhone would probably make a pretty terrible phone on its own. It worked because everything was perfect when it launched, like everything came together: it had the form factor, it had the shape, it had the beauty, it had the map and the finger thing and all of it. And it wasn't something that... was it a giraffe that's a horse designed by a committee, or something like that? The camel, that's the one. So when do the counterexamples dominate? When there is a thing that nobody in the world ever knew they wanted, and it appears, and all of a sudden everybody looks at it and says, "That's exactly what I wanted."

So I think the iPhone is actually a really great example for what I'm saying, because phones, mobile phones, existed before the iPhone, and they tried to put together the exact right set of features that users would want. Apple got a whole bunch of things right, but it actually did not try to put together all the features that users want. Instead it built an app store and let other people build things that Apple would never have thought of, and that, even if it had thought of them, it might not have had the resources to build.

So from my point of view, there are certainly counterexamples. I mean, all companies also engage in anticipating and preparing, and so do we all, because otherwise you get hit by the next bus that comes by because you didn't look both ways. So I certainly wouldn't say we're never going to anticipate the future again. But remember, I am a phenomenologist, and in this case what that means is you look at the examples on, let's say, the Internet that seem to us so remarkable, that are good examples of what you do on the Internet.

So you know, reading a magazine on the Internet is really not that much different from reading one on paper. But using open source software to build features, using some program's open API to extend it... that really starts to feel Internet-y. Or creative endeavors that start off small, and nobody knows where they're going to go, and they get built into something magnificent or something funny or something trivial. Those are the sorts of things we think about when we think about what's distinctive of the Internet. And most of those cases are cases where we gave up on anticipating, where the power and the beauty of the thing is that we did not insist on anticipating.

Well, wonderful. I'll close by asking you: when you net all of this out, and you do have to anticipate the future, would you say on balance that you're positive, you're an optimist? Because one thing that's going up, you know, is asymmetry, in a sense the power of a few people to do a great deal of harm. But at the same time, there's this countervailing force, that many more people want to create than destroy. How do you think all of that nets out into the future? What gives you optimism, or what fuels your pessimism, when you think about the world of tomorrow?

So I used to say that I was a depressed optimist, and these days I'm a depressed and frightened optimist, because your point about asymmetry is exactly right. It is horrifying, it's terrifying. The things that give me optimism are the ways in which we are taking advantage of technology as a form of liberation of what's best about humans: our connection with one another, our caring about one another, our creativity, our senses of humor, our ability to know with others, to learn with others.

From my point of view, the technology has to a large degree liberated what's best about us. That for me is a tremendous source of optimism, but it has also asymmetrically liberated what is worst about us, which is pretty scary.

So how can people keep up with you and follow what you do? Aside from buying your books, the most recent of which is Everyday Chaos, how can they follow you?

Well, I'm @DWeinberger on Twitter, and I used to do a lot of blogging and now do occasional blogging at Joho the Blog. And I continue to post at various sites and write for sites and magazines and the like.

Well, I want to thank you for a fascinating interview and I hope you'll come back and keep the conversation going.

I would love to. This is a very stimulating interview, Byron; it's great to talk with you. Thank you so much.