Episode 75: A Conversation with Kevin Kelly

In this episode, Byron and Kevin discuss the brain, the mind, what it takes to make AI, and Kevin's thoughts on its inevitability.


Guest

Kevin Kelly has written books such as 'New Rules for the New Economy', 'What Technology Wants', and 'The Inevitable'. Kevin also co-founded Wired magazine, a print and online magazine covering technology and culture.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today I am so excited we have as our guest, Kevin Kelly. You know, when I was writing the biography for Kevin, I didn't even know where to start or where to end. He's perhaps best known for starting Wired magazine a quarter of a century ago, but that is just one of many, many things on an amazing career [path]. He has written a number of books: New Rules for the New Economy, What Technology Wants, and most recently, The Inevitable, where he talks about the immediate future. I'm super excited to have him on the show. Welcome, Kevin.

Kevin Kelly: It's a real delight to be here, thanks for inviting me.

So what is inevitable?

There's a hard version and a soft version, and I kind of adhere to the soft version. The hard version is a kind of totally deterministic world in which, if we rewound the tape of life, it all unfolds exactly as it has, and we still have Facebook and Twitter, and we have the same president and so forth. The soft version is to say that there are biases in the world, in biology as well as its extension into technology, and that these biases tend to shape some of the large forms that we see in the world, while still leaving the particulars, the specifics, the species completely, inherently unpredictable and stochastic and random.

So that would say things like: on any planet that has water and life, you'll find fish; or, if you rewound the tape of life, you'd probably get flying animals again and again, but a specific bird, a robin, is not inevitable. And the same thing with technology. Any planet that discovers electricity and wires will have telephones. So telephones are inevitable, but the iPhone is not. And the internet's inevitable, but Google's not. AI's inevitable, but the particular variety or character, the specific species of AI, is not. That's what I mean by inevitable—that there are these biases, built in by the very nature of chemistry and physics, that will bend things in certain directions.

And what are some examples of those that you discuss in your book?

So, technology is basically an extension of the same forces that drive life; technology is a kind of accelerated evolution. So if you ask the question about what the larger forces in evolution are: we have a movement towards complexity; we have a movement towards diversity; we have a movement towards specialization; we have a movement towards mutualism. Those are also happening in technology, which means that, all things being equal, technology will tend to become more and more complex.

The idea that there's any kind of simplification going on in technology is completely erroneous; there isn't. It's not that the iPhone is any simpler. There's a simple interface. It's like an egg: it's a very simple interface, but inside it's very complex. The inside of an iPhone continues to get more and more complicated, so there is a drive such that, all things being equal, technology will be more complex next year, and it will also become more and more specialized.

So, in the history of photography, there was at first one kind of camera. Then there was a special kind of camera for high speed; maybe there was another kind of camera that could go underwater; maybe there was a kind that could do infrared; and then eventually we would make a high-speed, underwater, infrared camera. So, all these things become more and more specialized, and that's also going to be true of AI: we will have more and more specialized varieties of AI.

So let's talk a little bit about [AI]. Normally the question I launch this with—and I heard your discourse on it—is: What is intelligence? And in what sense is AI artificial?

Yes. So the big hairy challenge with that question is that we humans, collectively as a species at this point in time, have no idea what intelligence really is. We think we know it when we see it, but we don't really, and as we try to make artificial, synthetic versions of it, we are, again and again, coming up against the realization that we don't really know how it works and what it is. The best guess right now is that there are many different subtypes of cognition that interact with each other, are codependent on each other, and form the total output of our minds, and of course other animal minds. So I think the best way to think of this is that we have a 'zoo' of different types of cognition, different ways of solving things, of learning, of being smart, and that collection varies a little bit from person to person and a lot between different animals in the natural world...

That collection is still being mapped, and we know that there's something like symbolic reasoning. We know that there's a kind of deductive logic, that there's something about spatial navigation as a kind of intelligence. We know that there's mathematical-type thinking; we know that there's emotional intelligence; we know that there's perception; and so far, all the AI that we have been ‘wowed’ by in the last 5 years is really a synthesis of only one of those types of cognition, which is perception.

So all the deep learning neural net stuff that we're doing is really just varieties of perception—of perceiving patterns, whether they're audio patterns or image patterns—and that's really as far as we've gotten. But there are all these other types, and in fact we don't even know what all the varieties of types [are]. We don't know how we think, and I think one of the consequences of trying to make AI is that AI is going to be the microscope we need to look into our minds and figure out how they work. So it's not just that we're creating artificial minds; it's that that creation—that process—is the scope we're going to use to discover what our minds are made of.

You know, I get a fair number of AI folks on here, and when I ask them how much they're inspired by biology when they're thinking about AI, the answer is actually not all that much. So do you really think that as we make better tools, we're somehow going to get profound insights into how we do what we do?

Absolutely, because neural nets are inspired by the neuron. They're not in any way like neurons, but they're certainly inspired by the idea that you have this network of neurons that are triggering each other and have different weights, etc. So it is inspired, but there's a limit to that, and the analogy I would use is that we were inspired by nature to try and make a flying machine, to make humans fly. All the initial attempts used the same single mode that the four different types of flying that we have in insects, in birds and bats and dinosaurs [have], which was flapping wings, and that didn't work.

There was failure after failure when we tried to fly with flapping wings, and it wasn't until we synthesized an entirely new type of flying—one that was not really present on this planet until that time, which was fixed-wing flight—that it worked. You strap a barn door to a propeller and it flies, so we needed a wholly new type of flying in order to make artificial flying. I think we'll do the same thing. There's no doubt that airplanes were inspired by biology, and I think we will be inspired by these things, and we will be able to go back once we understand it and understand ourselves better, for sure. We haven't gotten there, but I think that inspiration doesn't mean that we're going to imitate biology.
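To make the neuron-inspired, weighted-network idea Kevin mentions a little more concrete, here is a minimal sketch of a single artificial 'neuron' and a tiny two-layer network. This is a generic illustration in Python, not any particular framework's API; the inputs and weights are arbitrary made-up numbers.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs squashed by a sigmoid.
    Inspired by, but not at all like, a biological neuron."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# A tiny two-layer "network": three inputs feed two hidden neurons,
# whose outputs feed one output neuron. All numbers here are arbitrary.
inputs = [0.5, -1.2, 3.0]
hidden = [
    neuron(inputs, weights=[0.4, 0.3, -0.2], bias=0.1),
    neuron(inputs, weights=[-0.6, 0.9, 0.5], bias=-0.3),
]
output = neuron(hidden, weights=[1.2, -0.7], bias=0.0)
print(output)  # a single number between 0 and 1
```

Real deep-learning systems stack millions of such units and learn the weights from data, which is the 'perception' variety of cognition discussed above.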

So humans have these abilities that we just don't know how to instantiate in a machine. It's like you can train up a human with a data set of one: show them one of whatever mythical creature, and all of a sudden a human can spot it upside down, backwards, underwater, covered with peanut butter, all the rest. We do that really well, and we have all of these cognitive abilities that we're not anywhere near with machines yet. Do you think progress is incremental, or is it that the one trick we've learned, around perception, is "Great, good job with that, now go back and start over, and try to figure out how emotional intelligence works"—that it has nothing to do with perception?

Yeah, I think it is incremental in this sense. I wouldn't be surprised to find out, in the long train of the history of science, that our advances, our discoveries in perception and neural nets and stuff, lead to the next step. In fact we may even, in some ways, use the ability to do perception to help us create the next step, just as once we invented the saw, the saw helped us make hammers, and saws and hammers helped us make drill presses. So there is a sense in which this is a network that moves forward, and each step enables and makes the tools to do the next step.

I think, yes, it will be incremental, and that we are on a progressive movement, and that even knowing what not to do, what doesn't work from the previous discovery, can help us go forward. But I would disagree with the common idea that we're on a progression that is exponential. This is the riff of the singularitans, that AI is somehow following an exponential curve right now, and there's absolutely no evidence whatsoever for that that I've been able to find. It's absorbing input at an exponential rate in terms of the number of processors or storage dedicated to it, but there's no sense in which the output, the smartness, is increasing exponentially. Going back to the problem that we don't have any definitions or metrics for this, we have some things, like error rates decreasing to human levels, but again, that's not really exponential over the long term.

So, I think there is progress, and as we develop and come to know and understand different types of cognition, we will make progress. But of course, this is one of those instances where we're climbing a mountain, and we think we know how high the mountain is, but each time we go a step higher up the mountain, we realize that the mountain is higher than we thought. And so there is this sense in which the summit is receding from us, because the higher up the mountain we go, the higher we realize the mountain is. So even though we are climbing up, and maybe even accelerating our climb, the goal and the vista and the territory are becoming bigger. We realize that it's bigger than we ever believed, and so there's this sense in which we're almost kind of going backwards.

You know we have these brains which presumably are the source of our cognition, and then those brains have a certain bundle of characteristics, capabilities that don't seem to be something an organ can do, like a sense of humor (a liver doesn't have a sense of humor). And we call that the mind, and I notice you always use the word 'mind,' and not 'brain.'  What distinction, if any, are you drawing there, and how do you think of the human mind as distinct from the brain?

Yeah, so I think there's the brain/mind paradox. I think all minds need some kind of brain—they need a substrate. This idea that computation, whatever it is, is disembodied is really kind of wrong, and that's one of the reasons why I disagree with the singularitan view that you can port minds—the same mind in different substrates—that it doesn't matter where you do your computation. In fact it does matter.

The Turing hypothesis is that computation is universal if you have infinite tape and time. But real-world computations have finite resources, and once you have finite resources, the matrix, the substrate in which you do computation—the kind of brain that you have—will determine the kind of mind that you can have. So there is a unity, a dependency, between those two, and I use 'mind' also because I want to emphasize the fact that this is not a binary thing.

There's a tendency in talking about AI to think there's AI or there's not; it's there or it's not; it's human intelligence or it's not. None of this stuff is at all binary. There [are] continuums, gradations, and there are multiple dimensions to it. There are many different types of mind, many different types of consciousness; there are many different levels of consciousness; there are many different varieties of mindfulness. I think that a bacterium has a ‘mind’ to some extent, or an awareness to some extent. So do grasshoppers, and we have to incorporate that continuum in our discussions. If we do, I think we'll be a little bit more ready, more prepared, to discover the things that we need to about the levels and complexities of the AI minds that we're interested in.

And so you brought up consciousness, which is the fact that we have an experience of the world, as opposed to just measuring it. Right now all a computer can do is measure temperature, but we can feel warmth. Is that something in your view that comes from the mind? And is that something that we'll be able to build [in] a machine? Will we be able to build a conscious machine?

So, again, I would say that machines already have some level of consciousness, even if that does not make them conscious machines. Just like I believe there is consciousness in animals to various degrees and levels; just like I think there's intelligence in animals to various degrees and levels. For some reason when people talk, they think that there's only one... that either you're conscious or you're not, or you're intelligent or you're not. No, no, no, there are many levels, and I think the things that we have made already have some of those aspects, and as we continue to make [them] more complicated, we will make different varieties and different grades of that in the machines.

The best way, I think, of thinking about AIs, plural, is to think of them as alien beings, alien things, alien animals, whatever it is—as if we had detected or met an alien being on another planet. They could be conscious, they could have a high level of consciousness, a high level of complex intelligence, but be completely unlike us. And we might have difficulty determining whether they were even conscious like us or not.

The answer is that it's a different kind of consciousness. They're going to have a different sense of humor. We can certainly program humor and creativity into the machines, but the thing about them is they're not going to be funny like us. They're going to be alien funny; they're just going to have a different sense of humor; they're going to have a different sense of creativity. That is a feature, not a bug, because as we work with these machines that are creative, that are funny, they're going to help us to think differently and do things differently, and that is the engine of creation and wealth: doing things differently.

When we are connected to all the humans 24 hours a day, all 7-8 billion of us, all the time, it's going to be really hard to have a different idea or do something differently. So having these alien beings that are funny in a different way, and creative in a different way, and that think in a different way—working with us—will help us to be different.

So one more question on consciousness... you say bacteria are [conscious] to some degree, and grasshoppers. What about inanimate objects, what about plants, what about Gaia? Are there things that you think have some amount of self-awareness [and] experience the world, other than animals?           

Well, there's definitely a type of intelligence in plants. In fact I think there's a new documentary or series that explores that… it was a controversial thing to say, it was crazy talk even 30 years ago. But there's increasingly a decent body of academic and scholarly scientific evidence about the variety of smartness, we'll call it, that is in plants—that there is a type of intelligence there, and again, it's on the spectrum. All systems exhibit various levels of this. Any sufficiently complex system, like Gaia, like the earth, tends to want to create these stable feedback loops and strange attractors, and if it has any kind of learning feedback loops, it can exhibit certain types of smartness.

So yes, and what I think we're in the middle of, or what we're at the beginning of doing, is starting to map what the varieties of learning are, what the varieties of smartness are, what the varieties of intelligence are. We'll have a better vocabulary to talk about this than we do right now, where we are kind of bound by the fact that we're very parochial in our talk about intelligence, because we think there's only one type, which is a human kind of intelligence. We tend to think of our own intelligence as the general purpose [kind].

This is the ‘holy grail’ of the AI community: to make an artificial ‘general’ intelligence. As if there is a general-purpose intelligence—and I think that's a very misguided idea. It's like saying, "I want to make a general purpose organism." Any organism, by definition, has to be specific. You can't optimize everything in anything real; that's the engineering maxim.

You cannot optimize everything. You always have to have tradeoffs. That's the first law of engineering, and so everything, including us and our minds—we have a very, very specialized kind of intelligence. Once we start to map all of these kinds of intelligences that we make, the zoo of intelligences that we're going to create, and if we have the fortune to interact with other intelligences around the galaxy, we will understand that human intelligence is way off in the corner. It's a very peculiar, weird, singular, specific kind of collection of intelligences. It's not at all ‘general.’

You know, another word like intelligence that there's no agreed-upon definition for is ‘life,’ or ‘death’ for that matter. Do you believe that these intelligent systems, like computers today that have some amount of consciousness—do you believe that they're alive?

Yeah, I think ‘life’ is another one of those continuums. I think that there's a certain kind of life that viruses have, and that bacteria have a more complex version of life, and that as you become more complicated and sophisticated, and more mutualistic, etc., you have increasingly different varieties of life, and you can say that there's more life in a primate than there is in a grasshopper.

What you're really saying is that there's more complexity in that life—that life is more complex, more complicated, that there are more levels of it, more variety, more sub-varieties. So yes, absolutely, there are aspects of life that are already in machines. The internet as a whole exhibits a high number of life-like attributes that are also found in natural systems like the immune system. And so it's not an exaggeration at all to say that the internet has some life in it. The question of whether it's alive or not—that is an arbitrary, and I think misleading, threshold that makes it binary, and it's not binary.

So help me understand something here. If human intelligence isn't general, it's parochial; if human life isn't anything particularly distinct, life is just on a continuum; and if consciousness isn't unique to humans, it exists on a continuum—what, if anything, is special or unique about humans?

Well what's unique about us is the fact that the only way you'll ever have anything like us is to replicate our bodies and our tissues and everything else exactly like it. So there's not going to be anything like us in the universe. That makes us unique.

Well you throw a deck of cards up in the air and it comes down on the floor in some arrangement that's literally unique, but there's nothing special about it.

What do you mean by ‘special’?

Well, I guess I would say it this way: We've had this long climb up from savagery to civilization, and along the way we invented something called human rights. The thesis is that there are things you cannot do to people, no matter what, because "blank." If all of a sudden there is no blank, then it feels like this sort of expansive view of "we're all conscious, and we're not particularly intelligent, and we're all alive and it's all good" undermines the very basis for why we have human rights to begin with.

Well no, what I think we're going to do is we're going to continue to expand human rights into machines. We're going to understand that rights are not just about humans. If we met alien beings, would we assume that they don't have rights, just because...? And so what we're doing, what we're understanding, is that ‘rights’ are not just about humans; rights are universal. There are always corresponding rights and responsibilities. First, you can't talk about rights without responsibility; those are always linked. So we have responsibilities about the things that we make, just as they have rights and they have responsibilities. There's nothing that would change religion, or the ideas of spirituality, or other things on this planet more than to have contact with ET, to have other beings, aliens, coming in contact—that would change everything. We would say, "Well, do you believe in God? Where do your rights come from? How do our rights and your rights… where are we with this?"

And then if there were multiple civilizations you contacted, it would completely radicalize our conception of our place in the universe, because we'd realize, well, there are always other ones, there are billions of other species. Who knows when that will happen, but we can say for certain that we are going to create artificial aliens on this planet, and they will do the same thing to us that having a visitor from another planet would do, in terms of forcing us to understand that the dignity that we have is not because we're the only ones.

In fact, it comes from the fact that we are one of many different varieties in the galaxy that are possible. I think this is a dethronement; this is part of a continuing series of dethronements. There was the dethronement of Copernicus realizing that we're not the center of the solar system, that we're actually on the edge. There was the dethronement that came from Darwin, [when] we realized that we weren't at the top of the heap, we were just part of a radial expansion outwards—that every single species on the planet today is as evolved as humans. We've all undergone 4 billion years of evolution, so it's not like we're more highly evolved [than other species]. We're just as evolved as the slime molds, and slime molds are just as evolved as us, and so that's a dethronement. AI is actually going to bring about another dethronement, where we realize that our minds and consciousness are actually part of a larger possibility space, and that the idea we have of rights and dignity is something that's not unique to us, but is actually embedded into the universe, and applies to all beings.

So I'm still having a little trouble following that. If bacteria are as evolved as us, then why isn't using antibiotics a form of genocide, why isn’t that morally reprehensible?

Say that again?

Well if bacteria are as evolved as we are, and they have some amount of consciousness, some amount of life, why isn't taking an antibiotic and wiping them out wholesale, a kind of genocide?

Yes, so it is a kind of genocide, but it doesn't have the same meaning or consequence as wiping out something that has more complexity, that has more interdependencies, that has more consequences. The other thing about those simpler kinds of life is that there's less variation between them, so they work more as populations, and of course, except for smallpox, we haven't eliminated them at all; we've just reduced their populations.

When we talk about those kinds of populations, the differences between individuals are not very important, but when you have higher animals, where a single individual of that species can have more consequences, then they have more standing as individuals, so we're not just concerned about populations there. The individual actually matters, and that's when we come to primates and animals that are more complicated beings: they're equally evolved, but they're not equally complex. Some things are more complicated and have larger pools of interdependencies, and they make more of a difference in the world, and can make more of a difference, and therefore we treat them differently. I'm not saying that we treat everything the same, that all things are equal. Saying that everything is equally evolved doesn't mean that they're equal; it's just that they're equally evolved.

Right, but again when you're trying to say, "okay, am I going to save the person, or save this colony of bacteria inside of them?"

You have to ask yourself, what are the consequences, what difference does it make?

That's not a moral question, that's simply a question of effects. But it sounded like you were saying that because we're more complicated, complicated things have more—I'm going to use the phrase—‘moral worth’ than less complicated things.

Yeah, I would say that's so. We care about, say, monkeys and apes more than we care about a tiny fish, and why is that?

Well that I think is the question.

Yeah, and the reason is that they have a larger agency in the world. Those individuals have a much larger agency. The gorilla has much more agency in the world than that little fish does, and therefore it has rights and responsibilities, it has consequences, and we are paying attention to that. We are treating it with a degree of concern concurrent with its agency: we're recognizing its agency and its ability to influence and change things, and we're honoring that, more than the kind of influence, or degree of mattering, that the little fish has.

But still, that seems to imply that an iPhone has more moral worth than a hammer.

I think that's true.

Hmm, and so at some point, when our machines get more complicated, the old trolley problem of: ‘Do I save the driver or save the passenger going off a cliff?’… it all changes. It becomes, ‘Of course we're not going to damage a bumper on this incredibly complicated car, just to save a person.’

No, we will have to work into those equations—which, by the way, are paradoxes [and] don't really have a good answer—the fact that some of these things we made have some degree of consciousness, some variety of consciousness, some variety of intelligence, some variety of creativity. And it's like, well, can we just turn them off? I think we're going to have this question.

In some cases we'll say, no, we can't. Or there will be other priorities, and so we'll ask: if one of these has to be turned off, because they both can't be used and there's limited electricity, whatever it is, which one will we favor? Well, we'll stick with the one that has more agency, that has more dependencies, that is in some ways more likely to create more options in the world. We do that all the time, and we'll continue to do that, but now we will have additional things to calculate into those equations, because these things—we are designing them to make decisions, to have influence, and that's where we're going.

Already these types of AI systems are making decisions for us. We're often not aware of it, but they're making decisions about who gets mortgages, or how long [someone’s] probation is, and that will continue. As they have more and more agency, they will have corresponding standing, and so we have to incorporate that into our decisions, how we decide things ourselves.

Last question along this vein: I hear what you're saying, but it doesn't feel like we would ever apply this kind of view that you're proposing within a single species. You wouldn't say "this human has inherently more worth because they have more standing, or they're more complex," or whatever. As you said a minute ago, there's an enormous amount of variety between any two humans, but we sort of believe they all have equal moral worth. Is that just an outdated notion, or what?

I think it's a very, very profound and fair question. I think that this is a belief that we have, and I think you're right. I don't see us changing that for a very, very long time. The problem is that making those calculations is just impossible to do in any sort of unbiased way. So we've agreed not to do calculations about ourselves, and I don't see any movement away from that. But I also don't see that necessarily preventing us from doing it to everybody else.

And so you liken AI to electricity, and you say, "just like back in the day, when you got electricity you could take some tool you had and plug it into the wall, and all of a sudden, this electricity came through and then animated it." You've likened AI to that. I assume electricity and the industrial revolution are good things. Do you think, when you consider the future and this kind of AI-enlivened world, is it going to be as positive as the industrial revolution… it empowers people and raises standards of living and it benefits everyone and all of that? Or is this in its effect going to be different?

I think it's going to be even more beneficial than electricity has been. The first thing you have to acknowledge is the reality of progress: that if you look at this in any scientific way, at the evidence, progress has been real, as it was with electricity and many other things. Basically, certainly since the advent of science in the last couple hundred years, everything that we care about in human life has gotten better, on the global average, by a few percent.

There have undoubtedly been huge numbers of new problems unleashed by the new technologies of the industrial revolution, and all the problems we have today are techno-genic—they've all been caused by the technologies of the past—and that would seem kind of a sad state. The thing is, the world is better, but not by very much; it's just a few percent better.

When you look around the world, you see 49% crap and harm and terrible stuff, and it looks [like] that's a lot, but we only have to be 1% better, and that 2% delta compounded over the years, that's civilization. Civilization is built on a very, very narrow delta, but that's all we need. So yes, I think 49% of AI is going to be terrible, it's going to be destructive, it's going to be toxic, but 51% will be great and good and will unleash...
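As a back-of-the-envelope illustration of the compounding Kevin describes, even a 1-2% net annual improvement adds up to enormous gains over historical time scales. The time spans below are arbitrary choices for illustration, not figures from the conversation.

```python
# Illustrative arithmetic: a small net-positive delta, compounded yearly.
for annual_delta in (0.01, 0.02):
    for years in (50, 100, 250):
        growth = (1 + annual_delta) ** years
        print(f"{annual_delta:.0%} better per year for {years} years -> {growth:.1f}x")
```

A 1% annual delta sustained for 250 years multiplies the starting point roughly twelvefold, which is the sense in which civilization can be built on a very narrow margin.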

But that all sounds like an article of faith. There's no scientific reason it has to be 49/51 in favor, it could be 51/49 the other way, so, why do you say that so confidently?          

Because of history, because of the past; that's what it's been. It's been that ratio for hundreds and hundreds of years. Now, that could change tomorrow, but the probability, the physical probability, is that it's not going to change, so my optimism comes from history.

So, do you think that the worries some people have—that AI is going to eat all the jobs, or destroy privacy, or enable low-cost warfare, or cheapen human life—do you think those for the most part are real, just counterbalanced by other things, or would you take issue with some of them?

I think that the fears are real, but I think they're misplaced. I don't think it's happening. In fact, all of the evidence so far of what happens when you bring automation and AI into workplaces is that it doesn't breed unemployment; in fact, it changes the nature of work for most people. Obviously there are certain occupations—we don't have buggy-whip makers anymore, we don't have telephone operators plugging in connections—so there are certainly going to be some types of occupations that go away. But of course, and this is [what] the argument in the past has been, we invented so many other new ones that we have more jobs now than ever before.

I think the same thing will happen, even more so, with AI. The thing that you have to remember, that I want to keep emphasizing, is that AIs don't think like humans. We value other humans so much: we like the sense of humor and the jokes told by other humans; we like to be around them. We like to experience other humans, and that's good news for us, because what robots and AIs are very good at are things where efficiency and productivity matter, and that happens to be an area that humans don't really like to have in their jobs. They don't really like to be doing things over and over again.

As we learn how something is done, we will give it to the robots, and that liberates us to play around and invent new things that we want, to have human experiences and to hang out with our friends, and maybe then that becomes a narrative [of something] we will be willing to pay [for].

The one area in our economy where there are rising prices, in a sea of everything else becoming a commodity, is experiences. Having a meal cooked by a 5-star chef and meeting the chef—that's what we're going to pay for, and that's all very, very human. So I think there'll be lots of tasks: as we invent new things that we want, in the beginning we don't know what it is, and it's discovery, it's highly inefficient, it's art, whatever it is. Then as we do it enough times, we understand how it works, and then we give it to the machines, liberating us to do new things.

Basically, the way I say it, the main job of humans is going to be to invent jobs for the robots. I think we will work with them that way, and we will work with robots that are creative, but only because they think differently than us. We will still prefer to hang out after work with other humans. I think there's a misplaced fear [in] society that we're making artificial people that are smarter than humans, which is a meaningless thing. It doesn't have any meaning, what ‘smarter than human’ is, and they're all going to be different from humans. So I think the total effect of this is going to be, not without problems—there'll be plenty of problems—but the net gain will be progress.

Well, that's a great note to leave it on. My closing question is a short one. What are you working on right now? What can we look forward to reading or experiencing of yours in the next year or two?

Yeah, I'm working on looking at what comes after the web, and I think I see what it is. There's a bunch of people working on it. It has some clunky names right now; the VCs are calling it the ‘AR cloud.’ David Gelernter wrote a book about it called Mirror Worlds. Facebook wants to call it The Atlas. It's a 3-D spatial representation of ‘the real world’ that is viewable by many different devices, including the magic glasses, where you walk down the street and you can see this invisible re-created world in 3-D. It's a 3-D volumetric spatial representation of the actual world—the digital twin of it—and it's going to be the place where all the information about these places and things is organized. It makes the rest of the world machine-readable.

The web made information machine-readable—you can apply algorithms and search and stuff—and that was the huge first platform, the web, and everything changed. The second platform was this hyperlinking of all human behavior, and we have the social graph, which is making human behavior machine-readable, for better or worse. Now we're going to make the rest of the world machine-readable—all the rest of the places and objects put into this world that's machine-readable—and it's like a virtual map that's the size of the world. It's a 1-to-1 map, and this is the same world that [self-]driving cars and robots will see. When they look at the world, this is the world that they will see, and we'll be able to see into it as well, and it comes into our living rooms. It's the entire world, digitized and spatially represented, and that's the organizing principle, because we're made—we've evolved—to be spatial navigators. That is the next big platform after the web and mobile phones.
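To make the idea of a machine-readable, 1-to-1 map a bit more concrete, here is a toy sketch of the kind of data structure such a mirror world could rest on: digital annotations anchored to real-world coordinates that any device (glasses, a car, a robot) could query by location. Every name, field, and coordinate here is hypothetical, invented for illustration; it is not a description of any actual AR-cloud system.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Anchor:
    """A piece of digital information pinned to a real-world location."""
    name: str
    lat: float      # latitude in degrees
    lon: float      # longitude in degrees
    payload: str    # whatever a device should show or act on at this spot

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

# A tiny "mirror world" of two made-up anchors.
world = [
    Anchor("cafe-menu", 37.7749, -122.4194, "Today's specials..."),
    Anchor("transit-stop", 37.7755, -122.4180, "Next bus in 4 minutes"),
]

def nearby(lat, lon, radius_m=150):
    """What a pair of glasses standing at (lat, lon) would 'see' around it."""
    return [a for a in world if distance_m(lat, lon, a.lat, a.lon) <= radius_m]

for anchor in nearby(37.7750, -122.4190):
    print(anchor.name, "->", anchor.payload)
```

A real system would of course use proper spatial indexing, 3-D geometry, and shared coordinate frames rather than a flat list, but the organizing principle—information keyed to place—is the same one described above.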

Well, thank you so much for a wonderful show. People can follow you and keep up with you at kk.org, and your latest book is The Inevitable. It was great, great fun having you on the show. Thank you.

I really appreciate the invitation, thanks for the great questions.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.