Episode 65: A Conversation with Luciano Floridi

In this episode, Byron and Luciano discuss ethics, information, AI, and government monitoring. They also dig into Luciano's new book, "The Fourth Revolution," and ponder how technology will disrupt the job market in the days to come.


Guest

Luciano Floridi holds multiple degrees, including a PhD in philosophy and logic from the University of Warwick. He is currently a professor of philosophy and ethics of information, as well as the director of the Digital Ethics Lab, at the University of Oxford. Along with his responsibilities as a professor, Luciano is also the chair of the Data Ethics Group at the Alan Turing Institute.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, I'm Byron Reese. Today our guest is Luciano Floridi. He is a professor of philosophy and ethics of information and the director of the Digital Ethics Lab at the University of Oxford. In addition to that, he is the chair of the Data Ethics Group at the Alan Turing Institute. Among his multiple degrees, he holds a PhD in philosophy and logic from the University of Warwick. Welcome to the show, Luciano.

Luciano Floridi: Thank you for having me over.

I'd like to start with a simple question, which is: what is intelligence, and, by extension, what is artificial intelligence?

Well, this is a great question, and I think one way of getting to a decent answer is to try to understand what the lack of intelligence is, so that you recognize intelligence by spotting when it isn't around.

So, imagine you are, say, nailing something on the wall and all of a sudden you hit your finger. Well, that was stupid, that was a lack of intelligence; it would have been intelligent not to do that. Or imagine that you get all the way to the supermarket and find you've forgotten your wallet, so you can't buy anything. Well, that was also stupid; it would have been intelligent to take your wallet. You can multiply that by, shall we say, a million cases, so there are a million cases in which you can be, or—just to be more personal—I can be stupid, and therefore a million ways I can be intelligent by doing the opposite.

So intelligence is, shall we say, sometimes a way of coping with the world that is effective, successful, but it can also be so many other things. It is intelligent not to talk to your friend about the wrong topic, because that's not the right day. It is intelligent to make sure that at the party you organize you don't invite Mary and Peter together, because they can't stand each other.

The truth is that we don't have a definition for intelligence or, vice versa, for the lack of it. But at this point I can sort of recycle an old line from one of the justices of the Supreme Court—I'm sure everyone listening to or reading this knows it very well—who, when asked for a definition of pornography, said, "I don't have one, but I recognize it when I see it." I think that sounds about right: we know when we're talking to someone intelligent on a particular topic, we know when we are doing something stupid in a particular circumstance, and I think that's the best that we can do.

Now, let me just add one last point, in case someone says, "Oh, well, isn't it funny that we don't have a definition for such a fundamental concept?" No, it isn't. In fact, most of the fundamental concepts that we use, or experiences we have, don't have a definition. Think about friendship, love, hate, politics, war, and so on. You start getting a sense of, okay, I know what we're talking about, but this is not like water being equal to H2O; it's not like a triangle being a plane figure with three sides and three angles, because we're not talking about simple objects that we can define in terms of necessary and sufficient conditions. We're talking about having criteria to identify what it looks like to be intelligent, what it means to behave intelligently. So, if I really have to go out of my way and provide a definition: intelligence as a noun is nothing; everything is about behaving intelligently. So, let's go for an adverb instead of a noun.

I'm fine with that. I completely agree that we have all these words—"life" doesn't have a consensus definition, and "death" doesn't have a consensus definition, and so forth—so I'm fine with leaving it in a gray area. That being said, I do think it's fair to ask how big a deal it is: is it a hard and difficult thing, where there's only a little bit of it, or is it everywhere? If your definition is about coping with the world, then plants are highly intelligent, right? They will grow towards light, they'll extend their roots towards water, they really cope with the world quite well. And if plants are intelligent, you're setting a really low bar—which is fine, but I just want to kind of think about it—and with that low bar, intelligence permeates everything around us.

That's true. I mean, you could even say, well, look at the way the river goes from this point to that point and reaches the sea through the shortest possible path—that looks intelligent. Remember that there was a past when we thought, precisely for this reason and many others, that plants were some kind of gods, and the river was a kind of god—that there was intelligent, purposeful, meaningful, goal-oriented activity there, and not simply a good adaptation, some mechanism of cause and effect. So what I want to detach here, so to speak, is our perception of what something looks like from what it actually is.

Suppose I go back home and I find that the dishes have been cleaned. Well, do I know whether the dishes have been cleaned by the dishwasher or by, say, my friend Mary? Looking at the dishes, I cannot. They're all clean, so the output looks pretty much the same, but of course the two processes have been very different. The one requires some intelligence on Mary's side—otherwise she would break things, waste soap, and so on. The other is, well, a simple dishwashing machine, so zero intelligence as far as I'm concerned—of the kind that we've been discussing, you know, which goes back to the gray area, the pornography example, and so on.

I think what we can do here is to say, look, we're really not quite sure what intelligence means. It has a thousand different meanings we can apply to this and that—if you really want to be inclusive, even a river's intelligence, why not? The truth is that when we talk about our intelligence, well, then we have some kind of meter, a criterion to measure by, and we can say, "Look, this thing is intelligent, because had it been done by a human being, it would have required intelligence." So we say, "Oh, that was a smart way of doing things," for example, because had that been left to a human being, well, I would have been forced to be pretty smart.

I mean, chess is a great example today—my iPhone is as idiotic as my grandmother's fridge, you know, zero intelligence of the sort we've been discussing here, and yet it plays better chess than almost anyone I can possibly imagine. Meaning? Meaning that we have managed to detach the ability to pursue a particular goal, and to be successful in implementing a process, from the need to be intelligent. It doesn't have to be intelligent to be successful.

So you are, at Oxford, the professor of philosophy and ethics of information. Tell me what that is. Are there lots of professors of philosophy and ethics of information in the world, or are there very few, and what does that encompass exactly?

Good point. It's quite a unique position; it was established by the university to cover the whole area of ethics and, shall we say, the conceptual issues raised by digital technologies at large. I also have one of my many legs, so to speak, in the cognitive science department—I grew up as a logician, so that's the link—and I studied the effects and impact of computers on society. Halfway through my career, I became essentially curious about, interested in, understanding what lies behind it all, what difference it makes in terms of our culture, our self-understanding, the way we tackle the world, the way we make sense of ourselves and the world, and that's what I mean by philosophy.

For those who are listening to us, there's an easy way of understanding this. Problems are like animals: they consume resources—instead of talking about food, you talk about, maybe, computational power, or time to process stuff—and if you take that view, then you have problems that consume and take up resources. If you want to know what's in the fridge, you have to open the fridge; there is no way of doing that by sitting tight and pretty on a chair. If you want to know whether you have enough food for enough people, well, maybe some math would be required, so that particular problem consumes mathematical resources. And then there are other problems, which are open—they don't consume mathematical or empirical resources—and I like to call them "open problems," something like, "Shall we have a party next Friday?" Well, that's not answered yes or no by checking how many people are coming, or whether we have enough wine. It's more about deliberation, thinking, weighing the pros and cons: can we find an agreement, is it a good idea, can we be intelligent about it?

And I guess the implication is that there really isn't an ethics of information itself; there's an ethics around the use of information. Is that true?

Yes, indeed. The ethics of information is a bit like environmental ethics: it's not that the environment has ethics in the same sense in which you and I might have ethics; it's more the ethics of, or about, all that concerns the environment. So today we have moved on from the good old days—you know, the 1960s, when we used to speak of computer ethics, meaning the ethical problems caused by computers. Then in the 1970s and '80s, maybe still the 1990s, we started talking about information ethics, because we realized that it wasn't just about computers; it was so much more—it was the internet, then the apps, and today IoT, the internet of things, and so on. More recently, we've been talking about machine ethics, robot ethics, then data ethics—the truth is that we've become more and more inclusive through time. The latest step is digital ethics, meaning, as in environmental ethics, not the ethics owned by the digits, but the ethics that concerns that particular environment—the digital environment, or if you like the infosphere—and the life that we have inside it.

Just to kind of understand how you look at and think about the world, I'll give you one kind of stock ethical problem, and tell me, as an ethicist, how you approach it. Let's talk about the state's use of information in an ever more digital age. The setup is that, in the past, the state could spy on people, but it couldn't do it very efficiently, because there are just too many people doing too many things—you can't listen to every phone call. But with computer technology, you have AIs that can read lips, so any closed-circuit TV camera could probably understand what anybody within its range is saying; the words spoken in every phone conversation can be digitally recognized; every email can be scanned. And that information can be used by a state—and is, in fact, used by states—to model citizens' behavior, and that can be put to the noblest or the basest of goals. So if you're advising a government, or if you're thinking about the philosophy and ethics of how that information is used by a government, where do you begin?

Well, in this particular case, I think we need to understand exactly the relationship between the goals that we want to achieve, what kind of society we want to build, and—I hope I'm not being confusing here—how we get there, the process, in the following sense: we come from a world, as you said, where this, shall we say, "monitoring," this "spying" on citizens, was limited by analogue means. We simply couldn't do it, because we didn't have the power, the computational abilities, the memory; it was too expensive, etcetera. So what we assumed at the time—imagine last century—was that that was the problem, and it was linked to a second problem, which was, okay, "What for?" Maybe it's a good idea: if you are running an anti-terrorist campaign, you want to monitor everyone's phone. Why not, if you can save lives, etcetera? In that debate—limited means, good goals—I think we missed something that was right there, that we couldn't perceive because of the limited means. Namely, that we, all of us, are very fragile.

For you and me, that fragility emerges immediately when we are, for example, driving on a motorway and all of a sudden a police car passes by—that changes the behavior of everybody immediately. Even the most honest person, even if you're already going under the speed limit, etcetera—you go just a bit more slowly, you're just a bit more careful, maybe you feel a little bit uncomfortable, because being watched changes your behavior. You start wondering whether maybe someone will make a mistake and take you for someone else, or will actually abuse the power that they have, and so on.

Now, this analogy is there to show that the old-fashioned framing—what means do we have, maybe just analogue, now digital, more or less powerful; what's the goal, is it a good one or a bad one—misses a third point, which is fundamental: whom are you having an impact on? We're not like bricks, on which you can exercise all the pressure in the world and it doesn't matter, we can take it—not true. We're very fragile, in terms of how we perceive ourselves, how we change our behavior, how we modify our goals, how open our choices are to influence. If you start monitoring people, all of a sudden what you are doing is more than just changing the nature of the means and fulfilling some ideal goals that are, imagine for a moment, fantastic; what you're also doing is changing their behavior, in a way that is probably, to some extent, irreversible. That is a major problem. So even if we forget—well, we shouldn't—that the means are not justified by the end, even if they were, remember that it comes at a high cost, which is influencing people's freedom, manipulating their autonomy, changing their self-determination. Now, that has to be really worth the effort.

So, back to your question—sorry for this long introduction, essentially—you need to ask the government, "Are you sure this is worth the damage that you're causing?" Maybe it is, sometimes. In a concrete sense, maybe the underground with plenty of CCTV cameras—because in London we've had the experience of several terrorist attacks—maybe that's a good idea. But, say, face recognition in a primary school? I don't think so. And that is the kind of ethical perspective that I support.

And I assume that is the scenario of a responsive government—presumably of an open society, free debate, and government officials who are accountable to the citizenry. But what about places around the world where they don't have that? Isn't that a bit of an intractable problem? Or are there technologies coming along that will allow people in those environments to preserve their privacy and whatnot? Does information, in this case, tilt the balance of power towards oppressive governments, or away from them?

Well, I think that oppressive, illiberal, or more undemocratic places are having a fantastic moment now. These technologies enable a level of oppression and control and monitoring of citizens that is unprecedented. It's a shame. I still believe that digital technologies, at large, are much more good than bad, but you can pervert even the best technologies towards the worst possible ends. And in this particular case—without necessarily naming the usual suspects, though we can if we want—everybody knows that there are countries out there using these technologies for the wrong reasons; and let me put this as a conditional here: at least from our own perspective, from a western, liberal, democratic society.

Also, we need to remember that in different cultures, maybe, for example, time plays a big role. Just to provide a simple example: maybe in a country with a long history and many hundreds of millions of citizens, time can also explain why the movement towards democracy is going to take centuries, not years. And perhaps, if you want to be really optimistic, what we're seeing is just a little glitch or blip in a positive story towards more respect for individual freedoms. That is a real effort in being optimistic.

So you have a new book out, called The Fourth Revolution. Can you give us the subtitle of the book, tell us why you came to write it, and what you're trying to do with it? Give us that story.

With pleasure. So, that book came out almost accidentally. You may imagine that a professor from Oxford writes unreadable, indigestible books, those research monographs that only other colleagues read, so this was a bit of an exception. It was what people call a trade book, and it was justified by a realization that, without even noticing, I had been writing things for newspapers or other venues that were slightly more accessible, slightly more interesting, but above all based on a simple idea, which was this: at the time when I started all this—the late '80s to early '90s, when it was still the internet and nothing else around—people couldn't have cared less, and a lot of the discussion was in terms of what these technologies can help us do. So there was a lot of, say, philosophy of technology, in terms of the power of technology, the change that technology can impress on the world, etcetera. And I realized that, yes, that's all true and important to discuss; however, I thought the real difference here is what it makes of ourselves, how we understand ourselves, our self-perception. And then I realized there was a circumstance where we had been there before.

So, the "fourth revolution" in the title is actually a reference to a classic analysis by Freud of three revolutions that had already happened, his own included. Freud tells us that there was a first revolution in our self-understanding, caused by Copernicus. We thought we were the center of the universe, and it was such a lovely place to be; the party was all ours. Then Copernicus comes and says, "Well, sorry, you're on a small planet in the middle of nowhere, going around the sun, and even your place there is not that crucial either." So that was a displacement which changed our self-perception profoundly—religiously, socially, etc.

We retrenched and moved to the centrality of the animal kingdom: "Okay, well, fine, maybe we are on this small planet, nobodies in the universe, but when it comes to planet Earth, we are the queen and the king of the game." And of course then the second revolution comes, which is Darwin, who says, "No, actually, you are not that important either. You're part of an evolution; your DNA has many affinities with that of many other mammals, etcetera. So get a grip."

Quietly, we had also endorsed a third centrality, and this is the Freudian point. For all of our life on this planet, we had been thinking of ourselves as transparent—"I can see what is in my mind, I can tell you what it is and what it isn't"—and therefore claiming the centrality of the mental space as uniquely ours. Well, of course Freud comes and says, "No, there's much more going on, so there's no centrality there either."

So, along those lines, I realized that what we had been going through in the past few years was a fourth revolution—hence the title—in our self-understanding, because we had thought that we were at the center of the space of information, or infosphere. We were still claiming, after Freud, "Okay, fine, we're not at the center of the universe, we're not at the center of the animal kingdom, we're not at the center of the mental space. However, when it comes to information, we are the only ones who can actually play chess, the only ones who can park a car, the only ones who can find the cheapest fridge online, etcetera." Well, you know where we're going with this: Alan Turing, computers, then the internet, then AI, and that centrality has gone out of the window as well.

I think it's good news, don't get me wrong; we're just growing up. We no longer have the teenager attitude where it's all about us, "me, me, me." It's a bit painful, though, as anyone growing up has experienced. Not being at the center of the universe, or the animal kingdom, or the mental space, or the infosphere puts you in a different corner; it says, "Okay, well, maybe we are at the periphery of this game." And so part of the book, going back to it, is to describe this fourth revolution, the change in our self-understanding, from a philosophical perspective, in terms of space, time, experience, interactions. There is also a pinch of politics and ethics at the end of it.

But go into the fourth one a little more—what are the implications of that? I mean, I can see how the other three kind of changed our way of thinking. You can tie the Copernican revolution to the Industrial Revolution by saying it was a mechanistic way of looking at the world, and we extended that by making mechanistic tools and factories, and we built things differently, we looked at the world differently, because we had this new way of thinking. If somebody fully embraces that we're in this fourth revolution, or that it has happened or is happening, what do you think will come out of that?

Well, I think it's almost like the latest season of a series that we are enjoying now—we are on the last episode of the last season, which is unfolding in front of our eyes. So think, and I'm just taking one example, although there are many more I'm happy to discuss, of the impact of digital technologies and AI on the job market. If you look at just recently—let's say only 20-30 years ago—people were still thinking in terms of, "Oh, of course, jobs in the agricultural sector are disappearing, jobs in industry are disappearing, but the third sector, services and so on, that will keep growing and growing, that's safe." Well, that sense of safety was due to a lack of understanding of the fourth revolution. Which means: "Well, no, look, guys, if that job is about handling information, we've got machines that do that better, faster, and more reliably than possibly a lot of other people." So a lot of those jobs are going.

If you start looking at the world from this new displacement, a lot of other things are happening that are sort of reorienting our perception. This example of the job market is everywhere in the newspapers, so let me give you just two other quick, simple examples, still coming from a pre-Turing, pre-fourth-revolution modern time—say, sort of, last century. One is the gluing together of presence and location. Even a generation ago, you could interact with something only if your presence there was also your location there. Some people may even remember when there were telephones where you actually had to be at the telephone, so it was ridiculous to ask someone answering the phone, "Where are you?" because you knew where that person was: you had called their number, and it was a fixed line. You wouldn't ask your friend Peter, "Where are you?" because you had just called Peter at home.

Now, that world has quickly disappeared; presence and location have been detached thanks to digital technologies, AI included, and now we can interact, as we ordinarily do—and it doesn't take a professor from Oxford to say this—online, say with a bank, without being located at the branch where we are doing our business. So my presence and my location no longer come together as a package.

I can give you another example: law and territoriality. We come from three or four centuries of gluing the two together, and we loved that. It meant "my place, my law," and, when it comes to you, "your place, your law." And there was a boundary, a physical one. Today, that doesn't make any sense, of course, with cyberspace, the infosphere, and so on. So we have to rethink that particular relationship between physical space and geography, territoriality, and the law. That's why, for example, the latest piece of legislation from the European Union talks about the protection of personal data everywhere in the world, as long as it belongs to European citizens—it doesn't care whether you're in Singapore, in Washington, or, say, in the future, in London after Brexit.

Well, your comments about jobs—that's a topic we wrestle with on this show a lot. Do I take it to mean you think that these technologies are going to do more and more of the things that people do, and displace people, but that, as historically, they'll create just as many opportunities for people to use those same tools to do more? Or do you think that we are entering a place where there's going to be a great shortage of jobs for people?

Oh, I think that the real problem is the pace of transformation, and therefore that jobs are being disrupted—the job market is being disrupted at an unprecedented pace, as everybody knows...

Hold on, let me jump in on that, because I hear that a lot, but I don't actually believe it. So let me push back on that, and tell me where I'm wrong. If you look at the replacement of animal power with steam power in this country, in the United States, we went from producing 5% of our power with steam to 95% in 22 years, so anybody who had anything to do with draft animals and any kind of animal power—boom. The assembly line, when that came along, was a kind of artificial intelligence that must have been horrifying for people—you know, no longer do you make a car one at a time in a garage; you now make cars a hundred at a time in a factory. So the assembly line was so transformative, so quickly. Then you had the electrification of industry, which, depending on what metrics you want to think about, happened in seven years, something like that, because all of a sudden, if you're using steam power and the company across town is using electricity, well, that kind of writes itself.

Yeah, you're dead.

Exactly. And interestingly, in the United States, unemployment never once—other than during the Great Depression, which wasn't caused by technology—got outside the range of 5-10%. And it seems fixed by the business cycle, so even while we were making these amazing transformations, you never see them even blip the employment numbers. So why would this be any faster? It feels agonizingly slow, quite frankly.

Well, let me qualify that, because I agree with you: transformation in the past has sometimes been disruptive in a way that's fascinatingly fast. You just mentioned the 22 years in which animal power was replaced; I trust your figure, I'm very happy with that. What I think lies behind the statement about how fast it's happening—and I'm happy to go the other way; as I said, it's not exactly the point that we need to make, but if you want to speak to this particular aspect—is that digital technology is behind, and is running, all of the other transformations. So it's not just like the examples you've provided—which are interesting and valid—where a particular technology or change or transformation comes along and changes that particular block.

With digital technologies, what's happening is a bit more like electricity. So, deciding how fast electricity changed the world—well, it's still changing it, so to speak, because computers run on electricity, so you may actually say, "You know what, it's just about electricity anyway." So I guess what people, myself included, say is unprecedented is the scope of the transformation of the digital, how it percolates through every other transformation at once. If you see something really changing quickly—say, for example, the number of people who can now actually fly, as opposed to when I was a kid, because when I was a kid it was unaffordable, so it was the train all the time—well, that is thanks to all of these digital technologies.

But let me go back to the point, because I don't think that question is particularly important—I'm perfectly happy to be wrong about it, in any case. What is important in terms of jobs is this: are we facing a world without jobs, or with a shortage of them? I don't think so. Independently of whether you like or don't like, agree or disagree with, the view that the pace is extraordinary, yes, the disruption is going to change the job market—okay, fine, maybe that is unquestionable—but I don't buy the picture that we are, as it were, having our jobs stolen by robots and AI and digital systems.

I think that picture is based on, if I may say so, two mistaken assumptions, and I can quickly go through them if you don't mind. One is that work is a finite quantity, like a pie, so if someone takes a slice, there's less for me. That's not true, and you get that immediately if you start thinking of the work that you have at home, like cleaning the house. Is there a finite amount? Never. I mean, you just stop at some point, because that's all the time you have. The other is that the threshold of economic viability of a job is fixed, which is also false. What is economically viable today, in terms of a service offered or a good on sale, etcetera, is extraordinarily lower than it was just a decade ago, meaning that the lower that threshold, the more certain kinds of jobs become interesting, possible, feasible. So if you put these two things together—there is no finite amount of work, and there is a threshold that is constantly being lowered by technology in terms of what is economically viable—well, together they show that the arrival of AI, or the systems we'll have in the future, will certainly have an impact in reshaping the job market, but not in destroying jobs, full stop. It will reshape it, generating things that we've never seen before, so we had better be ready for that.

I'm still kind of wrestling with the demarcation of these four revolutions, and sometimes, in the quest to find a common thread, one maybe sees them as rhyming, if you will, more than they do. But take Copernicus—and I'm not diminishing anything Copernicus said, obviously—when we started pointing telescopes out at the sky and we saw all of these galaxies, we saw that every single galaxy was moving away from us. And that sure looks like we're at the center. Of course we're not; the universe is presumably expanding, and everything's moving away from everything else. But in an infinite universe, the idea that we're not the center, or that we are the center, actually doesn't mean anything at all.

And we are special in the sense that there's life on this planet, and life may be a really rare thing. I mean, we don't know how it came about, and that is something truly exceptional about this place. And then if you come to Darwin and you talk about the exceptionality of humans, you know, you can say, well, we're nothing special. But I think we are. I think the amount of DNA that is different between you and a chimp is half a percent—it's a few lines of code, in computer speak—and yet we are conscious and we have language… I mean, to say we are no different than the animals is to kind of diminish the fact that we've invented human rights and written Harry Potter, you know, all of that stuff that we've done. We are kind of special—we live on a very special planet, and we are very special creatures.

On GigaOm we have this test: will a computer replace you? It's ten questions, and it asks things like: do you have to move around from room to room, or from building to building, in your job? Do you have to have an emotional connection with people? And how many people do it? If very few people do a job, it's going to be very hard to replace—like the person who restored the vintage chimney in my house; a robot would never do that, there's no business case to build one. And going through these results, I find very few jobs that a machine could do instead of a person. I think of my electrician, my plumber—there's just an infinitude of them. Is it not maybe that information, while transformative, isn't quite so "it's going to change everything"?

Well, let me put it this way—and I'm not just trying to be nice—I agree with everything you've said, and that's where we start from. The next step, to me, is understanding exactly why this is correct. So I'm not saying that there's something you've said that's not quite right; no, I'm with you, yes, exactly, but why? Why are we still exceptional, despite the fact that we no longer have, you know, that medieval picture where we're the center of the universe, or that other picture where we're totally separate—the picture we had before those four revolutions? In what exact sense are we still exceptional, still the most amazing thing, as far as we know, that has ever happened in this universe, full stop? I believe that; I believe that's the bottom line. But I want to understand, philosophically, why. Why do we think so? What justifies this?

Now here is the point that I would like to make. I think the four revolutions that we have undergone have taught us a lesson, a bit of a humility lesson. As I said, it's like growing up: we're no longer the silly teenager who, shall we say, simple-mindedly thinks that the party is great because it's about him or her. No, the party's great because I organized the party for you. Well, I'm still special, in the sense that I organized the party for you—I'm, say, the parent who organized the party for you. It's your party, and yet it reflects my special nature and my exceptional capacity. So, for example, taking care of the world—no animal, no star, no river, no sea, no mountain would ever take care of the world. That's us humans: amazing, extraordinary.

The ability to have a mental life—well, that's us. Is it a blessing? That depends on the kind of thoughts you have, and the kind of nightmares you may be suffering from, but it is amazing, it is exceptional. I like to synthesize all this—and forgive me for the philosophical moment, a bit too simple-minded—by thinking of us as, to put it in a different context, a beautiful glitch in the system. We're a bit of a glitch, because we are really outliers. You look at the world and, honestly, we shouldn't be here; that's not the way the world develops. An extraordinary planet, with extraordinary life, with extraordinary humans on it—that's weird, that's really anomalous. For some people, that glitch is a plan, and I am happy to leave that road open to people of a religious inclination; that's fine too. For people like me, it is an amazing glitch, a stroke of luck, the lottery ticket that you really want; it is extraordinary. It is something beautiful and not to be found anywhere else—that is definitely the case.

At this point, what does it mean to be a glitch? Well, what makes us special, say, compared to my dog, is—among many other things—my ability to think about what is not the case, what is not here now. For example, my ability to worry about my pension, or about what I did last year, or about whether my dog is going to find a way of getting out of the garden—which is not happening, but it might. So this openness that we have with respect to what is not the case, what might be the case, this openness to the possible—well, that to me is like a special measure of being, shall we say, a bit of a hole in the universe. A beautiful glitch that is a hole in the fabric of this cosmos of ours. That is definitely special. So, just to synthesize: I'm with you; the reasons why may or may not be the same.

So, when you say "why"—why we're here, why we're a glitch, and all of that—I assume you actually mean "how." Because a "why" implies there's a reason, and that is kind of back to that plan thing. But a "how"—how is it that we happened to come about—is a different question. So are you really asking how it is that we're a glitch?

No. I mean, we normally ask why, and then we retract it and say, "There is no reason." So you would say, "Why did I win the lottery?"—and that's a very reasonable question to ask. And then someone tells you, "You got lucky; that's how you won the lottery." And then we're back to your question, the "how." Or someone could explain, "Why did the engine stop?" Well, that's a "why"—because it works in such and such a way—and then you move to the "how." So I don't draw a clear-cut distinction between the why and the how; sometimes explaining the how is a way of answering the why.

So, as we were talking about before the show, I have a similarly titled book, The Fourth Age, and the last chapter of that book is called "The Fifth Age," where I kind of speculate about what's going to be the next thing. So what is the fifth revolution?

I'm not sure that there's going to be a fifth one, but if there were, it would be the discovery that there's another form of life in the universe. That would really change our perspective, broadly, about how special we are. Because we're still holding that view; deep down we're still thinking, "Okay, whatever happens, we're still the only mental life out here"—you know, if you exclude things just on the other side of the line of death.

You know, in terms of this universe that we perceive, we still think, "Look, as far as we know, we're the only ones." The moment we were to receive a radio signal from a faraway galaxy, that would really change our culture profoundly.

I agree. And if you find life just one time that's not related to us—like life on Mars in theory could be—if you ever find life just once, then it's everywhere, right? Because all of the numbers in the Drake equation, all of a sudden… That means there's a space battle going on right now somewhere in the galaxy.
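For reference, the Drake equation mentioned here, in its standard form, estimates the number N of detectable civilizations in our galaxy as a product of factors, most of which are unknown:

\[ N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L \]

where R* is the rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such star, f_l the fraction of those on which life appears, f_i the fraction that go on to develop intelligence, f_c the fraction that emit detectable signals, and L how long they do so. A single independent detection of life would suggest that f_l is not vanishingly small, which is roughly the point being made here.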

Exactly. So what I really mean is not just finding that on some planet, thanks to water etcetera, some bugs developed—that would be really exciting; Mars and stuff like that, super exciting—but mental life, as I said, a radio signal saying, "Hello, how are you?" That would be really super spooky, and it would change our perspective on our uniqueness once more. So you ask about a fifth revolution—I think that would be it.

You know, I like to think what it would also do is end every national conflict on this planet all at once. The desire to cut us up into "us and them"—all of a sudden, in the cosmic us and them, we're all humans, and all of our destinies are bound up in one fate, the fate of the planet. Jack Kennedy said that our basic common connection is that we all breathe the same air, we all cherish the lives of our children, and we are all mortal, and I think that's really how that revelation would kind of unite us all—we're all the same thing.

I agree with you so much that I wish some smart politician out there were to fake a signal from some far, far galaxy, just to make sure that we stop bickering so much on this planet. So even if it were not real—though I wish it were—someone could invent something, and even a forgery would do much good for our current politics.

You know, Reagan asked Gorbachev, "If aliens invaded Earth, would we stop the Cold War?" And Gorbachev said yes.

Well, it has been a fascinating chat. You're an incredibly interesting guy, and I've had a lot of fun. Your book, again, is called The Fourth Revolution, and I assume it can be had wherever fine books are sold. But if people want to follow you, kind of your musings on an ongoing basis, do you tweet, do you blog, what do you do?

Well, since I'm a professor at the Oxford Internet Institute at the University of Oxford, we do a lot of social media, so you can follow me on Facebook—Floridi, my surname, you'll find me—or you can follow me on Twitter @floridi; again, you're very welcome. Or simply on the usual mass media, and Amazon, and so on.

Well wonderful, thank you for being on the show.

Thank you.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.