In this episode, Byron and Konstantinos discuss what it means to be human, how technology has changed us in the far and recent past, and how AI could shape our future.
Konstantinos holds a PhD in Engineering and Physics from the University of Stuttgart and is the Managing Director of the IEEE Standards Association.
Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today our guest is Konstantinos Karachalios. He is the Managing Director at the IEEE Standards Association, and he holds a PhD in Engineering and Physics from the University of Stuttgart. Welcome to the show.
Konstantinos Karachalios: Thank you for inviting me.
So we were just chatting before the show about 'what does artificial intelligence mean to you?' You asked me that, and it's interesting, because that's usually my first question: What is artificial intelligence? Why is it artificial? And feel free to talk about what intelligence is.
Yes. First of all, we see a kind of mega-wave around 'so-called' artificial intelligence; it started about two years ago. There seems to be a hype around it, and it would be good to distinguish what is marketing, what is real, and what is propaganda; what are dreams, what are nightmares, and so on. I'm a systems engineer, so I prefer to take a systems approach, and I prefer to talk about, let's say, 'intelligent systems,' which can be autonomous or not. That term is a compromise, because the big question is: what is intelligence? Nobody knows, and the definitions vary very widely.
I myself try to understand at least what human intelligence is, or what some expressions of human intelligence are, and I gave a certain answer to this question when I was invited to testify before the House of Lords. Just to make it brief: I'm not a supporter of the hype around artificial intelligence, and I don't even support the term itself. I find it obfuscates more than it reveals, and it takes away from human agency. So I think we need to re-frame this dialogue; I can offer a critique of it, and also a certain proposal.
Well, start with your critique. If you think the term is either meaningless or bad, why? And what are you proposing as an alternative way of thinking?
Very briefly, because we could really talk about this for one or two hours: my critique is that this whole terminology is associated with a perception of humans and of our intelligence which is quite mechanical. That is, there is a whole school of thinking, with many supporters, who believe that humans are just better data-processing machines.
Well let's explore that because I think that is the crux of the issue, so you believe that humans are not machines?
Apparently not. It's not only that we're not machines; evidently we're not, because we're biological, and machines are mechanical, although now that boundary has blurred with biological machines and so on.
You certainly know the thought experiment that says: if you take what a neuron does and build an artificial one, and then you put enough of them together, you could eventually build something that functions like the brain. Then wouldn't it have a mind, and wouldn't it be intelligent? And isn't that what the Human Brain Project in Europe is trying to do?
This is the strange thing: everything you have said starts with a reductionist assumption about the human, that our brain is just a very good computer. It ignores the sources of our intelligence, which are not all in our brain; our intelligence has several other sources. We cannot reduce it to just the synapses and the neurons and so on, and of course nobody can prove it one way or the other. I just want to make clear that this reductionist assumption about humanity is also a religious approach to humanity, but a reductionist religion.
And the problem is that the people who support this believe it is scientific, and this I do not accept. It is really a religion, and a reductionist one, and it has consequences for how we treat humans, and this is serious. If we continue propagating a language which reduces humanity, it will have political and social consequences, and I think we should resist this. The best expression of this is an essay by Joichi Ito titled "Resisting Reduction." I would really suggest that people read this essay, because it explains a lot that I'm not able to explain here for lack of time.
So you're maintaining that if you adopt this, what you're calling a "religious view," a "reductionist view" of humanity, that in a way it can go on to undermine human rights and the idea that there is something different about humans, something beyond the purely material.
For instance, I was at an AI conference of a UN organization, one which brings the other UN organizations together around technology. It was two years ago, and there they were celebrating a humanoid which was pretending to be a human. People were celebrating this, and somebody asked the inventor of this thing: "What do you intend to do with this?" And this person spoke publicly for five minutes and could not answer the question, and then he said, "You know, I think we're doing it because if we don't do it, others are going to do it; it is better that we are the first."
I find this a very cynical approach, a very dangerous and nihilistic one. And people with this mentality, we celebrate them as heroes. I think this is too much. We should stop doing this; we should resist this mentality and this ideology. I believe that if you make a machine a citizen, and you treat your citizens like machines, then we're not going very far as humanity. I think this is a very dangerous path.
You may know about a man named Joseph Weizenbaum, who in the 1960s wrote a program called ELIZA, which was a simple chatbot. When he saw that people who knew it was a computer were still kind of pouring their hearts out to it, he turned against AI. He said that when the machine says "I understand," that is just a lie: there is no "I," and there is nothing that understands anything.
It is a manipulation.
So I assume the theory is that there are all kinds of repetitive tasks that people don't want to do, a toll booth operator for example, something a machine could do; and we don't want to put people in jobs machines can do, because that's dehumanizing. So the thesis is that there are certain jobs people don't want to do, but that the people who interact with those jobs want a humanlike experience. That's the theory: so that when I go up to the toll booth, something humanoid looks at me, smiles, and all of that. But you feel that's fundamentally manipulative, and that it devalues humanity?
First of all, I'm very much in favor of using technology to make our lives better, because it has really helped prolong our lives. And I very much like the statement of Buckminster Fuller back in the '70s, when he said that technology has brought us to a point in history where we can really satisfy all the basic needs of humanity without having to go to war against each other for it. So war is obsolete; that means we the technologists can help humanity have the choice to satisfy its basic needs without going to war. Instead, technology is used to go to war and to dominate. But it's not just the technology; it is the political system in which it is embedded.
I believe there are many things we can use technology for to improve our lives; for instance, pattern recognition. We can identify an exoplanet out there, out of the millions of signals our satellites are receiving; a human would not be able to do it in 1,000 years, and these systems can do it within an hour. This is fantastic. Pattern recognition for tumors and so on can also hugely help, but at the source it is us. It is us using these systems to enhance our capacity to recognize things. It is a kind of extension of our intelligence by using all these fantastic tools. But we should not follow the path of constructing a fictive ontology that may come at us from outside, take our position, and replace us.
Then there is the question of job elimination, this automation problem. The key, I believe, is that there is a difference between automated systems and intelligent systems, because automated systems can be very dumb; they just do the same work again and again. Intelligent systems are something different, and they pose some very different challenges, also for the organization of work. If you have an intelligent system linked to a critical infrastructure, be it the smart grid or the safety of a city or military surveillance equipment, then you cannot leave it alone, because it can cause a lot of havoc if it goes wrong.
So you need people there, and these people cannot be isolated, waiting alone in their bunker trying to avert a nuclear war. You need teams of people who understand how the systems work, who understand when they can go wrong, who can say to them: we made you, and you are wrong; we don't trust you, so we are not going to follow what you're saying. Because if we lose this agency, we are done as humanity, very fast.
So, clarify something if you would, please. Do you believe that we can use computers to duplicate human capability in a different way, but that that isn't really intelligence? Or do you believe that machine learning and the techniques at our disposal are never going to be creative and inspirational and all the rest? Do you object in the sense that it's like a cargo cult, that it cannot be done, that we cannot build artificial minds? Or are you saying we can build them, and it's just a mistake to think that that's what we are?
Okay, so we have to go back to where we started our discussion: what is intelligence? We can use tools to extend our intelligence, but these tools can never be intelligent themselves. It is we who have the intelligence. The tools are just means for us to be faster in our recognition, to make faster decisions, to become more accurate. But the systems themselves can never be intelligent, because they don't understand the context; they don't understand what intelligence is.
This is really the question of what is distinctly human, and whether we are just machines that can be caught up to by other machines that become more and more perfect. Let me give you an example. There are several levels of intelligence. If you have a chicken eating seeds from the ground, it performs an intelligent operation: you see it pick one seed, then another, and you don't exactly understand why it picks one here and another there; it is a pattern of optimizing its movements. It's the intelligence of the chicken to pick the seeds from the ground; we cannot fully understand this, and a computer would take a lot of time to understand it.
Now this is, let's say, the most basic level of intelligence. And if you have a primate, say a chimpanzee, which can solve more complicated problems and use some devices and so on, this is much more sophisticated; this is a higher level of intelligence. So what is human intelligence? How would you differentiate human intelligence from that of a primate, or of a higher animal that is very sophisticated, which has emotions and so on?
I'll tell you what I believe. We humans made a really interesting step in our evolution, and I have heard others today who talk about emergence. What emerged? There are two things that emerged, I believe, and they distinguish us from the animals. The first is that it is not that we got better at optimizing solutions to problems, because the chicken is optimized to solve its particular problem, and there are other animals which may be much better than we are at sensing danger and running away.
So it is not our capacity to solve problems in an optimal way that distinguishes us; there are animals that are better at that than we are. I think what is unique to humans, not to all of us but to many of us at least, is the capacity to pose dilemmas. Not problem solving: dilemmas, which are a specific category of problem. Posing dilemmas has no necessity in nature; one could say it is even counter to the logic of survival, because a dilemma has only bad outcomes for the one who poses it. You lose as a person, but the community wins. This is extremely interesting: no animal has the capacity to pose dilemmas, and there is no naturalistic explanation for doing it. This is an emergence, and it is inexplicable, in my view: we cannot explain it from the conditions out of which it happened. This is really a conundrum.
So I do want to come back to emergence, but before we get there, what do you mean by "we pose dilemmas"? Give me an example of one of those, please.
A 'dilemma' is a problem which has only bad outcomes for the one who poses it, and you need the courage of your heart to pose it. You see an injustice going on around you; most people accommodate, they don't want to see. They say, "Okay, it happens; just be opportunistic, it doesn't affect me." But there are people who say, "No, we cannot live with this; we have to take a position." But if I take a position, I may be killed, or my family may be persecuted, and so on. Think of all the struggles for human rights and political freedoms: these were huge dilemmas, and the people who posed them suffered. But this is the only way humanity makes progress: through people who put their own interest second and get persecuted, killed, or disadvantaged. Why are they doing this? Why are we doing this as humans? This is very interesting, and human intelligence is the intelligence to pose these dilemmas in a way that becomes actionable and may lead to a new equilibrium afterwards.
So, I read something recently that suggested that rats can think about thinking. The experiment went like this: you give a rat a puzzle, and if the rat solves the puzzle, it gets a big reward; if it tries and fails, it gets nothing. But if it looks at the puzzle and doesn't try, it gets a little something. According to this account, rats learn this: they would stare at the puzzle, and if it was really hard they would think, "I can't do that," and take the little reward for not even trying. Or they would think, "I can totally nail that one," try it, and get the reward. Assuming that really happened, and that it's repeatable and all that, isn't that kind of a dilemma? Isn't the rat thinking: on the one hand I can try this, but if I fail, I get nothing?
This is not a dilemma; it is a problem, because the rat has nothing to lose by trying or not trying. It can win by trying, but it doesn't endanger itself.
I see, so is all of that unique to humans, or do you think there are animals that act altruistically for their pack or group?
They do, yes, there are. But for me, if we talk about emergence, another phenomenon which is extremely interesting is the emergence of the political process. The political process is not what we call politics today; it is contrary to that. The political process is, let's say, the creation of a space where dissent is possible: dissenting from authority without being eliminated for doing so. This is the invention of political autonomy.
Why did humans invent political autonomy? We could be just herd animals, functioning by consensus, or by fear, or whatever. Yet we invented this thing, and there is a lot of intelligence there. I don't believe a computer can understand this kind of political dilemma, because most of the time it has to do with self-sacrifice. Think about the people who fought for human rights: they didn't have an easy life, and a lot of them were assassinated or killed. We forget what people have paid for us to be here, having this dialogue between me and you. And this is the fragility of democracy: people forget. They think it is given. It is not given; we have to keep it upright with energy.
So, to come back to our dilemma: do these technologies, the internet, this so-called artificial intelligence, promote our space of self-determination and political autonomy, or do they reduce it? For me this is the fundamental question, and it goes far beyond privacy, because privacy is just about 'me.' But privacy is something fundamental for the functioning of democracy, of the political system. If the systems we have put in place, these platforms and networks, are reducing our space for self-determination and political autonomy, they are betraying us, and then we have a problem. We have to see how we can construct systems that will not do this and will do the right thing. This is a very concrete question, a very concrete demand on the designers of these systems, the users of these systems, and the rules that govern these systems.
But with regard to the internet, don't you think it actually does help? It's a communication platform: it allows you to communicate with other people who communicate with you. And it's an information platform: it gives you access to everything in the world and lets you communicate with everybody in the world. Don't you think that empowers people more than it can be used against them?
This is really the question, and if we could answer it, we'd be in a much better place. The question was first posed by Bertolt Brecht in the 1920s, in an essay critiquing the radio. He said the radio is not a very good technology; it will not change the political situation, because with 'one-to-many' speech it will just reinforce the voice of the one who speaks to the many. It will reinforce already strong power, and as a matter of fact that is what happened almost immediately.
But just to jump in there for a second, you would have said the same thing about Gutenberg right?
No, because Gutenberg was propagating knowledge, but the radio was making the voice of the one who speaks louder. And then Brecht said that the real revolutionary technology would be a 'many-to-many' radio. He introduced some very interesting vocabulary there: the many-to-many radio is a system that allows many to speak to many without a central authority. This is the internet. This is the promise of the internet: it could be a revolutionary technology, because it could empower the many to connect to the many, to create networks, and so to diffuse this central power. But what we see happening at the same time is that it gives rise to monopolistic situations, where platforms are created which control a lot of the data and of our digital identities. This is a major flaw in the construction of the internet, and Tim Berners-Lee has conceded this very clearly. David Clark has also maintained that "we screwed it up." So two of the protagonists of the internet have recognized that something went awfully wrong.
Of course there are cases where this networking is used for political emancipation; there are some very nice examples. If you look at how Taiwan made its democratic transition, the networks played a massive role: they mobilized the population, the activists, and so on. This created an entirely different place. So it is possible, the potential is there, but it is almost an exception. In many other cases it goes the other way: manipulating people and using our digital identities for totally different things than we were hoping for. So I think that without a major effort, this promise of the 'many-to-many' networks, as envisioned by Brecht, may not play out.
Although I just have a hard time wrapping my head around that, because if I go back to the '60s, you had only three news channels to choose from, three corporations that gave you all your television news. You had one newspaper to choose from, your local paper, maybe two, and half a dozen major news magazines you could subscribe to. So it seems the past had much more concentration of control over one-to-many communication. And as I said, way back in the day with Gutenberg, one person with a press could write whatever they wanted and make 100,000 copies; Thomas Paine's Common Sense helped spark a revolution, and that does amplify one voice. So help me understand why you think today, with a million news channels and a million of everything, is somehow more concentrated than it was...
I didn't speak about concentration. I'm speaking about the loss of data agency, the loss of identity agency, and about how manipulation takes place today, which is really unprecedented.
And also the method, because you don't feel it anymore; the ultimate manipulation is to not understand that you are a slave. Slaves in previous centuries had irons chained to their legs; they could not run away. Now we think we are free, but let me give a very clear definition: for me, a slave is a person who does not have agency over his or her identity, where other people define who you are. I think we're there. It is not so bad for me, because I lived most of my life outside this, but for my kids and the next generation, before we really screw it up, we must make a very conscious effort that they do not become perennial slaves, without even understanding that they are slaves, because you cannot revolt against something if you do not even realize where you are. I don't think we want to go back to the Middle Ages with high-tech and all the glamour of high-tech.
So what would that look like—in an ideal world—what would you hope changes?
I would like the networks to give us the possibility to engage with them the way we engage in normal life: we can conceal and reveal the aspects of our personality which are important to us. Otherwise we cannot be social and political actors. If you've got a transparent system that knows everything about us, and extrapolates with algorithms what we may even do in the future, then we are in a panopticon, Foucault's panopticon, which is all-embracing and covers the past and the future and our present, and in the end we are prisoners.
The question is: who is at the center of the panopticon? Who controls the data and so on? It is not the person. I am not someone who believes in theories about people behind the scenes; this is really a machine which nobody controls. It becomes, let's say, autonomous, and this is not what we want. Just to give you an example from the American context: what happened to a very decent person, General Petraeus, who was the head of the surveillance machine? He was caught in the machine himself, over a totally different story, and he got purged because of it. So if you cannot even control your personal life, what you make public and what you don't, how can you be a social and political actor?
The machine can eliminate you at any time, and we're there; this is not the future. This is the past already. So the question is: is there a possibility to use this new technology, can we build intelligent agents which accompany us as we enter the networks and reveal only the information necessary for the transaction we are doing? If I am going to buy a pair of shoes, I don't want you to know where I am, my whole history, whom I'm married to, my birthdate, where I live; I just want to buy shoes. And every time I go onto the internet with these browsers, I see that it knows where I am and who I am, and it links me to other people. I don't want this. With these browsers, we screwed up the internet; this is wrong, this is spying on our lives. The question is: can we do it differently, and what can AI, or so-called AI, do to help us regain some agency? This is possible; it's not difficult.
Let's return to the...
And sorry, just to finish this: of course we need some foundational standards for the internet, in the protocols, which do not make us transparent. We don't want to be transparent; we're humans, and we have decency, which means we're not transparent. And if there are people who say, "Well, if you want to hide something, you'd better not do it," these people are dangerous despots, and we should not let this go uncommented, because these people hide themselves, they hide their private lives from everyone else, but they want us to be transparent. Really? We don't want this new tyranny; this is a new political system that is emerging through technology. At least as technologists, we should ask: is there anything we can do? What we are doing has an impact, a social and political impact, and we cannot ignore it anymore.
It's hard though. We're at this point where our lives are ever more transparent, and people seem more intolerant of personal failings, so we see many more of them and the outrage goes up. I hope that at some point, when you hear that "somebody did something," you'll think to yourself, well, I've probably done something like that too, and moderate your reaction. But we're not at that point right now, and it's hard to see how we get there. It seems like the world you're describing would require not so much a change in legislation as a change in human nature, which is a lot harder.
No, no, I want to preserve human nature. What is changing is the demand placed on human nature: we are being made transparent, and that is a change of human nature, because humans are not transparent. If we were, you could read on my forehead what I'm thinking, and that is not possible.
Well, excuse the interruption, but I heard something very compelling, actually from another guest on my show, who said that all we're doing is returning to the 19th century, the 18th century, where you grew up in a village of 300 people and you knew every single thing about everybody. What's new is the industrial revolution and the anonymity it gave when you moved to the city: you didn't know anybody, they didn't know you, and you could do whatever you wanted. It was Jared [Ficklin] who argued that we're just returning to the actual societal norm, where we live in tight communities, we know everything about each other, we know everybody's business, and that's just the normal state.
This is an interesting position. I would say, and I'm not the only one who says this, that even under such conditions, where people live closely together, you don't know everything about everybody. You know a lot, but not everything. So I think we must preserve the agency to conceal and reveal, depending on the aspect of personality: with somebody you love, you show more; with somebody from whom you just want to buy shoes, you reveal the minimum. But it cannot be that everything about us is visible, all the time, to people we don't know.
Right. He says...
This has never existed in human history. We have created a panopticon which reaches back in time and also extends into the future. And the political implication of this panopticon is that nobody knows who is at its center.
Right, and just to close that out: what he speculated was that we would return to an era like Victorian times, where you knew all the bad stuff about everybody, but common manners taught you never to mention it.
I want to come back to the topic of emergence, because you're a systems person and you've mentioned it a couple of times. For the benefit of the listener, and correct me if I'm wrong: emergence is a phenomenon where a system takes on attributes that none of its components have. For instance, you have a sense of humor, but none of your cells has a sense of humor. Where did that come from? There are two kinds of emergence. There's weak emergence, which says you could study hydrogen all your life, and you could study oxygen, and it would never occur to you that if you put them together you'd get water and it would be wet; but once it happens, you can figure out, "Oh I see, it bonded this way and that," and there's a link between the emergent behavior and the components.
There's another theory—highly controversial—that posits something called strong emergence, where you could study it all day long and there is no link between the attributes of the components and the emergent behavior. Some people believe human consciousness is an example, the only example perhaps, of strong emergence. Did I get that largely correct? And do you think human intelligence is emergent, and if so, do you believe that it's strong emergence or weak emergence?
I think it is strong emergence. To put it in different words: something emerges under conditions from which you cannot explain what happens. If you can explain it, if you can reconstruct it from the initial conditions, then it is weak emergence. But if it remains forever an open question, then you cannot trace it back to the initial conditions, the boundary conditions, and explain what happened.
This is of course a very strong challenge to materialists, who believe everything should be explainable, because matter is the source of all things and explains everything: it is just a matter of complexity, and if we had the tools and enough time, we would be able to explain it. I believe this is not the case. That is, again, not a scientific argument, but the opposing position is not scientific either. And there are some examples.
I gave you an example: the emergence of the ambition to go beyond power, the political process, where you have a bigger space in which people can challenge authority and survive, political autonomy and so on. It is inexplicable why this happens; nobody can really explain it, because there is no natural analogy. You'll find it nowhere else, not in animals. The same goes for the capacity to pose these kinds of dilemmas. I think this is emergent because you cannot explain it; you cannot explain why everybody else does not do it and one person does, and I don't think you ever can.
In the end, I think we humans are a mystery; we are unexplainable, in good and bad. Of course this goes against the materialistic approach to humanity, and I'm not a religious person. I do not agree with any of the religions around, but I believe we should not reduce humanity beyond what is necessary. We should accept that we are a mystery, that we cannot fully understand ourselves, and we should respect each other in this mystery and try to help each other gain some insight, because it can be very rewarding. This I would call civilization. And all this discussion about trying to emulate humanity through machines, I find deeply disturbing, because it ignores the dimension of humanity which is really worth regarding. It takes our attention and energy away from what is worth looking at, and as such it is detrimental, in my opinion.
Do you believe strong emergence is a common thing or are we...?
It is rare.
Right, like humans. And do you believe that machines, sufficiently complicated machines, could in theory?

No.

Fascinating, you're so confident in that. Explain why.
Because it is a matter of immanence and transcendence. Emergence has to do with transcendence, because emergence means something that cannot be explained by the conditions under which it arose. If it can be explained from the conditions, it is immanence: it was already there. The hydrogen and the oxygen were there; you can explain it, it's immanent. But the decision of a human to sacrifice themselves for something bigger is not immanent; we don't know where it comes from, and we have to accept that. We have to respect it and accept it, and we cannot put these phenomena in the same category; that is deeply insulting to humanity.
James Lovelock put forward something called the Gaia Hypothesis, in which he said that Earth seems to maintain itself in a state conducive to life and holds all kinds of things in equilibrium. Do you believe it's possible that systems like that may exhibit some strong emergence, that something like the Earth could have some consciousness or intelligence about it?
Very difficult question. I know the theory of James Lovelock and I like it, and I also like the name Gaia. I'm Greek; Gaia means "the earth," giving a kind of personality to... I find this very fascinating, but I cannot answer the question. I mean, the question is whether we humans are an exception, and if so, why and how? I tend to believe we are an exception, and again, I'm not religious, I'm not a Christian or whatever, but there is in us a kind of capacity, and we don't know where it comes from, to sense a meaning where no meaning can be found. It is impossible to find the meaning, because the meaning is somewhere else we don't see. It is impossible to find it here, and still we long for a meaning in this life and this universe. Where does this come from?
This is the torture: why are we really exposed to this torture as beings? Nobody can answer this question. Is it a mistake, or is there a deeper longing in us that drives us there? Where does it take us, and how do we have to behave with each other if we take this seriously? I think these are very important questions, and all this discussion about technology really takes our energy away from these fundamental questions, which are very important for our dignity and humanity: how we can live in a way that helps each other to gather human experiences, which is why we are here, to gather human experiences, not to be replaced by machine experiences.
A few days ago, I watched the total eclipse of the moon over the Greek mountains. It was something unbelievable; it happens every 100 years, and together with the Mars conjunction, every 100,000 years. What is this? What is this, if not a miracle? Should we replay this with virtual reality and see it every day? So I think we should use technology more diligently, where we think it can improve our quality of life, but not replace our experiences with technology.
Well, that is a fantastic place to leave this. What a fantastically interesting hour. I would love to continue the conversation another time if you're up for it, but we're out of time now. If people want to keep up with you and read your musings or what have you, how do they do that?
Well, I'm not a prolific writer, unlike you; I know you are a prolific writer, you write wonderful books. I prefer to act. What I'm trying to do is give people the opportunity to engage in a meaningful way. I do architecture of ecosystems: my main job is heading this standardization body, where we create ecosystems for standardization. But in addition to this, I think we have to create ecosystems for people to come together to think about these things and to act upon them. As I said, the technologies we are building should really serve our deep longings as humans; they should make the quality of our life better, but they should not reduce us.
There is a reductionist approach taken by certain technologies, and by certain uses of technology, which we should resist. We have to come together to succeed, because we want our kids to have a fulfilled life, to flourish as human beings, and we want to help them find their way through life, which is not easy and does not get easier by the year. The best way to follow what we're doing [is] through the joint program we have launched together with the MIT Media Lab, the Global Council on Extended Intelligence; the URL is globalcxi.org, and there we get people together to think and act upon the things I'm talking about here. I'm not alone, and it's important to make alliances and to act, not just talk.
Well, thank you very much; I appreciate your time. Like I said, it's been fascinating, and I hope you'll come back some time.
It was a great pleasure.
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.