Episode 98 – A Conversation with Jerome Glenn

Byron speaks with futurist and CEO of the Millennium Project Jerome Glenn about the direction and perception of AI as well as the driving philosophical questions behind it.


Guest

Jerome Glenn is the Director of the Millennium Project of the World Federation of UN Associations. He was a co-founder of the Millennium Project in 1996, and has helped coordinate the research behind 11 annual "State of the Future" reports. Mr. Glenn has 37 years of experience in futures research for the U.S. and other governments, international organizations, and private industry.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm, and I'm Byron Reese. Today my guest is Jerome Glenn. He has for 23 years been the Director and CEO of the Millennium Project. It's a global participatory think tank with 63 nodes around the world, producing the annual State of the Future since 1997, and it's a whole lot more than that as well. I think we're going to have a great half hour. Welcome to the show, Jerome.

Jerome Glenn: Thank you. It's nice to be here.

Well, I don't know that I gave you the full and proper introduction to the Millennium Project, which I'm sure many of my listeners are familiar with, but bring us all up to date so we all know how you spend your days.

Well I spend my days reading of course all day long and responding. The Millennium Project was actually conceived back in 1988. And the idea was: the year 2000 is coming up, and we ought to do something serious as serious futurists. So what we did was get interviewed by guys like you, and I had too much to drink the night before and said something stupid. So we decided to create a global system to create global futures research reports relatively quickly. We now have 64 nodes around the world. These nodes are groups of individuals and institutions that help find people that have the best ideas in their countries, and to bring all those ideas together and assess them; do various reports on the future of ethics, and on the future of AI, the future of all kinds of different stuff.

We're not proprietary research by the way. We do some specific research contracts, like we designed the collective intelligence system for the Kuwait Oil Company, but that was so that we would get experience doing it. But we're not a regular proprietary consulting company as such. It's more like a public utility where we're looking at the whole game as much as possible, and then people draw from our stuff and our methods and content the way they like.

And so since the millennium has come and gone, or maybe it's now the third millennium, is there a timeline focus for the group? I mean, because there's a difference between asking what life is going to be like in 5 years versus 500 years, right?

Right. Sure, and we don't tend to do 500 years too often. Although in 1999, we did write five alternative thousand year scenarios. The idea was that since everyone's looking back a thousand years [at] the Vikings, the rest of it, we figured we ought to at least look out a thousand years and see what... Those scenarios are actually on our website and you can take a look at them. But it normally depends on the issue. If you're looking at financial stuff, you're looking short range. Obviously if you're looking at environmental stuff, you're looking at a longer range. So we don't have a set timeline.

What we do have is a set of 15 global challenges that we update, and hopefully improve insights on, on an ongoing basis, and that's much of the annual report in the State of the Future. But it's also the framework for understanding global change that we have in our online collective intelligence system.

So when you write these materials, are you writing them for the general public, for policymakers, for... is there any group in particular that's like your intended audience for your materials?

Yeah. Well like any writer, we're happy if anybody's reading our stuff. But it's more for people who are actually engaged in decision making. They're the thought leaders, they're the advisers for policy people. A lot of the corporate strategic planning folks read this stuff, use it regularly. It's also used by government foresight folks around the world and an increasing number of libraries get our stuff.

University libraries do, because people in many universities are teaching sort of global strategic stuff and long-range future technology sort of stuff. And so universities are using, let's say, the State of the Future in their courses; it's like a textbook. And then those that teach methods use our futures methodology series, with 37 different methods of looking at the future. It's the largest collection of methodology around. And so increasingly the college audience is getting involved in this, but initially it was your U.N. long-term folks, your government policy people, and mostly advisors to the policy decision makers, and the corporate players.

So let's pick an issue. Our show is Voices in AI, so tell me what you think about artificial intelligence? What are some of the open questions? What are some of the things that we think might be obvious at this point?

I think one thing we're trying to get the world to get clarity on is that there's a tremendous confusion between artificial narrow, general and super AI. And it's extremely annoying. Let me give you an example. In Korea, as you know, AlphaGo beat the Go champion. And many people in Korea (I go there a lot) were going nuts because they're saying “oh my God, all these things that Elon Musk and [Stephen] Hawking [warned about]... it's here now!” And you go “No no no no no no.”

It's something different, right? I'm with you. So let's split that up into three separate things. I'm sure all of my listeners are familiar with the distinction, but of course narrow AI is a technology designed to do one thing. The only fears people typically have around it are related to automation, not any of these dire scenarios. Then there's general intelligence, which is a technology we don't know how to build, and experts disagree about when we'll get it, with estimates ranging from 5 to 500 years. And then there's super intelligence, which is a highly theoretical kind of general intelligence that has evolved so quickly, to such a state, that it is as incomprehensible to us as we are to an ant.

Yeah. Well I would add two things to that.

Go ahead.

One is I think it's proper for Elon Musk and the rest of the folks to start raising red flags right now because as you point out, we don't know how long it will take to get to a general intelligence. It might be 10 years, but it may be longer. We don't know. But if we do get it, we also don't know a more important thing, and that is how long it'll take to go from general to super: where super sets its own goals without our understanding. That might never happen. It might happen almost immediately. We don't know. So it's better to panic early on this.

Well let's talk about that for a minute. So let's go with the general intelligence first. We'll start in the middle. I've had almost 100 people on this show and they're all for the most part practitioners in AI. And I ask them all, do you believe we can build a general intelligence? And then I ask “Are people machines?” Let me just ask you that to begin with: “Is our brain a machine, do you believe?”

Well my early education was philosophy. So in the philosophy world we always say: “well it depends on your definition.”

Well let me ask it differently then. Do you believe that with regards to the brain and by extension the mind, that there is nothing that goes on in them that cannot fundamentally be explained by the rules of physics?

I think that's a useful... Much of the contribution of United States philosophy was what's called pragmatic philosophy. It said, “I don't know what the truth is, but I know what works and what doesn't work.” I think taking what you said as a working hypothesis and pursuing it that way will produce more value than just guessing. So I'm sympathetic to that, but I don't know enough truth to know the answer to that.

The idea though is that all the people who believe we can build a general intelligence -- to a person -- hold that belief not because they know how to do it, but because they begin with that simple assumption that people are machines.

And that's why I'm saying: begin with that. It's sort of like the Gaia hypothesis. If you begin with a hypothesis, you get better science. So yes, I think that's a good rational approach.

But the interesting thing is even though 95% of my guests say ‘yes’ to that... they believe people are machines, when I put that same question on my website to the general public, 85% of the people say ‘no, of course not. That's ridiculous.’ So there's this huge disconnect between what AI practitioners think they are and what the general public believes they are. And you say it's a useful assumption. It's not a useful assumption if it's wrong. I mean because what it is...

On the contrary, au contraire, Copernicus had wrong assumptions, but he made some useful insights.

But people by and large, when they're told by the Elon Musks of the world, “You know it's an existential threat…” When the late Stephen Hawking said it's the last thing we may be allowed to make; when people are told that over and over, it's never qualified [with] “well if we're machines that's the case” because then people are like “well that's ridiculous, we're not machines.”

So I think it is certainly used to frighten a lot of people, like the people you mentioned in Korea and all the rest. And so if the assumption is wrong, then you wouldn't want to alarm and frighten people with something that's actually impossible.

I don't quite agree. Birds fly, airplanes fly. They don't fly the same way; they're not the same thing, but they both fly. I think regardless of your assumption, whether we're able to get to general intelligence... I don't know the truth, but I'll certainly take a bet that we will. There's no reason increased complexity couldn't get there... because even if you have some cytoplasmic resonance between the "universe beyond our understanding" and our own cytoplasmic brain activity that generates minds, matter and energy in the mechanical sort of stuff could generate a behavioral mind in a similar way, in the same way that the plane flies: it doesn't fly with cytoplasm, but it flies nevertheless.

So I wrote a book about this 30 years ago called Future Mind, and the thesis of the book essentially was that we would be so interlinked with technology in our bodies, and the external built environment would be so full of sensors and AI and general AI, that the two trends would merge into a conscious technology. And the distinction between one and the other would be sort of “who cares anymore?”

In the same way that in early email, you had a telephone and a computer and you could certainly know the difference between a telephone and a computer. But in the act of e-mail you couldn't separate them. So I'm suggesting that consciousness and technology will be inseparable eventually in the future.

And on the distinctions in these questions, people will say that was like the old [saying] ”How many angels can dance on the head of a pin?” I don't think we'll care in the future. We'll be so beyond a lot of these questions, so merged with all this, that we'll just move on. Just like when I talk to you right now: am I talking to you or am I talking to a machine? Well, clearly I'm talking to a machine, but we sort of get over that when I say I'm talking with you.

So that's well and good to a point, but there really is an issue: if machines do independently become conscious and they do experience the world, then...

I'm saying they might not independently do it. We may... in my defense, I mean if it becomes independent, then I do get a bit worried. That's why I want to do the merger. If you can't beat them, join them as fast as possible.

Right. But I mean if we did have conscious machines, those machines would acquire rights of a kind similar to our own rights. I mean, if they could experience the world and pain and all of that. So what are your thoughts on consciousness?

It's called, of course, the last great question: one that we neither know how to ask scientifically nor what the answer would look like. It's the experience of being you, obviously; it's the difference between: you can feel warmth, but a computer can only measure temperature. Where do you think that comes about? Can machines achieve it, do you believe?

I think it'll achieve something that we don't understand yet, but yes, I think it will achieve something. You roll back to the early evolution of the universe, you wouldn't say “where the hell's the mind?” Some of us think that with increased complexity, the mind emerges from all of this. That's a perfectly good view I can live with. I can also live with the view that the universe is way beyond my understanding, and there may be some sort of sentience to the whole game, but I don't know. There may be some resonance between my mind and that. And people experience that sort of sensation, and I'm not going to knock that either.

Well, let's take the emergence hypothesis. Let's assume it's emergence. Do you believe in strong emergence? There are two ways to think about emergence: one is how you can study oxygen for a year and hydrogen for a year and never know that if you put them together you get water, and it has this new emergent property: it's wet. But once it happens, you can look at it and say “Oh, I see how that happened.” But then there's a notion of strong emergence, where the emergent property cannot actually be comprehended; it's not just that it's beyond our comprehension right now, but that how the emergent property arises cannot be comprehended at all... Because it is hard to see how matter can be alive and how a cloud of hydrogen can in the end experience anything.

But just because we don't understand something, and don't even understand how we could understand it, and can't even conceive of how it... that doesn't mean we won't be able to eventually.

Okay, well let's back up and go down to narrow intelligence. There are of course people who think about automation (and I know you have a report about the future of work that is brand new). Some people are afraid that automation is going to destroy a bunch of jobs. Some people believe that we'll have a kind of permanent Great Depression and there'll be a group of people who can't add value in this new world. Other people say “No, the story of these technologies is that they add so much productivity to people that everybody is better off, and you can never run out of jobs because there's an infinite number of them -- any time you can add value, you've just created a job.” What about you? Where do you land on all of that?

Well, we wrote three alternative detailed scenarios, not simply images of the future but actual real scenarios, where you sort of have cause-and-effect links between the future condition and the present, naming ranks and serial numbers and decisions and so forth, so each of them is about 30 pages of thinking on this, so to speak. And with one set of assumptions, one story comes out. With a different set of assumptions, another story comes out.

But to jump into your question: in the essence of it, a lot of this has to do with, I want to say, the artists of the world. You and I, most people, sort of identify themselves by saying well, I'm a broadcaster, I'm a writer, I'm in this; you know, we identify with our work. And if governments and corporations and universities don't take this seriously and prepare for some of this transition stuff, as we talked about before... this is really complex stuff. It's hard to oversimplify it.

But if we don't slowly make the adjustment... Look, the analogy I used was Abraham Maslow: you have your basic needs met, and you eventually get to self-actualization. Well, the world in 1980, the majority of it was in extreme poverty. Today it's under 10%, so a lot of the basic needs of civilization are being met, and we've got the social media stuff taking care of so much of our belonging needs and self-esteem needs. So we're moving as a species, not fully yet, but toward the idea of self-actualization. And we've had economies for each of these sorts of echelons going along. We don't have it yet for self-actualization.

What I'm suggesting is that the artist community, the writers, the movies, television, slowly evolve us toward saying “We all have a civilization where AI is not a threat, but an adjunct to making the civilization work” -- in the same way that we have our old reptilian brain making our body work. We don't worry about the heartbeat unless you've had a heart attack; we don't worry about the bodily functions. They're taken care of. And that frees up our mind, our brain, to be forward-looking and thinking and self-actualizing. And by analogy, we're looking at a self-actualizing economy slowly evolving.

But even in the best-case scenario out to 2050, we had only about 3 billion people in that sort of world. And another billion still working in the normal sense, where they have a job and a salary, because you might only need a billion people at that point. But then you still have another billion or two in informal economies and unemployment. Even in the best case, it's hard to make the transition really look great. But we can certainly move more in that direction than ignoring all this sort of stuff and then waiting until we hit... because to me the big deal isn't narrow AI knocking out truck drivers, because that's not going to happen in one day. It's being phased out; there's a lot of preparation you can do for that. A lot of people have got good ideas about that. What I worry about is the shift from narrow to general, because that shift, if it occurs, would have far more impact on jobs than narrow AI, on a global basis.

So to be clear, if there is no general intelligence, if it's just narrow, you're not as worried about the transition?

Correct.

So I've spent a fair amount of time trying to figure out the half-life of a job, and I think it's 50 years. I think every 50 years we lose half of all jobs.

Well yeah that’s reasonable, maybe a little bit faster in the future.

Well, I'm not sure. I'm not sure it's any faster. The OECD did a report where they actually looked at the numbers. I mean, you know what the real problem is, of course, and you don't have to go back to the Industrial Revolution to see this; you can go back to the Internet. Because if you went back 25 years, when the Mosaic browser was released, and you said to somebody “Hey, you know in 25 years you're gonna have billions of people use this thing. What's going to happen to work?” You would have said “Well, there won't be any more travel agents or stockbrokers or Yellow Pages or newspapers,” and you would've been right about everything, right? Everybody could call the ball on what was going to get destroyed, but nobody, literally nobody, saw eBay, Etsy, Airbnb, Uber, Google, Amazon.

You'll have to do a Google search to find a book called Future Mind, and you'll find all that. It's the smartphone; I didn't have the word ‘smartphone,’ I called it the TOK, the tree of knowledge.

There were inklings of it, of course. I mean, even [Nikola] Tesla wrote about [how] you'll carry it in your shirt pocket and all of that. But those specific companies, I mean specific business models, took a while to emerge. No, they weren't self-evident in 1995. We know that because those companies and their kin did not emerge right away.

So the challenge always is: you can see the jobs that are going to be lost, and you never can see the jobs that are going to be created. And it gives the illusion that more jobs will be lost, and it's like, well, what are these people going to do? And it's like, they're gonna be chort wranglers.

Yeah, that's why I'm saying self-actualization becomes a bigger deal. For example, I am making a living out of being myself at this moment; am I at work, am I at play, am I having fun? This will all sort of slosh together. We can connect to networks worldwide, as you and I are doing right now, by doing the things we actually are curious about and are turned on by. All the great billionaires, when they're asked “What do you do?” answer that they follow their passions. The average person never gets around to that. But the average person might think more; the average person might very well start to get into that idea: ‘Hey, I can express myself through new kinds of software, new kinds of market things around the world.’

So if I go to the Louvre (I want to go to the Louvre anyway), I can say to the world “Hey, I'm at the Louvre... one euro, click here; one dollar, click here.” I'll have my little camera on and I'll walk you through the Louvre, and I'll have 30, 40, 50 people agree to do this. That takes care of a pretty nice lunch. I was going to go to the Louvre anyway.

When you say you're running out of work: you never run out of being you. I mean, you will always grow, always be more curious; the more curious you get, the more you keep evolving, and I see no end to that. The trick is: can you connect to enough folks who are also interested in that to help provide a basic income for you? Which of course leads to the idea of the basic income; if we get general intelligence, we might need it.

You know the essay John Maynard Keynes wrote in 1930 about life in a hundred years? He said that we'd only have to work 15 hours a week, because he made some predictions about real GDP growing. And the interesting thing is, he was right about it. It even grew to the high end of what he suggested.

So everybody asks the same question, which is like I have all this stuff, all this time, why am I working so hard? And the answer to the riddle of course is that if you wanted to live a 1930 life, i.e. 600 square feet, no air conditioning, no medical insurance, grow your own food, make your own clothes, don't go on vacation, you could do that in 15 hours a week. But people don't. So human wants are kind of infinite, so I want to reference back to your earlier comment that a billion people may be all that we need. But if human wants are infinite...

Yeah, well there's a difference between how many people you need to make civilization work versus how many people you need to make civilization worthwhile. Those are two different things. The worthwhile part is an expanding pool; the making-it-work part can be a shrinking pool, in the same way as the analogy with the body: you've got this little tiny brain back there running the system. You don't need the whole gray matter to run the system.

So you keep qualifying it, as you should, with all bets are off if we hit general intelligence. Can you construct a scenario where we don't make general intelligence that does not rely on anything supernatural or spiritual? Could you come up with a purely scientific scenario that says you can't actually build a general intelligence?

To me, my understanding is that you're going to have general intelligence in a completely mechanistic view. General intelligence does not require a soul; it just requires that it goes all over the Internet of things, all over the sensor networks, all over libraries, all over everything it can get its hands on, to come up with an answer to a novel problem. That to me is general intelligence. That's just completely a machine; it doesn't necessarily need a soul.

But the primary tactic we're using for narrow AI right now is machine learning, which is a simple idea. It says you take data about the past and you study it and you make projections into the future. And by definition therefore, it only works with problems where the future is like the past. You can get it to identify cats because cats look the same tomorrow as they looked yesterday, but it is not at all clear that all things behave that way.

And so I put the question to people on the show: do you believe that narrow AI slowly evolves into general intelligence? Or is general intelligence something completely different that we haven't even started working on?

Good for you. Now let's talk about that. You know Ben Goertzel, right?

Yes.

Okay, I've known Ben for eons, and he used to believe that general intelligence required a completely different strategy to develop, that it wasn't an extension of narrow, right? I think he's changed his mind lately. You know about SingularityNET?

Sure.

Okay, now he has said in there, and in some emails back and forth, it sort of sounded like he thought that if enough narrow-AI developers were in collaboration and sharing information about blah blah blah blah blah, that that complexity could itself generate general intelligence. But that's new. I don't remember him ever saying that.

Well the interesting thing is that when I put that question to my guests, they're pretty much split on it, which is unusual. And I think it's also very interesting that the number of people who are working on general intelligence is actually quite tiny.

You know that Putin has said, “Whoever rules AI rules the world.” And you've got China's goal for 2030, right? Military intelligence, as we both know, and research tend to be ahead of what gets applied to the public. And the question is: can you make a generalization? Is it 5 years, 10 years, 20 years? Then of course people say “well, it depends on what it is.” But in any case it's years ahead.

Now, the power structure of the world is in a deadly serious competition to get this, including organized crime, by the way; I see overlaps with Russian organized crime getting involved in this, because they have enough money to buy the best talent money can buy. So the new arms race is this software and cyber warfare, where we're headed into a merger with these new forms of intelligence, and I think they could very well be closer to this than we realize. So I think the number of people working on this in public, in the private corporations, is, as you say, minuscule compared to the potential. But I would bet my life that folks like the NRO (National Reconnaissance Office) and the rest of these folks are up to their eyeballs in this stuff.

Right, you know it's interesting, because what I have noticed is the closer you are to being a practitioner, the less close you think general intelligence is. Like I have people on the show who say, “You know, we don't have voice recognition able to tell the difference between A, H and 8; like, that's where we're really at...”

Face recognition we thought was going to take a while. That went pretty fast.

Yeah I mean, to be perfectly clear, we don't have a construct for how you go from... Okay I'm going to study a bunch of data about the past, and I'm going to make projections into the future, how do you go from there to a computer writing Harry Potter? Or any of the rest of it, anything that requires...

We don't have transfer learning knocked out; we don't understand how people can apply information so seamlessly; we don't know how people can get trained on sample sizes of one, how we can pull the essence of something out, even at a very young age. I mean, we are so far away from that. Like Andrew Ng says, worrying about that particular technology is like worrying about overpopulation on Mars. That's really where we're at.

Yeah but don't undersell the AI competition between the United States, China, Russia, the European Union. I think it's a deadly serious game, and if I were running operations in the United States, I certainly wouldn't let anybody know how far down the road we are.

You can see a lot of national security applications for narrow intelligence, right?

Well yes, there are.

But what would you say would be a national security application that you're imagining for general intelligence?

I'd rather not talk about that; that's very serious. We're involved in information warfare right now, unfortunately, and I don't want to add one more molecule in that direction.

So I guess then, it looks like we're running out of time anyway, I would like to leave you with my big question, which is: you're this guy who's like a giant in your field, you've been doing this futures study for decades, you're steeped in it on a day-to-day basis; this is your bread and butter, it's like air to you. In the end, are you optimistic or pessimistic about our future?

Depends: if I'm talking to you and I think you're too pessimistic, I'll tell you how it'll work. If you're too optimistic, I'll scare the hell out of you. It's like bedside medicine: we don't know the truth, but you can be situational about it.

But where I would like to jump in here is on AI and work and all that. We ended up with 93 things that should be done to make the transition work, and each one had a whole page of international assessment. So there's a whole slew of stuff that can be done, which should make us optimistic. And there are a whole lot of people who are not even taking this stuff seriously. That's a good reason for pessimism.

All right, well let's leave it there. I want to thank you for an invigorating and exciting half hour. It was fascinating, and I hope you come back and tell us what the newest thing we should know about the future is.

I'd be happy to, thank you.