Episode 76: A Conversation with Rudy Rucker

In this episode Byron and Rudy discuss the future of AGI, the metaphysics involved in AGI, and delve into whether the future will be for humanity's good or ill.


Guest

Rudy Rucker is a mathematician, a computer scientist, and a writer of both fiction and nonfiction. The first two books of his Ware Tetralogy won Philip K. Dick Awards.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Rudy Rucker. He is a mathematician, a computer scientist and a science fiction author. He has written books of fiction and nonfiction, and he's probably best known for his novels in the Ware Tetralogy, which consists of Software, Wetware, Freeware and Realware. The first two of those won Philip K. Dick Awards. Welcome to the show, Rudy.

Rudy Rucker: It's nice to be here Byron. This seems like a very interesting series you have and I'm glad to hold forth on my thoughts about AI.

Wonderful. I always like to start with my Rorschach question which is: What is artificial intelligence? And why is it artificial?

Well a good working definition has always been the Turing test. If you have a device or program that can convince you that it's a person, then that's pretty close to being intelligent.

So it has to master conversation? It can do everything else: it can paint the Mona Lisa, it can do a million other things, but if it can't converse, it's not AI?

No, those other things are also a big part of it. You'd want it to be able to write a novel, ideally, or to develop scientific theories, to do the kinds of things that we do, in an interesting way.

Well, let me try a different tack, what do you think intelligence is?

I think intelligence is having a sort of complex interplay with what's happening around you. You don't want the old cliché of the robotic voice, or the screen of capital letters that's not even able to use contractions: "do not help me." You want something that's flexible and playful in its intelligence. I mean, even in movies, when you look at the actors you often get a sense that this person is deeply unintelligent or this person has an interesting mind. It's a richness of behavior, a sort of complexity that engages your imagination.

And do you think it's artificial? Is artificial intelligence actual intelligence or is it something that can mimic intelligence and look like intelligence, but it doesn't actually have any, there’s no one actually home?

Right, well I think the word artificial is misleading. You asked me before the interview about my being friends with Stephen Wolfram, and one of Wolfram's points has been that any natural process can embody universal computation. Once you have universal computation, it seems like in principle you might be able to get intelligent behavior emerging even if it's not programmed. So then it's not clear that there's some bright line that separates human intelligence from the rest of the intelligence in the world. I think when we say "artificial intelligence," what we're getting at is the idea that it would be something we could bring into being, either by designing it or, probably more likely, by evolving it in a laboratory setting.

So, on the Stephen Wolfram thread, his view is that everything is computation, and that you can't really say there's much difference between a human brain and a hurricane, because what's going on in each is essentially a giant clockwork running its program. It's all computational equivalence, it's all kind of the same in the end. Do you subscribe to that?

Yeah I'm a convert. I wouldn't use the word ‘clockwork’ that you use because that already slips in an assumption that a computation is in some way clunky and with gears and teeth, because we can have things—

But it's deterministic, isn't it?

It's deterministic, yes, so I guess in that sense it's like clockwork.

So Stephen believes, and you hate to paraphrase something as big as his view on science, but he believes that everything is, not a clockwork, I won't use that word, but everything is deterministic. Yet even the most deterministic things, when you iterate them, become unpredictable. They're not inherently unpredictable, from some universal standpoint; they're unpredictable because of how finite our minds are.

They're in practice unpredictable?

Correct.

So, take a lot of natural processes. When you take Physics I, you say, oh, if I fire an artillery shot I can predict where it's going to land, because it's going to travel along a perfect parabola, and I can just work it out on the back of an envelope in a few seconds. Then when you get into reality, shells don't actually travel on perfect parabolas; they follow an odd-shaped curve due to air friction, which isn't linear, it depends how fast they're going. And then you slip into saying, "Well, I really would have to simulate this."

And once you're saying you have to predict something by simulating the process, well, the event itself is already simulating itself, and in practice the simulation is not going to run appreciably faster than just waiting for the event to unfold. That's the catch. We can take a natural process, and it's computational in the sense that it's deterministic, so you think, well, cool, I'll just find out the rule it's using, and then I'll use some math tricks and predict what it's going to do.

For most processes, it turns out there aren't any quick shortcuts. That actually goes back to Alan Turing, who proved that you can't in general get extreme speed-ups of universal processes. So then we're stuck with saying: maybe it's deterministic, but we can't predict it. And, going slightly off on a side thread here, the question of free will always comes up, because we say, "Well, we're not like deterministic processes, because nobody can predict what we do." The thing is, if you get a really good AI program running at its top level, you're not going to be able to predict that either. So we kind of confuse free will with unpredictability, but actually unpredictability is enough.
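This idea, that a deterministic rule can be unpredictable except by running it, shows up in even tiny programs. Here is a minimal, illustrative Python sketch (not anything from the conversation itself) of Wolfram's elementary cellular automaton rule 30: each new cell depends only on the three cells above it, yet the only known way to learn what row n looks like is to compute every row before it.

```python
# Rule 30: a deterministic one-dimensional cellular automaton.
# Each new cell is a fixed function of the three cells above it,
# looked up in the bits of the number 30.
RULE = 30

def step(cells):
    """Apply rule 30 once, with wrap-around at the edges."""
    n = len(cells)
    return [
        (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width, steps):
    """Start from a single black cell and iterate the rule."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row)
        history.append(row)
    return history

rows = run(31, 15)
for r in rows:
    print("".join("#" if c else "." for c in r))
```

Despite the rule fitting in one line, the pattern it prints is chaotic in the middle, and no shortcut formula for row n is known; you just have to simulate.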

Right, but presumably free will is something different right? I mean people don't say, "I'm unpredictable, therefore I have free will." They believe that they're in some way or another, an agent of choice, that they have a self which is in some sense acting independently of, I mean frankly, the laws of physics.

Well, they like to think that.

Correct.

I think they're mistaken.

So free will is largely, or entirely, an illusion on that computational view, and all we can hope for is unpredictability, because there's no free lunch here. The only way to know how the pie is going to splatter in the clown's face is to splatter the pie in the clown's face?

Well there's a lot of metaphors kicking around there. Yeah I think unpredictability is good enough.

Well, let me then throw consciousness at you. People say we don't know what it is, but of course we know what it is: it's the condition of having an experience of something, a first-person experience. You feel warmth, whereas a computer measures temperature, and no amount of theory about computation accounts for the question: how does matter experience the world?

Yeah, that's a classic question. Philosophers use the word 'qualia.' You can have a system that can discern the difference between red and blue, but they say your experience of sensing it is a quale, this sort of ethereal thing, and you would say, "Well, I'm going to deny that to the lowly machine; it can't have qualia." But again, it's an invented concept, and it's not clear why it wouldn't be in the machine.

There's a sort of little zen story that I like, that to me expresses something about how everything could have consciousness. The student asked the teacher, "Does a rock have Buddha nature?" And the teacher says, "The universal rain moistens all creatures." And what he means by that is consciousness is a sort of universal quality that pervades the universe like the rain, and it moistens all creatures. Everything has an element of consciousness in it, and an intelligent system is able to express the consciousness in a complex way that we find interesting and more like us. But I'm perfectly comfortable with saying, "a rock is conscious."

Well, let's talk about that for just a moment. It seems like you said two very different things. One you said: "Well it's just an invented concept, it's just a made up word." But the other thing you said is, "Oh no, it's a very real thing, it's just universal."

Yeah, well, it's a distinction without a difference. If you're saying qualia are something that we have and objects don't have, then you're asserting something that in fact is not true. And when we say it's a universal quality that everything in the universe has, then you're saying there's no difference, so you're not really making a distinction.

Let me say just another thing along these lines. There's this word, not very well known: hylozoism. "Hylo" means "matter" and "zoism" means "alive." It was a philosophy held by a lot of people over the years, up until the industrial age, when it went out of favor, and it's the idea that everything is alive. There's a slightly different notion, that everything is conscious, which is called "panpsychism": pan is everything and psyche is consciousness. But I just use hylozoism; it's a cooler word. It's weird and it has a "Z" in it, and I actually wrote a science fiction novel called Hylozoic, because I wanted to work out what it would be like if it became super evident to us that everything in the universe is conscious: an atom, a rock, a dog, a lamp, the sky.

There's a very interesting book about it that I've read several times; I'm looking for it on my bookshelf here. I'll find it later and tell you the author's name, which I can't remember. Anyway, hylozoism is the idea that everything is alive and conscious, and it's sort of a simplifying notion. If you're a materialist, you say, "Okay, nothing is conscious except for us wonderful humans, the pillar of creation; out of all the quintillion planets, only we have it."

And if you say that, then... well, there's a nice analogy. There's a guy called Gustav Fechner who wrote about this in the 1800s. He described what he called the "nighttime view of the world": we're in this giant dark warehouse, there's no light, there are just these sort of hostile corners you bump into, and we're these few little bright fireflies, these wonderful fireflies moving around in the ugly dark nighttime universe. That's the view that only humans have consciousness.

And then the daylight view of the universe says, "Well, you turn on the lights, or the sun comes in, and the entire warehouse is illuminated, and everything in there is lit up and conscious, and we're part of it all." It's almost like a hippie vision: we're in this cosmic One, we're all together here, and we're not alone. It's unlikely that anyone will find a proof that the nighttime view or the daylight view of the universe is correct, although it's conceivable; since I'm a science fiction writer, I like to imagine thought experiments where this could become evident. But it certainly makes you feel better to believe the daylight view of the universe.

I hear what you're saying and it's one of those things where people can listen to all of that and say "yes." But at a deep level, people believe if they stub their toe, it hurts. They accept that as a fact, and if they drop their iPhone, it does not experience pain. But you're saying it does?

Well it doesn't necessarily experience pain, because we have a more complicated system running in ourselves.

How is that any different from the people who say we're somehow special? If the iPhone isn't experiencing pain, and the rock isn't experiencing pain, and none of it is, then we are something different.

Okay, well you're raising some good points there. Let me switch to another line that needs to be pursued. How do we get from here to there? How do we get to the point where the iPhone would experience pain? And that's the traditional question of AI, because I can make the hylozoic explanation, and say, "That's cool, but you can't talk to a rock." And in my novel, Hylozoic, I work out that you have quantum telepathy, so you actually can talk to it, but we don't want to go there.

Let's just back up to this traditional question of AI. How are we going to get some really intelligent robots that we can hang out with? And I guess there's been various theories about how we would do it, and one was: in the 50s and 60s there was this kind of dream that we could get some really big woolly logic system that would kind of deduce everything. Every utterance you made would be sort of like proving a theorem, based on the axioms of what you'd already heard. And this turned out to be too kludgy, too unwieldy. It never really worked very well.

Then, what things have kind of switched to in modern times is the notion of the neural net, and this is where you sort of... it's like being nibbled to death by ducks, you just say "I'm just going to peck away at this problem." It's what the post office uses for reading handwritten addresses on envelopes. So we know that neural nets work, and there's this deep learning thing, they're getting more complicated structures of neural nets, they're getting better and better at doing stuff. So that's probably where we're going to get our AI from.

After all, roughly speaking, neural nets are something like what we have in our heads... we have a bunch of neurons, and they're connected and they send signals to each other, and there's certain weights on the signals. But it's all a tangle, and we have no idea what the weights are or even how to design them.

I used to teach AI at San Jose State when I was a computer science professor there for 20 years. One project we did was face recognition, and it was surprising how well it worked; it surprised me. But the thing I found especially interesting was that when you get the so-called algorithm that works, it's not some big "aha!", it's not some magic thing where you measure the angle of the nose and the darkness of the eye. You can't put it into rules of thumb; it turns out to be this god-awful heap of about 10,000 or 100,000 real numbers between zero and one. And the way it works is, I know you know all this, but for our listeners who aren't familiar, I'll just summarize it a little bit.

The way the neural net works is by a process called "training." You start with a neural net without any particular knowledge of how it should be set up. Depending how sophisticated you're going to make it, you might have one layer of neurons, or two, or three, and the thing that's been happening recently is that there are more and more layers. In each layer there may be 100 or 10,000 points called neurons, and then, like a graph, they're connected to some of the other neurons around them. Each one sends out a one or a zero, or maybe a real number, and bases its output on the inputs coming in. The input layer, the top layer of the neural net, is the one that looks at the scene.

So maybe you'd have a photograph, and each pixel would be wired to one of the neurons and would send in a color value or a grayscale value. That would be connected to the hidden layers, and then you get to the output layer, which would be outputting "oh, that's Joe Schmoe, that's Dick Clarke," whatever. And the way you train it is you have a stack of about, again, 1,000 or 10,000 faces, and each time you give it a face you say, "Okay, who do you think that is?" If it's right, you kind of reinforce the weights that it has, and if it's wrong, you change the weights. You don't do that randomly: you do something called "back propagation," sending corrections back through the web of connections between the neurons.
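For readers who want to see the mechanics, here is a minimal, illustrative Python sketch of the training loop just described: a tiny net with one hidden layer learns XOR by nudging its weights with back propagation. Nothing here comes from the conversation itself; the layer sizes, learning rate, and variable names are arbitrary choices for the sketch.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data for XOR: (inputs, target)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden neurons
# w1: input-to-hidden weights (rows 0-1 for the two inputs, row 2 is bias)
w1 = [[random.uniform(-1, 1) for _ in range(H)] for _ in range(3)]
# w2: hidden-to-output weights (last entry is bias)
w2 = [random.uniform(-1, 1) for _ in range(H + 1)]

def forward(x):
    h = [sigmoid(x[0] * w1[0][j] + x[1] * w1[1][j] + w1[2][j]) for j in range(H)]
    o = sigmoid(sum(h[j] * w2[j] for j in range(H)) + w2[H])
    return h, o

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in data) / len(data)

lr = 0.5
initial_loss = mse()
for epoch in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Back propagation: output error first, then hidden-layer errors
        d_o = (o - t) * o * (1 - o)
        d_h = [d_o * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        for j in range(H):
            w2[j] -= lr * d_o * h[j]
            w1[0][j] -= lr * d_h[j] * x[0]
            w1[1][j] -= lr * d_h[j] * x[1]
            w1[2][j] -= lr * d_h[j]
        w2[H] -= lr * d_o
final_loss = mse()
```

After training, the net's predictions on the four XOR cases move toward the targets, and as Rucker notes, the "knowledge" ends up as nothing but a heap of real-valued weights, not a human-readable rule.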

After enough training, it works. One of the new wrinkles now, and this is what Google's been using, is to co-evolve something that tries to trick the neural net you're teaching. It keeps finding examples that make it screw up, so you're co-evolving the thing that recognizes the faces and the thing that makes faces it can't recognize. These things are called GANs now, and if you look online you can find some; GAN face generators produce faces that look like famous people. And then there are these incredibly gnarly, borderline faces, where it's sort of between two faces and it didn't settle down well; it just looks wonderfully hideous and alien. And that's probably where AI is going to come from, that kind of system.
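To make the co-evolution idea concrete, here is a deliberately tiny, hypothetical Python sketch, not a real GAN implementation: a one-parameter "generator" shifts random noise, a one-neuron "discriminator" tries to tell its samples from "real" data (numbers near 4.0), and the two are trained against each other. All of the names and numbers are invented for illustration.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: samples near 4.0. The generator g(z) = z + theta starts far away.
def real_sample():
    return 4.0 + random.gauss(0, 0.5)

theta = 0.0        # generator parameter (shift applied to noise)
a, b = 0.0, 0.0    # discriminator parameters: D(x) = sigmoid(a*x + b)
lr_d, lr_g = 0.05, 0.05

for step in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    x_real = real_sample()
    x_fake = theta + random.gauss(0, 0.5)
    d_real = sigmoid(a * x_real + b)
    d_fake = sigmoid(a * x_fake + b)
    a += lr_d * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr_d * ((1 - d_real) - d_fake)
    # Generator step: move theta so the discriminator mistakes fakes for real
    z = random.gauss(0, 0.5)
    d_g = sigmoid(a * (theta + z) + b)
    theta += lr_g * (1 - d_g) * a

print("generator shift after adversarial training:", round(theta, 2))
```

The adversarial pressure alone drags the generator's output toward the real data's neighborhood; neither network is ever shown the answer directly, which is the essence of the GAN trick.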

So, I'll wrap up the consciousness stuff; my mind's reeling with a hundred more questions to ask you, like, "Once I die, according to that view, I'm still conscious, so you probably shouldn't bury me." And in the case of your rock, sure, if you cut it in half, all of a sudden you have two rocks; but if you cut a person in half you don't have two conscious entities. I think all of that just breaks down. I mean, IIT, integrated information theory, is the latest incarnation of panpsychism.

I'd love to spend an hour talking about all of that, but I'll just ask you one final question about consciousness. Our entire basis of human rights, the reason we don't torture people, comes from the idea that human beings feel pain. We have a self that experiences the world, and likewise we don't abuse animals that presumably feel pain. We're very interested in the question, "Can a 'whatever' feel?" Bacteria don't feel pain, you can annihilate them, but a dog does. That entire basis of civilization and human rights, the whole reason we have habeas corpus in the law, is because we experience the world. When you say "rocks do as well," aren't you at some level undermining that entire edifice we've built? You're saying there's nothing about us any different from a rock, so if you're willing to take a jackhammer to a rock, you can take a jackhammer to a human?

Well, no, obviously I wouldn't say that. We're not saying that we're no better than rocks; we're saying that rocks are as good as [we are]. But then you're saying, "Well, why can we jackhammer a rock?" Well, the rock doesn't care, for one thing. For another, it's important to us to maintain our bodily integrity, because we need to be able to reproduce and carry on; with a rock it doesn't matter, and if it becomes two rocks there's no problem.

But, also the thing is, even if everything's conscious, we're in the same family, we humans, we are just ‘meat people’ you know? In the same way that you treat your relatives, generally speaking, better than the people you aren't related to, or you care more about your friends than people who aren't your friends, or you care more about your countrymen...

So we're this big family, the family of humanity, and of course we would want to take care of each other and do what we can, whether or not... consciousness doesn't really come into that, it's not really the issue. We're not really taking care of each other because we're conscious, we're doing it because we love each other and we're fellow humans.

One thing I just want to throw in here about the rocks: people say, "Well, a rock doesn't know anything," and, I've told this story quite a few times, I say, "Well, look though, if you let go of it, it knows to fall to the ground." It does know something. And as for what happens to your consciousness when you die, that's another whole thread, and it was in one of my very first novels. Software was about the question of what people now sometimes call "uploading": the idea that if your personality is in some sense like a program stored in the wetware of your body, then it might be possible to extract your personality and put it into a machine.

That's become a very popular idea; there have been a zillion movies about it, and the extropians talk about it. I don't like to brag, but my novel Software is actually the very first place that idea ever appeared. I wrote it in 1979, and it was not at all an easy idea to come up with; something about it didn't come easily. At that point I was studying a philosopher called J. R. Lucas, and he used to argue that machines can't be intelligent because of Gödel's incompleteness theorem: there's always some sentence that a formal system can't prove. And he would say, "Well, we humans, you show us the sentence that the system can't prove, and we say, 'Oh sure, we can prove that.'" But his whole argument doesn't really work. Delving into that deeper and deeper, I got to the idea that a person is like hardware, the body, with software on it, the program that's running on it. Then I had the idea: why not extract the program and put it on a machine? And that's been used a million times since then.

So, moving on from my favorite topic to AI: you made some statements a minute ago that I want to explore a little deeper. You said neural nets are loosely or roughly modeled after how the brain works, and I've always found the phrase 'neural net' to be just marketing copy. The fact of the matter is, we don't really know how a memory is stored in the brain.

There's this worm, the nematode, in which we've been trying to figure out how its 302 neurons make it do what it does. We don't really know how a neuron works. It could be as complicated as a supercomputer, it could be doing stuff at the Planck level, and somehow to say, "Now we make this really simple software with different weights, and that, by the way, is how the brain works..." I want to give you the chance to expand on that, or clarify it, or double down on it. Do you really think neural nets are modeled after biology, and that they fundamentally act the way the brain acts?

Well, that's a very good point, Byron, and I'm glad you raised it. Mathematicians and computer scientists always get carried away with whatever they're doing, and you're making the point that a physical neuron is really nothing like some little dot on a piece of graph paper that says zero or one. Anything in biology is always just unbelievably complicated compared to what you thought: you expect one transmitter chemical and there might be four or thirty of them, and the connections are just super gnarly. So I think we can say that the so-called neural net of today's computer science is a first attempt, and to get something that really works, it's going to have to get a lot more complicated.

The things we're calling neurons could get closer to a model of an actual neuron, though again, there's no reason why an artificial brain would have to be modeled on the brain we have. There might be some completely different approach. There's always been this sort of dream among AI people, less of a popular theme now, that there might be some magic key, if we could just find the right thing to use, the right stuff to make it out of; but nothing else has worked at all. The closest we can come now is these tinker-toy neural nets that are not really modeled on the brain, but I think it's fair to say they're inspired by the brain.

Two things are going to happen. One is that the raw crunch of our computing ability increases, and this is the thing I mentioned that's been happening in recent years with what they call deep learning. Training a complicated neural net was really beyond the abilities of our dinky older computers, and the hope is that by adding more complexity to the actual models we're using, they might get closer to us.

Another thing that's just coming into play is the idea of casting aside silicon chips as our computational devices. This sort of connects with some of the things Wolfram says. What if we could use bio-computation? Then we'd be closer to Dr. Frankenstein: getting animal tissues or human tissues and working with these things to train, and I think that's going to be very much an option. Because if you look back, a lot of the things we used to use, we don't anymore. We hardly use watches with gears in them; we have chips in them. Technology changes, and there are solutions people used to use that they don't use anymore. We don't use horses pulling carts; we have internal combustion, and the next thing you know it's electric.

I think 100 years from now it's quite likely that the things we talk about as computers might very well be biological entities, and not these silicon-chip things we've been using. As a science fiction writer, you think about these things, the timeline of us using the chips: it's only 50 years since we started doing it, which is not a huge amount of time, and history is very long. So I think the final solution to getting intelligent beings might very well be taking the bio-computational route, and then it gets to that issue you talked about: the rights of these things. If they're biological, then it's almost like you're having slaves, and that's not a good thing.

What do you think in terms of time horizon that we might see a general intelligence?

Well, you know, for a long time now they've been saying "within 10-20 years," and it's always longer than they thought. It just seems to be a much harder problem than people expected. There's the old joke that teaching the computer to play chess is easy, but teaching it to recognize the chess board is hard. To see the pieces is hard, and it's the things we take for granted that are the hard things. So far this sort of bot-type intelligence [is] mostly being used for really annoying purposes, like making spam calls, enforcing ill-understood regulations upon the people, and making inaccurate predictions to back up harmful policy decisions. Whether we can get to these nice, really pleasantly intelligent kinds of bots or robots... I would be inclined to say it'll be 100 years. I don't think it's going to be... Rome wasn't burned in a day.

So, when you put on your prognosticator hat though and you see that we increasingly computerize things, and we increasingly develop technology which multiplies human ability for good or ill, when you net everything out, are you optimistic about the future or not?

I'm optimistic, sure. Right now, as many of us probably feel, the country's not moving in a very good direction, but, President Obama once made a point that if you're trying to steer a super tanker or a luxury liner, it takes a long time to make a turn. We get worked up about the daily newspaper, but like I said, history is long, there's a lot of things still to come, there's new generations. There was always the fear that we would completely blow it, and destroy civilization with a nuclear war or with some killer virus, and that's still a possibility, but...

Right, I mean I'm an optimist as well, as anybody who reads my writings knows. But let's be candid: we almost did. You can count the number of times we almost had nuclear annihilation. You had the incident during the Cuban Missile Crisis where the officers on a Soviet submarine voted two-to-one to launch, and only because one person there could override it... You had a Russian early-warning system that reported an incoming US missile strike, and the person in charge broke protocol, on instinct or something, and didn't fire the counter-offensive, and it turned out to be a false alarm. So you could say we got through those by the skin of our teeth.

What I'm really interested to know is, if you simply add up all the things technology can do for good or evil, what's in the good column? We can feed more people. It can help us solve a lot of problems, generate energy more quickly, probably end disease, allocate resources better; it can do a thousand things really well. It allows people to communicate better, diagnose illness, and a million other things. And what's on the negative side? It can be used to engineer viruses. Nations can use it to further totalitarian regimes and spy on their citizens in toto, because they can recognize all the words that are being said. So at some level you do the calculus on both sides of that equation, and you have to ask: is the world becoming more brittle, and is asymmetry increasing?

We've always been a world where more people want to do good than evil, but if asymmetry means that a small number of people bent on evil can do a whole lot more, how does that net out? So make me the optimistic case: that we're always going to have problems, but we're going to muddle through...

Well, there are several things I tell myself to be optimistic [about]. I mean, I have grandchildren; I don't want the world to go down the toilet. One thing is, there are so many people talking about the plague that kills everything. Well, when I was growing up in Louisville, Kentucky, we knew this gentleman farmer who had a herd of 100 cows, and he was talking about the diseases they would get, or other things that would happen, and he said, "But nothing ever kills all of them. Nothing ever kills all of them." And that's kind of a ray of hope there.

Another thing, still on the biological front: we worry about people being able to make some virus that there's no counter to, no antidote. Here again, I think we need to keep in mind that in nature, every single organism has been trying to take over the world for hundreds of thousands of years. The organisms in nature are not sitting ducks; they're not naive things, they're tough rats in an alley. So just because we make up some dippy little toy organism that we think is going to kill the world... it's like a wind-up duck, and when you put it in that alley, it probably won't make much of an impact.

Another thing, regarding the government: people are very contrary, they're sly, there are a lot of us, and there are a lot of anti-establishment people. A lot of them are actually computer people, hackers in the older sense of the word, the good, more noble kind, not the stealing-money kind. I think it's a safe bet that whatever big government wants to do, the innate resistance in humanity will find a way to counteract it. So those are things I'm optimistic about.

There really are a lot of people who appreciate the natural world and don't want to trash it. If you look at the 50s, that wasn't even a factor they thought about; it was just, "Let's pave things over, let's saw things down." There are arguments about it, and it goes back and forth, but it's not like the earth is ruined and nobody's going to save it. I find great solace when I go out in the woods. I live in Silicon Valley, but it's surprisingly easy to walk 15 minutes and be in some hills where there's nobody. Nature just keeps doing her thing, she doesn't stop: the water in the clouds, the plants, the bugs, those things aren't going away. They're doing fine, and they'll survive us. Maybe we're to some extent like a disease that the planet has. The planet can take care of itself, and hopefully it won't have to be by eradicating us.

So, ten years and nine days ago you had a brain hemorrhage. You wrote a book called The Lifebox, the Seashell and the Soul: What Gnarly Computation Taught Me About Ultimate Reality, the Meaning of Life and How to Be Happy. I don't think we can cover all of that in our closing time, but what did it teach you about how to be happy?

Let's see, it's hard to remember what I said such a long time ago. There were a number of points I came up with, and this book—The Lifebox, the Seashell and the Soul—has a title that is what I would call a dialectic triad. The lifebox is the idea of a neural-net-style computer program that imitates a person. The soul is the thing that you've been talking about as consciousness, which we think of as the antithesis. So you've got the thesis: “Can the lifebox imitate a person?”

The antithesis is the soul, which we can't find a way to emulate and catch in the nets of our technology. And the seashell is the synthesis. This is a coded reference to Wolfram, because he went on this kick of looking at a kind of shell called a cone shell, found in the Pacific. It has this little pattern of triangles on it, a somewhat random-looking pattern, but it's actually not random. It's produced by a biocomputation that creates this beautiful network of little black and white triangles. So the point I'm making there is that we can have a computation that generates something as rich and gnarly as the natural world, but it's still a computation.

So anyway, how to be happy. The meaning of life is beauty and love, okay, and how can I be happy? First of all, turn off the machine. The idea is you really can't look at your smartphone and your computer all day long. The rich stuff around you is the natural world and the humans you live with. Secondly, see the gnarl. When you look at the natural world, gnarly is the word I use to mean something that lies on the border between order and disorder. It's also called chaos in physics, and chaos here is not a bad thing. If you look at the shaking of a leaf in the air, that's a beautiful chaotic system, and it's gnarly, and it's wonderful. Third, feel your body.

A point we didn't really make earlier is that there's this tendency to focus on your brain and think that's the seat of consciousness, but it's really your entire body that you live in, so be aware of your muscles. If you're a computer person, you have to remind yourself to get out of your chair and walk around; taking pleasure in your body is important. Next, release your thoughts. The idea is to not get hung up on worrying about the obvious things. The news is very bad for you because it's a certain range of questions and topics chosen by the people who run society, which they're forcing you to think about. There are so many other things you can think about. Let's see, two more. Open your heart: people are the most interesting entities you're going to relate to, and there's a tendency to view them as ‘other,' and not as worthy of your attention or love if there's something about them that's different from you—but open your heart. I think of the analogy of barbed wire wrapped around my heart. I always need to remember to cut that loose, get rid of it and just relax; not everybody's my enemy. And finally, be amazed. That's the basic question, and we'll never know the answer to it: “Why is there something instead of nothing? Why does the universe even exist? How can I be conscious, how can I have a family, how can I talk, how can I see?” It's just a miracle, and be grateful for the miracle.

All right, what a wonderful place to leave it. Tell us—you're a fascinating guy with a million fascinating thoughts—if people want to read your latest thinking, or how that has manifested itself in fiction, where should they start? What should they search for at their favorite book store and so forth?

Well, in 2019, Nightshade Books is going to put out ten of my books. They're going to reissue nine of my older ones and put out a new one, coming out this month. You'll see it on Amazon. By August there's a novel called Return to the Hollow Earth, published together with another called The Hollow Earth, and those are what you might call "steampunk." They're set in the 1850s, and it's about this screwball idea I've always loved: that the planet Earth is hollow, like it has this hole. These guys go to Antarctica—this farm boy [who] makes friends with Edgar Allan Poe—and they ride a balloon to the middle of Antarctica. They jump up and down on the ice and it breaks, and they fall through this 1,000 mile hole to the inside of the earth, which is full of light. There are some beings at the center who are setting up this sort of neon glow, and it's a great adventure.

It's like Wonderland—they fall into a hole.

Well, Lewis Carroll is certainly one of my forefathers: a mathematician who wrote crazy stories.

Then again people can just go to Amazon and type "Rudy Rucker." I saw that you had a list of everything there?

Yeah, I really have a ton of books out. I've published something like 40 books, so you can see what you can find. If you go to www.rudyrucker.com/books, you'll find a list of them all.

Well, I want to thank you so much for a fascinating near-hour, and when you have something new you want to talk about, you're more than welcome to come back. Thank you so much.

Thank you Byron, it was a lot of fun.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.