Episode 53: A Conversation with Nova Spivack

In this episode, Byron and Nova talk about neurons, the Gaia hypothesis, intelligence, and quantum physics.


Guest

Nova Spivack is a leading technology futurist, serial entrepreneur and angel investor.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today, I’m excited we have Nova Spivack as our guest. Nova is an entrepreneur, a venture capitalist, an author; he’s a great many other things. He’s referred to by a wide variety of sources as a polymath, and he’s recently started a science and tech studio called Magical in which he serves as CEO.

He’s had his fingers in all sorts of pies and things that you’re probably familiar with. He was the first investor in Klout. He was in early on something that eventually became Siri. He was the co-founder of EarthWeb, Radar Network, The Daily Dot, Live Matrix. It sounds like he does more before breakfast than I manage to get done in a week. Welcome to the show, Nova.

Nova Spivack: Thank you! Very kind of you.

So, let’s start off with artificial intelligence. When I read what you write and when I watch videos about you, you have a very clear view of how you think the future is going to unfold with regards to technology and AI specifically. Can you just take whatever time you want and just describe for our listeners how you think the future is going to happen?

Sure, so I’ve been working in the AI field since long before it was popular to say that. I actually started while I was still in college, working for Kurzweil at one of his companies, an AI company that built the Kurzweil Reading Machine. I mean I was doing early neural network work there, that was the end of the ‘80s or early ‘90s, and then I worked under Danny Hillis at Thinking Machines on supercomputing and AI-related applications.

Then after that, I was involved in a company called Individual, which was the first company to do intelligent agent powered news filtering, and then began to start internet companies and worked in the semantic web, large-scale collaborative filtering projects, [and] intelligent assistants. I advised a company called Next IT, which is one of the leading bot platforms, and I’ve built a big data mining analytics company. So I’ve been deeply involved in this technology on a hands-on basis, both as a scientist and even as an engineer in the early days, [but also] from the marketing and business side and venture capital side. So, I really know this space.

First of all, it’s great to see AI in vogue again. I lived through the first AI winter and the second sort of unacknowledged AI winter around the birth and death of the semantic web, and now here we are in the neural network machine learning renaissance. It’s wonderful to see this happening. However, I think that the level of hype that we see is probably not calibrated with reality and that inevitably there’s going to be a period of disillusionment as some of the promises that have been made don’t pan out.

So, I think we have to keep a very realistic view of what this technology is and what it can and cannot do, and where it fits in the larger landscape of machine intelligence. So, we can talk about that today. I definitely have a viewpoint that’s different from some of the other pundits in the space in terms of when or if the singularity will happen, and in particular I’ve spent years thinking about and studying cognitive science and consciousness. And I have some views on that, based on a lot of research, that are probably different from what we are hearing from the mainstream thinkers. So, I think it will be an interesting conversation today as we get into some of these questions, and probably get quite far into technology and philosophy.

Okay great! Let’s gradually peel that onion. Let’s go through the outer layers of what’s real today, what’s likely tomorrow, where you think the hype is, where you think consciousness comes from, what you think the machines will be able to achieve, and so forth. So, let’s gradually get there.

You start off with some qualified statements: that yeah, it’s exciting, but it’s overhyped. So, do you think that we’re going to have another AI winter? Because many people would say the opposite: that it’s finally, consistently delivering things, and because of that—even if it drags a little bit—it’s never going to be like, “Oh that AI is not going to…”

I think that the AI winter will occur, but it will happen more in the venture capital arena. Yeah, machine learning technology, as distinct from what some people would call AI, is certainly mainstream today as a result of the availability of cheap resources in the cloud, and libraries which enable this mass-scale machine learning in the cloud.

So, that’s real, it’s finally affordable, and it does deliver results in certain areas. So, that will be a part of more and more products and services going forward, and there’s no question about that. However, within about a year, just mentioning AI in your company’s elevator pitch will probably no longer result in a funding round.

So, with regard to what’s real, Kevin Kelly says it’s like electricity, you’re just going to plug into the AI wall. Andrew Ng, who’s not one to make a preposterous statement, also thinks that it is a ubiquitous technology that’s going to kind of touch every part of the organization. [He’s] on that side of the fence with what’s real and what’s doable in a reasonable amount of time. Would you agree with those sentiments, funding aside?

Yes, I mean we should probably define our terms more precisely. There’s machine learning, which is effectively a form of fancy classification, and that’s happening all over the place in the background to make apps better at recognizing anything from languages, to images, to things in videos, to what you say when you talk to a device. So that capability is going to be baked into the interface of all kinds of things. AI, on the other hand, real quote “intelligence,” such as what we are starting to see with services like Siri and Alexa, that is still a big question mark. I mean what we’re seeing today with simple services that can answer questions or even orchestrate simple tasks, you know is not really that impressive. There is not really much or any reasoning taking place. So, you know, that’s really the question. What about reasoning?

So, let’s talk about that for just a minute because there is no consensus definition for what artificial intelligence is and that is for two reasons: one, because there is no consensus definition for what intelligence is in general, and second, because of the word ‘artificial’ which people differ on what that means.

Other than the word “artificial” and the word “intelligence,” it’s like a perfect phrase. The simplest definition would be “that it reacts to its environment.” So, your sprinkler system that comes on when your lawn is dry, that is actually a form of intelligence. But you could set the bar incredibly higher. It isn’t a debate, just a clarification: [how] are you using the term?

Well, maybe the way to discuss it may be in terms of a continuum, with different levels of intelligence in that continuum. So, yeah, I mean, by one definition, any system that fights entropy could be considered intelligent. That’s a pretty low-level definition of intelligence, and even a rock would satisfy that. As you move up the scale, we now see integrated information theory—you know, more advanced definitions of what makes a system quote “intelligent” or, potentially by some people’s definition, “conscious.” So how do we dial that in?

I would say that you’ve got a stimulus and you’ve got a response, and there’s a black box in the middle. And the question is, how can you characterize or differentiate the black box from other black boxes? One way to think about it is, what would be the complexity necessary to define or describe the transform that takes place between the set of all stimuli and the set of all responses that this thing can generate?

For some systems, it’s a very simple mapping, a one-to-one mapping. For other systems, it’s incredibly complicated and conditional, with a lot of reasoning and non-determinism in the middle. So that’s where we get into more advanced forms of intelligence that begin to resemble what humans can do. Right now, what we see are basically very simple systems that can classify things. A stimulus comes in, a classification task takes place, and a set of probabilities comes out for the things that it matches. And you know, you choose the thing or things with the highest match.

And that’s about it, that’s very simple. You wouldn’t even really say there’s any kind of higher-level reasoning or conceptual thinking going on in these systems. So, that’s what a neural network does. As you move up, though, to an expert system, you know, what was previously called an expert system is now called artificial general intelligence or AI, that’s where you have to have a model of the domain—a model of the world that system operates in—and that’s a higher-level construct. It’s not merely a matching or a classification task, but often it involves higher-level concepts about the various actors and entities in that world, and the rules for how they can relate, and then the system has to reason against that to figure out what is possible in a given situation. And sometimes it’s complicated because there can be competing goals or there can be more than one alternative for what can happen.

So, that’s where you get into a more sophisticated form of intelligence that’s far beyond what neural networks today are doing. Neural networks don’t maintain these kinds of high-level constructs, and basically, they really can only address situations for which they’ve already been trained to recognize and classify. A more sophisticated form of intelligence has a model of the world and is able to reason about situations that it hasn’t been trained on.
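To make that distinction concrete, here is a minimal illustrative sketch (not any particular production system; the classes, weights, and input below are invented) of the “black box” just described: a stimulus goes in, a probability comes out for each thing it can match, and the system simply picks the highest one. Note that the mapping is fixed, with no world model or reasoning, which is exactly the contrast being drawn with expert systems.

```python
import math

# A tiny, made-up "black box": a stimulus (feature vector) goes in, a score is
# computed for each class, the scores become probabilities (softmax), and the
# system simply picks the class with the highest match.

CLASSES = ["cat", "dog", "bird"]

# One hypothetical weight vector per class; the numbers are arbitrary.
WEIGHTS = {
    "cat":  [2.0, -1.0, 0.5],
    "dog":  [1.5,  0.5, -0.5],
    "bird": [-1.0, 2.0, 1.0],
}

def classify(stimulus):
    """Map a stimulus to a probability for each class."""
    scores = {c: sum(w * x for w, x in zip(WEIGHTS[c], stimulus)) for c in CLASSES}
    top = max(scores.values())                  # subtract the max for numerical stability
    exps = {c: math.exp(s - top) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: e / total for c, e in exps.items()}

stimulus = [0.8, 0.1, 0.3]                      # an arbitrary made-up input
probs = classify(stimulus)
best = max(probs, key=probs.get)                # "choose the thing with the highest match"
print(probs, "->", best)
```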

And you made a passing reference, in your hierarchy, to a black box that had a layer of non-determinism in the middle. Is that a theoretical construct, or are you saying present-day systems have a…

Oh, I mean, you could say a neural network is sort of non-deterministic, right? But it’s very hard for a human to explain what a neural network does. Even people that make neural networks can’t exactly tell you what’s going on inside. It’s too complicated.

So, how is that non-deterministic? I mean the machine it’s running on is deterministic and therefore…

The machine is deterministic, yeah. Although, for many neural networks, you could say that if you were able to see every single computation that happens, you would be able to deterministically say what the result is going to be. However, it’s not clear that the human brain works that way.

Quantum mechanics doesn’t necessarily work that way, and whether human intelligence or higher-level intelligence is deeply connected to the substrate, that’s a deep question. But if it is—and you know there are people like Penrose who would say that it is—then there’s a level of quantum uncertainty in the system, where probabilities and interference patterns play a huge role in what the brain does. If that’s the case, it may not be possible to determine or strictly predict what the system will do.

So, let’s do that, because you’re right that Roger Penrose says that there are problems that can be shown not to be algorithmically solvable, which the human brain can nonetheless solve. So, he suggests that our brains [do] things that classical computers, as we understand them, could not replicate. You make reference to that. Do you concur with it? And if so, what are the implications of that?

Well, I believe that he’s right, but I don’t think anybody has answered this question scientifically. I don’t think there’s a definitive answer. But my hunch is that that is correct, and that’s because I think there is a deep connection between the human brain and the substrate that it’s running on, which is quantum mechanics.

I believe that the human brain is a quantum computer, not a classical computer. So, you know, that’s an important distinction because quantum computers are capable of doing things that the classical computers cannot do. And that’s really what Penrose is getting at. He basically believes that microtubules in the brain are quantum resonators and that’s where the computation is taking place, which would mean that computation is happening at many orders of magnitude smaller scale than neurons. And that a neuron itself is a kind of supercomputer.

If that’s the case, then the activation of the neural network in the brain is a very high-level phenomenon, relative to what we see today. It’s a very high-level emergent phenomenon from a system that is maybe operating much closer to the Planck scale, in fact. If that’s the case, the amount of computation happening in the human brain is vastly greater than what we think it is today. And therefore, if that’s the case, the singularity, if it ever happens, is much further off in the future than Kurzweil thinks.

Well, this is exciting to talk to you [about], because as you alluded to in the beginning, this is certainly, I guess you would call it, a more minority viewpoint. So, it’s great to hear somebody who’s so well versed in it advocate for it. I’ve never really fully understood, though, why the microtubules in the brain would be any different from the microtubules in skin cells or in the cells of a bacterium—

They are not necessarily, except that the brain is, you could say, kind of wired into that level. It’s a learning machine that’s wired into that layer. So, if it’s the case that—and forget about whether it’s microtubules or something else—if it’s the case that quantum level activity has a major impact on what neurons, axons, and dendrites are actually doing, you know what’s going on in synapses, you know if it’s really tied to the quantum level, then it opens up some big questions about what type of computer is the brain.

You know, I think a really strong supporting point for that position is, sometimes people look at the brain and say, well, you know, there are a hundred billion neurons, so it makes sense that we can’t model it yet. But you know, there’s been a project underway for twenty years to model the nematode worm, arguably the most successful organism on the planet. Like ten percent of everything that is alive is a nematode worm, [and] there are as many neurons in its brain as there are Cheerios in a bowl of cereal, like 327 or something. Yet, we don’t even have the beginnings of being able to model that brain and create, as it were, a digital nematode, a form of digital life.

Right, they’ve been trying to do that. I mean we’re making primitive models of neuronal systems.

And you think that’s a fool’s errand, to try to model the brain, because we’re looking at a neuron as, at most, some kind of binary—

It’s not a fool’s errand any more than, you know, using systems of differential equations to model physics is a fool’s errand. Right? You’re kind of chunking reality at a certain level and making various inferences about it based on a model that you’ve constructed. And that may have some predictive power. That doesn’t necessarily have explanatory power, and there may be things it can’t predict.

I was overstating that to say that perhaps people who think, “Aha, we’re going to build this, and you’re going to have emergent intelligence come out of it, emergent consciousness come out of it, that no matter how—

Well, let’s make a distinction: you certainly can have emergent intelligence even from very simple systems. I spent years working on cellular automata. Many people listening are probably familiar with Conway's Game of Life, and maybe A New Kind of Science by Wolfram, or Ed Fredkin’s work at MIT, [which] goes back to Stanislaw Ulam’s work and to von Neumann. All of these kinds of models of finite cellular automata, artificial life from the Santa Fe Institute in the ‘90s—all of this work on emergent computation and chaos theory, that’s all about how systems can do intelligent things with very simple rules, local rules operating on simple components. So, you know, systems can actually be quite intelligent in a completely bottom-up and emergent fashion, and neural networks illustrate that as well.

They can respond to a range of stimuli with outputs that are optimally determined based on the stimuli, and that can be according to some fairly complex functions if you will. And when you start getting systems of these little stimulus response systems, and you wire them up to each other, into feedback loops, you start to get very dynamical complex emergent systems which are extremely interesting.

And that’s why, you know, [in] The Game of Life—very simple rules can generate outputs that seem like bacterial cultures. [It’s] very, very interesting. And Wolfram has also shown in his work that there is a level at which systems can compute in a way that is basically too complex for us to understand, and that all systems that achieve this level are effectively computationally equivalent to each other. So, that’s interesting. That is a form of intelligence, and that’s the problem with this word: it’s very loosely defined.
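As a concrete illustration of simple local rules producing complex emergent behavior, here is a minimal sketch of the Conway's Game of Life mentioned above: just the standard birth and survival rules on an unbounded grid. The glider pattern and the number of steps are arbitrary choices for the example.

```python
from collections import Counter

# Minimal Conway's Game of Life: every cell follows the same local rule,
# looking only at its eight neighbors, yet complex global patterns emerge.

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live neighbors
    # (birth), or 2 live neighbors and it is already alive (survival).
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# A "glider": five cells whose pattern propels itself across the grid.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # after 4 steps the glider has shifted one cell diagonally
```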

But you know, that is not necessarily a form of cognition, or human-like cognition. We have to really make a clear distinction. You know, it’s interesting that our vocabulary for talking about cognition and intelligence is so primitive; we have so few words for making distinctions in this arena. You know, it’s said that Eskimos have words for—I can’t remember how many—different kinds of snow.

You know, we have a lot of words for different forms of capital in the West. That seems to be something we’re very good at making distinctions about, but intelligence is an area where we have a fairly blunt instrument today for talking about it. There is not just one form of intelligence. There are many levels of intelligence. In fact, intelligence might be a multidimensional space. There might be several dimensions that we need in order to describe intelligence accurately.

It’s funny because Wolfram, in A New Kind of Science, which you refer to, speculates that you can actually generate the entire universe, everything, you know you and me, everything, all the Eskimos and their words for snow and all that, and you know, with very few lines of code.

Well, that’s the hope, and it stems from Ed Fredkin, who started the computer lab at MIT and was one of the first people to take the original cellular automata ideas from Stan Ulam and start to apply them in physics. So, Ed Fredkin started the field of digital physics, in my opinion, and Wolfram took it up years later and really innovated it.

But lots of others are working in that space. Toffoli and Margolus at MIT, who I worked with, built a parallel computer just for computing physics with cellular automata, and I worked on that machine. You know, Danny Hillis worked on that at Thinking Machines. But what Wolfram has done is actually explore the space of all possible cellular automata in a way that nobody else has, and systematize it. And I don’t think anybody will fully understand his work in the near future. I think it will be fifty years before people fully understand and appreciate his work, maybe longer. But it’s important work, actually.

In any case, the point here is you can build all kinds of emergent systems, but making an emergent system doesn’t necessarily mean that that system has the potential to be as intelligent as what we consider a human to be. You can potentially simulate or build a universe this way. Wolfram’s trying to find a simple underlying graph automaton, a kind of network automaton if you will, using simple graph rules to search for something that’s potentially a candidate for our physics. It’s possible [that] our physics is reducible, [though] it may or may not be as reducible as most people think. But if it is, there may be a way to describe it with simple emergent rules, simple computational rules that can generate this behavior from the bottom up at a very low level. That’s what he’s hunting for. If that’s true, the same kind of network automaton that describes the fundamental layer of the universe, whatever that layer is, would then of course ultimately be generating everything that emerges from it, including the brain.

Let’s talk about that reducibility of physics. So, for the benefit of the listeners, there are two kinds of emergence that we may be referring to here. One of them is so-called weak emergence. Emergence, broadly speaking, is when systems take on characteristics that none of the individual components have. So, you can spend a year studying oxygen and a year studying hydrogen and never kind of understand that if you put those things together, they become wet—this brand-new thing. But once this happens, you’re like, “Ah, I can kind of see how that happened.” That’s weak emergence. Everyone agrees that happens.

But then there’s the idea that there may be something called strong emergence, where a system takes on characteristics that none of its individual components have, and there actually is no way to reduce them back to the components. You can never look at it and say, “Oh! I see how that’s happening.” And some people reject strong emergence because it seems almost an appeal to some kind of mysticism—which adherents say emphatically it’s not; it’s a property of nature, as it were, that we just don’t really understand. What do you think on that topic?

Well, it’s interesting, there’s been some research lately that’s starting to show it is possible to describe emergent-level behaviors that cannot be reduced to the underlying system. Systems can generate new levels of structure that cannot easily be reduced back down. I’ve always had a hunch that there’s something like that going on in the universe. Another way of thinking about it is that there may be feedback in both directions. It may be that it’s not just bottom-up, but there’s, you could call it, feedback from the top down. So, it’s very hard to understand what’s going on if you’re only looking at things in a bottom-up, emergent view.

For example, it has been shown, or people are theorizing, that physical laws can be locally variant, depending on what’s actually in a region of space. Something about what’s going on in a region of space can actually impact the underlying physical laws in that region, to some degree. If that’s the case, that’s an example of feedback in both directions. We also see that happening with the human mind and brain. We are able to use our minds to change the underlying behavior of our physical brain. So, there’s a high-level construct, “the mind” if you will, and it’s pushing activity back down, potentially all the way to the quantum level. So, if that’s the case, if that feedback goes in both directions, then it’s extremely difficult to really unpack the causality behind anything.

If the brain is a quantum machine, and we read in the media that we’re building quantum computers—I saw a person at Google saying that within ten years, all deep learning will be done with quantum computers—[then] are those two things analogous? Is the use of the term “quantum” for the machines we’re building the same as when we say our brain is a quantum system?

I mean ultimately yes; in the near term, no. The quantum computers that we’re building today are quite simple. They have basically a few qubits, a few processors, and the kinds of interactions that are possible within and between them are fairly simple. What the brain may be doing could be far more complicated. The brain may be making use of quantum tunneling and non-locality, you know, the EPR effect. The brain may be deeply based on the collapse of the quantum wave function being conditioned by activity across many processors rather than within just one. Whereas in a qubit, you’re looking at the collapse happening in just one processor.

So, you know, I think that it’s very likely that there are quantum effects across the neurons, not just within the neurons, and potentially even between different minds, because quantum systems can resonate with other quantum systems. Once you get to that layer, you can’t really even confine it to the physical skull anymore. You could really say that the quantum field, which encompasses the entire universe, is intimately connected to what each different brain is doing. And it’s likely, in my opinion, that there’s a deep physical connection to the substrate. This is where my views kind of differ from what I would say are the views today of people who are overly enamored with the metaphor of computers.

There’s a view, which I call “magical complexity theory,” that is very widely accepted today: that if you just add enough things to a system and make it complicated enough, it just suddenly, magically becomes intelligent and conscious. That’s kind of like the Skynet theory: you know, just keep adding stuff, and suddenly the Terminator appears. I don’t believe that. I don’t think that it’s just a matter of complexity. I think that there’s more to the story. I don’t think consciousness is merely information. I don’t think that simply having the information to describe a conscious system would be equivalent to the consciousness of that system. So, I’m probably more in the camp of people like Searle, who believe that there’s something kind of unique and special about consciousness that is not the same as information.

Yeah, I want to get to that, all of that, but you’ve got so many things. I keep making all these notes. So, backing up maybe forty-five seconds, you, therefore—and I’m putting words in your mouth, so I could be completely wrong—wouldn’t dismiss out of hand things that somebody else might regard as some form of you know ESP or telepathy or telekinesis or these pseudo-sciences.

No, I mean I’ve seen that stuff happen. I’ve witnessed that stuff happen, and I even have seen it in the lab. So, I know that stuff is possible, and I think those terms are just things that we don’t have the physical explanations for today, but that doesn’t mean there isn’t a physical cause.

Fair enough. Do you think that in the quantum aspect of our universe, you can find scientific answers to those questions?

I mean, it’s easy to throw the word quantum at any problem and say that’s the new answer for everything. So, I’m not going to say that’s the only thing we need to explain what’s going on. It’s definitely the next step. There’s probably a step beyond what we think of as quantum mechanics today, because there’s still a lack of connection between the quantum explanation of the world and the relativistic explanations of the world. Those two layers, or levels of scale, aren’t connected yet. That’s really the missing piece. My guess is the big insight is going to come from graph theory, topology, and that’s in a way what Wolfram is trying to do. He’s trying to make that connection between the quantum level and the relativistic level by unifying the systems with a type of graph-theoretic approach.

Others, superstring theory and others, are also using approaches to build that bridge. There may be an information-theoretic approach or a statistical approach that builds that bridge. These will probably all turn out to be equivalent, in the end, to some very simple universal rule, and that’s the hope of digital physics. I actually believe it’s possible to get there. But, when we get there, the universe may end up looking a lot more like a network than what we think of it as today.

So, do you have an opinion on the Gaia hypothesis that the entire planet is…

So, there is a wonderful book—actually, I have it right here, it’s called Living Systems, and it was by James Grier Miller. It’s out of print, but it was published in 1978, and it’s the Bible on this particular question of what a living system is. What they found is that there are, I think, nineteen different subsystems that all living things have, like ingestion, excretion, digestion. Any system that has these nineteen different subsystems could be said to be a living system, and by that definition, not only a cell, a person, or a beehive, but also a city, a company, or the entire planet or galaxy—they could all be considered living systems under that definition. Now, they might all be living systems, but that doesn’t necessarily mean that they have the same level of intelligence.

You know, it’s an interesting question. Take a galaxy or a planet. If we just take our galaxy, you know that includes our planet, that includes us, then you can say that there are intelligent things in there. But that doesn’t mean that the planet is intelligent, or the galaxy is intelligent just because it has pieces that are intelligent. You could say there’s some intelligence in it, but the whole system isn’t necessarily intelligent. So, what makes a system intelligent and what defines the boundary of its intelligence?

So, we talked about Penrose earlier when we were talking about artificial intelligence, but Penrose put forth his theory in part as an explanation of consciousness. I am assuming you have strong views on that. Where do you think consciousness stems from?

Alright. That’s an important question. That’s probably the most important question, in my opinion. I’ve been researching this from a lot of different angles. I’ve looked at it from the physics angle, I’ve looked at it from the computer science angle, and I’ve looked at it from the philosophy angle, and even from the religion and spirituality angle. There are a lot of different approaches to understanding consciousness, and my view is that consciousness is a lot like electricity. It’s already there. We don’t create it. There’s a fundamental field, if you will, which, in the right kinds of systems, emerges as what we think of as consciousness. It’s baked into the physical substrate; that’s my view of consciousness. So, I don’t think we ever create it or destroy it. I don’t think it’s something we synthesize, just like electricity.

You know, we don’t create it, we don’t destroy it, but we are able to make it emerge in certain kinds of systems. You know, whenever we make a system that’s powered by electricity, we’re not creating the electricity, we’re just taking the electricity that’s there and channeling it through a certain kind of system. And I think that what we think of as consciousness is something like that. That’s distinct from what we think of as intelligence. Intelligence to me is information processing; it is the ability to manipulate systems of symbols or abstractions according to logic or some form of rule, which could be statistical. But you know, systems that use some kind of rules, or system of rules, to manipulate symbols or abstractions can be said to be intelligent. But consciousness is a different thing altogether. There are some nuances here.

First of all, when we say ‘consciousness’ here in the West, we’re referring to a special kind of experience which has a subject and an object in that experience. There’s an aspect of this experience, which is knowing what’s taking place, and then there’s what it knows. So, that subject-object or dualistic view of consciousness is a kind of coarse definition of what’s going on. That structure can be simulated and modeled and built into a system, and it can appear to have a structure. What I’m interested in is that subject part of the experience. If you zoom in and turn your focus to the subjective part, the subject that is supposedly knowing the object, my question is, “What is going on there? How does the subject know? What is doing the knowing?”

It used to be that people would posit a kind of a homunculus, you know there’s a little man inside our head that’s looking at what we’re looking at. Or it could be turtles all the way down, you know, a series of homunculi, all knowing each other. Those kinds of infinite regress arguments don’t work, for me. There’s a Buddhist view which I think makes a bit more sense, which is that this concept of a knower of a subject is kind of an illusion. It’s not that there isn’t anything taking place, but you can’t actually say it is a knower or a thing. In fact, the Buddhist view of consciousness would equate it to something a lot more like what the term ‘reality’ means to us.

That is, if something appears, it is known, but you don’t have to have a self or a person to know it. The knowing is actually baked into the substrate, so just by existing somewhere in the universe it’s being known, if you will, by the universe. A human, or a knower that’s equivalent to human, is a special kind of system that’s designed to create an intelligent model of what’s being known in a certain part of the universe.

That intelligence is an information construct that reflects something that’s taking place, but the knowing isn’t in that system, the knowing’s not in the construct, the information system that’s being generated there. The knowing is separate from it, and we mistakenly equate the two. So, the deep insight is that the knowing itself—this ability to know—doesn’t come from us, it isn’t confined to our brains, it isn’t something we can simulate.

It’s already there in the substrate. We appropriate it as something that we make, but that is a mistaken assumption. We think that it’s ours, and we even say that we are it, but the thing that says “we,” “me,” or “I” is a construct that is completely separate from the thing that is actually doing the knowing. That’s a very, very important insight. Therefore, it is possible to build systems that model the intelligence that we would call human intelligence, that seem to synthesize it and mimic it.

But the ‘knowing’ will not be in those systems. So, then the question is: Is the knowing simultaneous with such a system, so that you would say if you made a system that was sufficiently arranged, and it acted like it was knowing, then it would be knowing because the knowing is already there? That’s the big question.

My view is that it probably is possible to do that, but it may require wiring that system to the substrate in a way that we currently cannot imagine. It may be that in order for the inherent knowing capability of the substrate to coincide and be deeply connected to the dynamics of an information system, we need to wire those together at a very deep level. And that’s where I think quantum mechanics is probably going to be fruitful.

Because I think that quantum level devices may be capable both of the information processing that we equate to intelligence today, and the direct connection to the substrate, to the quantum nature of space and time, to unify those in a single device.

And in fact, I’m working on that. I haven’t talked about it publicly, but I actually built a device that seems to do this. It’s still experimental and needs a lot of testing, but I have built a device, in a lab somewhere in the world, that does this.

And you heard it here first on Voices in AI. I’m trying to keep up with you here, and there was a whole lot in that; I’m sure it’s a lifetime of thinking of yours distilled down into two minutes. Help me understand it. Let me give you two traditional problems you’ve probably heard of, and help me understand, in those problems, what you think is going on, based on what you just said.

One of them is Frank Jackson’s “problem of Mary.” So, to set it up for the audience, there’s a person named Mary. She knows everything about color, literally everything, like in a God-like way. She knows everything, there is nothing about color, about photons, about her brain, about the cones in her eye [that she doesn’t know]. She knows everything. Yet, the catch is she’s never seen color before, and she’s been in this room with black and white all her life. And one day she opens the door, and she sees a red cardinal outside; she sees red for the first time. Did she learn anything new? In other words, is experiencing something different than knowing it?

And the second one is, when people want to understand the difference between intelligence and consciousness, oftentimes you hear the story of when you are driving and you kind of just space out, and then all of a sudden, two minutes later, you’re like, “Whoa, I don’t remember driving here.” In that moment, when you were driving, you were undoubtedly intelligent. You were merging with traffic and all that, but you weren’t conscious, and that difference between how you feel, like “Whoa, I don’t remember driving here,” and how you felt one minute earlier, that’s also consciousness, a way to understand it.

So, qualia, redness, experiencing something, and this feeling of feeling are kind of two ways people understand consciousness. How would you explain those two examples using kind of the language and the model that you have just articulated?

Sure. In the case of qualia, in the case of Mary seeing color, there is definitely new information in that experience beyond her descriptions and symbolic representations of color. First of all, there’s direct data coming from the senses which is not captured in these descriptions, and that direct connection, that direct sensory experience you could say, is wired from the substrate all the way back to the substrate.

So, at the end of the day what’s really experiencing what the senses receive? The data comes in, it gets interpreted, it gets classified, it connects to a conceptual system, but at the end of the day that conceptual system still doesn’t really see or experience the color. That qualia is something that only the substrate itself is capable of.

That’s what I’m saying. I’m saying that fundamentally, the nature of reality is, you might say, the unification of a form of emptiness, or what some people would call space or energy, and the fundamental, pure form of consciousness—by which, in this case, I do not mean dualistic consciousness; I mean the very base-level awareness that’s inherent in everything in the universe.

That awareness is not intelligence; it doesn’t mean that a particle has a personality. It means that for something to even be, at all, the quantum wave function had to collapse. And in order for that to happen, an observation of some sort kind of had to take place. What’s making that observation? The substrate. So, effectively, for qualia to arise, an observation has to take place; an observer has to observe something.

Who or what is the observer? The observer has to be something which is able to cause the quantum wave function to collapse. And for that to happen, as far as we know, it is not enough to merely make a description of the system. Something actually has to observe it. And that’s what I’m testing with the device that I have made.

And we are finding some interesting things, but what I would say at this point is that not everything is capable of observing. Only certain kinds of things are capable of making observations, and that is a clue as to how to differentiate between things that are conscious and things that are not, in the way that we think of consciousness.

Now, in terms of the second question, I think that fundamentally, we are trying to describe a science that we haven’t arrived at yet. We are like people in the Baroque period of science trying to describe post-Newtonian physics that hasn’t emerged yet. So, it’s extremely difficult to put these into terms that make sense in today’s framework.

What I would say is that it’s going to take time for a vocabulary to emerge, and for repeatable experiments to be formulated that can, bit-by-bit, reveal what’s really going on here. But I think that ultimately the idea that a system that’s functionally equivalent to something that we think of as intelligence is automatically conscious is wrong.

So, would you say, based on what you just said, that consciousness is the greatest scientific question we don’t even know how to ask?

It’s a beautiful way of expressing it. Yeah, I think consciousness is actually the same question as what is space? What is time? What is energy? And those are questions which actually, maybe cannot ever be answered, because there’s nothing more fundamental that you can use to describe the answer.

Or is there?

There could be. We don’t know. But the point is we’re inside a system. We’re by-products of phenomena that are emerging in a system and we, those phenomena, are trying to ask questions about the system. It’s like we’re in the Matrix, and we’re trying to ask about what is the Matrix in. And it may not be the case that it is possible, within the system, to explain or describe what is beyond the system.

In a way, you can look at Gödel, and Gödel showed that any system that’s equivalent to what we think of as mathematics will either be inconsistent or incomplete. That means either there will be things you can prove in the system that are contradictions, or there will be things that are true that you cannot prove within the system.

And so, you have only those two choices: you can either have something that’s inconsistent, it has contradictions, or [something that’s] incomplete—it can’t answer everything that you know is true. That is probably our fate, scientifically. At least, if we’re trying to make any kind of formal system, it may end up that that’s the dilemma.
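For reference, the dichotomy being appealed to here is Gödel’s first incompleteness theorem. A rough, informal statement (under the standard hypotheses that the theory is effectively axiomatized and strong enough to express basic arithmetic) is:

```latex
% Gödel's first incompleteness theorem (Rosser's strengthening), informally.
% T is any effectively axiomatized theory that can express basic arithmetic.
\[
  T \ \text{consistent} \;\Longrightarrow\; \exists\, G_T \ \text{such that} \
  T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T .
\]
% In words: any such theory is either inconsistent (it proves contradictions)
% or incomplete (there are sentences it can neither prove nor refute).
```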

It may be that to truly understand what is beyond the boundaries of the universe, or the mind, we have to be willing to go to a kind of direct experience, and this gets into a realm which is outside of science, and some might not view it as even valid to explore.

But it may be that to truly understand the substrate, you have to experience the substrate, that you can’t actually measure it with anything because there’s nothing deeper than the substrate to measure the substrate with. And if that’s the case, there are limits to what science is going to be able to tell us. That doesn’t mean science isn’t going to be useful; science is going to be very useful for explaining what happens in the world, how the world works for the most part.

But there may be a point at which the explanatory power of science ends, and there are still phenomena that happen that just cannot be explained. Those are the places where the system is incomplete and always will be.

And that’s what you mean by “outside the realm of science.” It isn’t literally outside the realm of science. It’s outside the realm of science to offer an interpretation.

It’s outside the reach of science because at the end of the day, you have to have a formal system that you’re working with. And in any formal system, you can go back to Gödel about this, there’s either going to be contradictions or holes. That’s what Gödel is saying.

Now that means, no matter what we do in science, if science is equivalent to a formal system, there are going to be things that just don’t make sense, that don’t belong together in the theory, or there are going to be holes, things that we know are true, but we can’t prove. So those holes are beyond the reach of that formal system, but you can prove that there’s something there.

And that means that it is possible, in fact, it’s logical that in a formal system, there will always be things that the formal system cannot fully explain. No formal system can be comprehensive and universal. Therefore, that means that science certainly can take us very far, but it cannot take us infinitely far. There are things that can’t be described or explained with any scientific theory.

I want to back up just a minute because you said something really fascinating that I want to go a little deeper on. So, this is a reference to “not everything can make an observation.” To put that in context, I’m occasionally asked, like, what’s the craziest thing you know? And it is undoubtedly this phenomenon, which is, if a listener is interested in it, google “double-slit experiment.” This is something that physics has known about for a hundred years. It’s proven. Everybody agrees that it happens, and you can do these experiments with what a university physics department would have.

I mean, the experiment is in no way contested. In fact, Richard Feynman said that if you understand that, you understand quantum mechanics. The problem is nobody understands that. The “that,” in this case, is that it can be shown that particles don’t exist unless somebody observes them and forces them to become something, to be something.

I would even say, more specifically, the double-slit experiment shows that the path that a photon takes is effectively indeterminate until an observation is made. When the observation is made, the system seems to congeal, or we say that the wave function collapses, and then the system seems to have made a choice about what happened. So, the outcome, if you observe the system, [is that] you generate a different output than if you don’t. That’s effectively what this is saying. If you look closely to see whether a photon goes through this hole or that hole, this slit or that slit, you get a different result than if you don’t make that measurement. And that’s really weird. That is often what is called wave-particle duality, the fact that light behaves both as a wave and as a particle. Nobody knows why.
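To put a rough equation behind “you get a different result”: in the idealized textbook treatment, writing the quantum amplitudes for reaching a point x on the screen via slit 1 or slit 2 as psi_1(x) and psi_2(x), the intensity pattern depends on whether which-path information exists. This is a standard sketch, not a description of any specific apparatus.

```latex
% No which-path information: the amplitudes add, and an interference term
% (the fringes) appears on the screen.
\[
  P_{\text{unobserved}}(x) = \left|\psi_1(x) + \psi_2(x)\right|^{2}
  = |\psi_1(x)|^{2} + |\psi_2(x)|^{2}
    + 2\,\mathrm{Re}\!\left[\psi_1^{*}(x)\,\psi_2(x)\right]
\]

% Which-path information recorded: the probabilities add instead, and the
% cross term (the interference fringes) disappears.
\[
  P_{\text{observed}}(x) = |\psi_1(x)|^{2} + |\psi_2(x)|^{2}
\]
```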

[I want] to add quickly to that—and then I want to hear your thoughts on it, because we’ve only really thought of observers as people, and so I want to talk about that. But to add more complexity, it seems that the problem works across time. Hypothetically, in your mind, imagine there’s a video recorder recording this experiment, and there’s an outcome. Let’s say that the outcome, which slit it went through, left or right or neither, is recorded on film. If you take that tape and that film and you put them in a box for a hundred years, and then a hundred years later you open them up, and you watch that video and then develop the film, one thing will happen. If you take a hammer and smash the video and develop the film, there will be a different thing on there. And there are two schools of thought on this, by virtue of causality or what have you, but it even seems to happen out of time. So, what is your take on that, and what is an observer? Is a bacterium an observer? Is a rock an observer? Is a dog an observer?

There’s actually something called the “quantum eraser,” an experiment which establishes that wave function collapse can even be caused backwards in time. So, it is possible to make an observation that seems to somehow go backwards in time and change a system, prior to when you observed it.

The question is, as you asked, what is an observer? What’s capable of making an observation? One way that physicists will talk about that is an observer is anything that’s capable of creating path information about a system. That is something that is capable of determining or measuring the path, or potentially the path that some particle like a photon traveled over. If a system is completely isolated and no path information comes out of that system, that’s equivalent to saying it’s not being observed. For a system like that, you can’t say what state it’s in. So, you’ve heard of Schrödinger’s cat in the box. Until an observation has happened, no path information is coming out of this system, nothing’s observing it, and therefore, the cat is neither alive nor dead.

So, one way of thinking about it is, some special type of system or some phenomena is capable of interacting with a quantum system and sucking some path information out of it or tapping some path information out of it, and path information is probabilities about what things did. So, it’s happening at almost like a pre-physical level, these probabilities. If you can get these probabilities about what a system’s doing out of that system, then you are observing it.

Anything that can interact with something else, in a sense, you can say that there’s an observation, a mutual observation taking place because if something can interact with something, that’s causing a kind of wave function collapse. But the question is, why is it then that when you create a double-slit experiment, the wave function doesn’t immediately collapse? Why doesn’t the physical apparatus observe itself and cause the collapse?

That does not happen. So, there seems to be something special—you could say it’s happening at a mathematical level—about getting the probabilities about one system into another system. And we don’t understand why or how that happens, or what is capable of making that type of observation. It may be that an appropriately configured video camera or measurement system can do that.

For example, in the double-slit experiment, it’s kind of a misconception that it’s always a human observer causing the collapse. In fact, in the physics experiments that have been done, it’s not a human observer; it’s a physical system that makes the observation. But ultimately, a human looks at that data, and nobody can really say what’s going on there prior to a human observing it.

And is there anything special about us making that observation? Actually, it’s probably testable. People haven’t really created the experiment to test it yet. I mean, could you configure a double-slit experiment and put a rat in front of it, and then an ant, and a mouse, and a human? Could you configure these to determine whether they are all capable of ultimately causing the wave function to collapse? It might be.

I think at some level, we know it isn’t because all the bacteria in the room would have collapsed it.

Yeah, I mean, they’re not necessarily interacting with that apparatus. The question is: if it were possible to get path information out of a system, but it required a certain ability to observe that path information, what is necessary to make that observation? Is it a video camera? Is it a photon detector? Or does it have to be—

I saw a video, and I only read the subject of it, because I knew we weren’t going to talk about that in this chat. But it was you saying you believe we live in a simulation. If that were the case, and I don’t subscribe to that, but if that were the case, couldn’t you say the interpretation of that experiment is that the computer running the simulation is just cutting a corner, it’s just a hack? Like, why should the computer work out all the details of where that atom is going, where that photon’s going to go, if nobody’s ever going to see it? So only if there’s somebody in the room does the simulation have to spend the CPU resources to make it, and everything outside the room doesn’t even exist, because nobody—

Yeah, that’s kind of like how tiles are computed in massively parallel games. It’s interesting, actually, that similar optimizations to the ones you see in these massively parallel games might apply to making a computer that could compute efficiently at universal scale, but we’re making a mistake in assuming that there are finite computing resources in the universe. That’s not something that we know is necessarily true. We may not need to optimize like that. In any case, I just want to make sure it’s clear [that] I’m not asserting that we live in a simulation, although if it’s possible to live in a simulation, then I think it’s highly likely that we live in a simulation.

Right, but the part that might not be possible is the fact that we are conscious within the simulation, so it would need to be—

Well, I think it may be that consciousness is the gateway to determining the answer to questions like this, because if consciousness can’t be simulated, then the universe can’t be simulated—because consciousness is occurring in the universe.

So, we’re running out of time here, and I’m fascinated by all of these thoughts you have. Well, let me ask one quick question before that. So, this device you made, the gonkulator—I don’t know if it’s been named yet, but “the gonkulator,” is that a pure science experiment, or is it applied science?

It’s a science experiment. It is not conscious, and I’m not claiming that I made a device that’s conscious. It isn’t doing that. But it is a device that seems to potentially be able to measure observation.

And so, where do you land with all of this? All of these thoughts are racing around in your head and—

It’s not really like that; it’s not. Being me isn’t like all these thoughts are racing around in my head. I have different swim lanes that I’m working in. One of them is research into fringe technologies. I kind of give myself license to explore, but scientifically, so I have collaborators I’m working with who have the physics and engineering capabilities to test some ideas, and we’ve built those experiments and tested them. And we’ve been doing that for quite a while. And so, in that context, we’re several generations into a device that right now is really a physics experiment to test some questions and ideas we had, some hypotheses we had about quantum mechanics. But it turns out that device, as is often the case with R&D, might give us some ideas for things with commercial potential in the future, particularly in the field of quantum computing.

So, on that note, if by night you are contemplating the mysteries of the universe, by day you are the CEO of Magical. Can you tell us about that? What is it, from a business standpoint, but also what is it about [Magical] that lights you up, when you have a million things you could do today? Why start a company called Magical?

Well, Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic. And that’s what Magical is about. We’re about originating and incubating sufficiently advanced technologies, things that seem magical today, and bringing them to market. So, Magical is really a vehicle for me and the scientists, engineers, and collaborators that I have at different labs and universities, as well as some startups we formed, to conduct R&D and bring to market some really breakthrough technologies, all of which have the potential for profound global impact. So, I think about what my legacy is going to be. What am I going to do that really has meaning?

It’s great to make interesting technologies and to make money, but can we do that in a way that also potentially has a lot of profound benefit to improve the world, to improve the conditions that people live in, to heal the environment, to solve the energy crisis, to move medicine forward in big ways, to bridge the gap between mind and the machine? These are the things I’m working on. You could think of it as my own moonshot lab or moonshot factory.

The difference between Magical and an incubator is we don’t fund other people’s business plans. Everything that we’re doing was either internally originated by me and my team or originated by a scientist that we collaborate with. What Magical does is it specializes in taking ideas which VCs normally wouldn’t touch because they’re too far from market, they’re still in the lab. We know how to work with scientists and engineers when things are still in the lab and bring them to market, and it’s a multi-year commitment, and it is daily involvement.

Incubators don’t do that, accelerators also don’t do that, and VCs definitely don’t do that. So, we usually get involved much earlier, and in a much more day-to-day way, and we do this through the role of a venture producer, which is sort of a new role which doesn’t really exist in Silicon Valley, but it does exist in Hollywood. A producer is a person who is a professional project creator. Not just a project manager, but they really take an idea or source the idea, and develop it into something that’s viable, and produce it all the way to a launch. In the movie industry, producers are skilled at bridging the gap between the money and the talent, and getting these projects to market, and they usually manage a pipeline of projects at different stages.

The closest thing Silicon Valley has to that is maybe an EIR, but EIRs inevitably want to become CEOs or VCs, and in my model, a venture producer doesn’t want to do that. A venture producer wants to be a venture producer, wants to be involved in the fun, exciting part of the first three to four years of a company, and then they move out into an advisory role. They never become the CEO. They don’t become a VC. They have a pipeline of projects under production and when some mature, that opens up room for new ones to be nurtured.

So, I think this is a problem with the Silicon Valley model: startups feel, and VCs often say, that they need a CEO way before they actually need one. They need a producer. They need somebody who’s brought a bunch of companies out to market. They don’t really need a professional CEO until they’re operating and making money. And that’s usually years into a company. In fact, most startups can’t attract the kind of person that they would really want, in the early years, to be their CEO.

And so, they often call somebody a CEO who has no business being a CEO, or who won’t scale. And typically, the VC model disincentivizes people from stepping out of the CEO role. If the founder acts as the CEO, then when they try to replace that founder-CEO with a professional CEO, there is inevitably an epic battle between the two, except in very rare circumstances. And because the model is structured so that if you let go, you get washed out, companies typically don’t make good decisions about this.

Because people are holding onto equity, the model is misaligned with what the company really needs. So, I think a different model needs to come into play, one where the people who help build the company in the early days don’t have to worry so much about stepping out of the way when the time is right, because we’ve disconnected their ultimate exit from that decision.

We’ve created a role and a way for venture producers to benefit and stay in companies, even if they eventually hire a CEO. Many of the people we call founders today are serial entrepreneurs who are actually extremely talented at producing ventures, but not great at being CEOs. I would put myself in the same category. It turns out I don’t really like operating and scaling a business once it starts to get medium to large.

That’s not fun to me. There are people who love to do it and who are really good at it: those are operators, operator CEOs. They love managing people and resources and money and operations. Great! Those people should be brought in at the right time. They can be quite destructive if they are brought in too early, and a lot of early-stage companies can’t attract good ones, because those people have a lot of choices. They don’t have to join a little startup.

What we need is this definition of a venture producer role, and a model that really supports it, and that’s going to actually optimize outcomes for entrepreneurs and investors in the long run.

Alright, well that sounds great. Let’s close on a big question. Tell me about the Arch Mission Foundation.

Sure, the Arch Mission is focused on creating archives (that’s why we call it “Arch”) of humanity’s important data and protecting them over long time scales. And in order to do that, we would like to put these archives not only all over the planet but also off-planet. So, we have been working on a very near-term, realistic plan to put a billion-year archive in space and on the moon. We can do that now with a special storage technology that is durable over those timescales and is capable of storing hundreds of terabytes in a form that can survive the harsh environment of space, under extreme temperatures and cosmic rays, for billions of years in fact.

So, what we are doing is working to collect the data, most of which is open-source data like Wikipedia and other sources, embed it into this special technology, the storage medium, and then deliver it to these remote locations. The idea of the Arch Mission is to do this on an ongoing basis, to continuously update the archs, replace them, and spread them wherever humanity goes. I believe that the purpose of intelligent life is to spread intelligence, and if that’s the case then the Arch Mission is a practical way to help fulfill that purpose.

First of all, it’s necessary because there is a non-zero probability of an extinction-level event. It is guaranteed that if you wait long enough, a comet or a meteor is going to impact the earth and wipe out most or all life on earth. It is guaranteed that that will happen; it has happened before. But there are also man-made risks that we face, such as the potential for nuclear war, a genetic modification that is somehow disastrous, or the “gray goo” concept of nanotechnology. There are many, many man-made risks as well.

So, we actually live in an environment that is riskier than ever before, because of the risks that we ourselves are creating. Therefore, one benefit of the Arch Mission is a planetary insurance policy, a planetary backup in case we wipe out all or part of our civilization. We’ll have a permanent archive in a place that can survive that, and that we can eventually recover. Another reason for doing it is that no civilization that we know of has ever lasted more than a thousand or so years. There have been many civilizations, but on average, they’ve only lasted a few hundred years.

We know that there are ancient civilizations that we know very little about today. And chances are that if you looked over hundreds of thousands of years, or millions of years, very little of our civilization would remain, because almost everything we’ve created is highly perishable. Our digital storage media, on average, will last about fifty years, in some cases about a hundred years, but that’s about it. Cement structures don’t last that long. Nothing that we’ve created will really be around even in ten thousand years; it will all be gone.

The pyramids have lasted a very long time, and they’re all that’s really left of the civilization that created them. But they don’t carry very much data; they’re a pretty low-resolution storage medium. They are granite, there are carvings, there are hieroglyphs in that granite, and it’s not very high-res. But it’s possible to do something like the pyramids today, with new technology, at much higher resolution. So that’s kind of what the Arch Mission is doing. There is a real-time benefit of this today as well, which is that it’s an educational and inspiring project: it makes people think about the fragility of our planet, makes people think about what’s important to preserve, and also stimulates the development of storage media and data transfer technology suitable for space.

And if we become a space-faring civilization, we have to come up with a way to extend our digital infrastructure into that environment, and that’s a very different environment than the terrestrial one, with very harsh conditions. So, the dual purpose here is that, in the present day, it’s inspirational and generates new technologies that we need, and in the long term, it may be the only thing that remains of our civilization, so it may be the most important thing humanity ever did.

So, this is my last question. You say that the purpose of intelligence is to spread that intelligence. Is that a purpose implicit in it or given to it?

I think it’s implicit in it. I think that when we finally really understand what intelligence is, we’ll look back at that Living Systems book by Miller that I mentioned, in which reproduction was one of the functions of living systems. And I’m fairly confident that truly intelligent systems also have that reproductive urge built in, in this case, to reproduce intelligence.

Alright, well that’s a wonderful place to leave it. Nova, this has been absolutely fascinating, and I hope that we can entice you to come back later. Thanks a bunch.

Thank you. Very enjoyable.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.