Episode 111 – A Conversation with Robert Brooker

Byron Reese discusses the nature of intelligence and artificial intelligence within the industrial internet with Robert Brooker of WIN-911.


Guest

Robert Brooker is the Chairman of WIN-911, a technology company and part of the ‘industrial internet’ with offices in Austin, Texas, as well as Mexico, Asia, and Europe. He holds an undergraduate degree in economics from Harvard as well as an MBA from the same institution. He has an amazing entrepreneurial past: he is said to have brought the bagel to Eastern Europe, and although he denies it, some people say he brought the hookah to the United States.

Transcript

Byron: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Robert Brooker. He is the Chairman of WIN-911, a technology company and part of the whole ‘industrial internet,’ with offices in the US in Austin, Texas, and in Mexico, Asia, and Europe. He holds an undergraduate degree in economics from Harvard as well as an MBA from the same institution.

He is a person with an amazing entrepreneurial past. He is said to have brought the bagel to Eastern Europe. Although he denies it, some people say he brought the hookah to the United States. I’m excited to have him on the show. Welcome to the show, Robert.

Robert: It’s nice to be here, Byron.

You’ve heard the show enough times to know I usually start off by asking “What is artificial intelligence?” How do you think of it? How do you define it? I’ll just leave it at that.

Artificial intelligence is semantically ambiguous. It could be that it’s artificial in the sense that it’s not real intelligence, or it could be intelligence achieved artificially; in other words, without the use of the human brain. I think most people in this space adopt the latter, because that’s really the more useful interpretation of artificial intelligence: that it’s something that is real intelligence and can be useful to the world and to our lives.

Sometimes I think of that as the difference between the instantiation of something and the simulation of something. Case in point: a computer can simulate a hurricane, but there isn’t really a hurricane there. It’s not an instantiation of a hurricane. I guess the same question is, is it simulating intelligence or is it actually intelligent? Do you think that distinction matters?

When I say ‘artificial’ in the former sense, I mean something that seems on the surface to be intelligent, but when you look further down, you determine it’s not. It may be helpful to say how I define intelligence. I like the standard dictionary definition of intelligence: the ability to acquire and apply knowledge and skills.

You could argue that a nematode worm is intelligent. It’s hard to argue that, for example, a mechanical clock is intelligent. Ultimately, different people define intelligence in different ways. I think it comes down to what people in the field are doing: they’re trying to make it useful, and how it’s defined is almost beside the point.

The most singular thing about AI as we do it now is that it isn’t general, and I don’t even mean that in the science fiction artificial general intelligence [sense]. We have to take one very specific thing and spend a lot of time, energy, and effort teaching the computer to do that one thing. To teach it to do something else, you largely have to start over. That doesn’t seem like intelligence.

At some level it feels like a bunch of simulations of solving one particular kind of problem. If you’re using the ‘acquire new skills’ definition, in a way it’s almost like none of it does that right now. No matter what, it’s limited to what it’s been programmed to do. Additional data alters that, but it doesn’t itself acquire new skills, does it?

I think the skills part is hard. The ‘acquire and apply knowledge’ part is a little bit easier. In the case of a nematode worm, with its 302 neurons, what it can do is detect a smell and move toward it. If there’s food there, it says, “A-ha, this smell indicates food. When I smell it in the future, I’m going to go toward that smell and get the food.”

If the world later changes so that that smell is no longer associated with food, the nematode worm will stop going toward the smell, learning that it no longer indicates food; maybe some other smell indicates food. That, in my mind, indicates that the nematode worm is acquiring and applying knowledge. The skill part is harder, and I think that’s the same with AI. The skill part is very difficult. It’s not difficult for a chimpanzee or a human or some other animals, but I think it’s difficult for machines.
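A minimal sketch of that kind of associative learning, in code; the simple delta-rule update, the learning rate, and the names are illustrative assumptions, not a model of the worm’s real neurons.

# A toy smell-food association: strength near 1 means "approach the
# smell," strength near 0 means "ignore it."

def update_association(strength, food_present, learning_rate=0.3):
    """Move the association toward 1 when food follows the smell, toward 0 when it does not."""
    target = 1.0 if food_present else 0.0
    return strength + learning_rate * (target - strength)

strength = 0.0
for _ in range(10):  # the smell reliably predicts food
    strength = update_association(strength, food_present=True)
print(f"after reward: {strength:.2f}")       # high: approach the smell

for _ in range(10):  # the world changes: no more food
    strength = update_association(strength, food_present=False)
print(f"after extinction: {strength:.2f}")   # low: stop approaching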

The nematode worm, like you said, has 302 neurons, two of which don’t appear to be connected to anything, so it functionally has 300. Don’t you think that amount of sophisticated behavior... do we even have a model for how 300 neurons [work]? Even if we don’t know the mechanics of it, a neuron can fire, and it can fire on an analog basis; it’s not binary. The interplay of 300 of those can create the complex behavior of finding a mate, moving away from things that poke it, and all the rest. Does it seem odd that that can be achieved with so little, when it takes us so much more time, hassle, and energy to get a computer to do the simplest, most rudimentary thing?

I think it’s amazing. The exponential complexity of the nematode worm and real neural networks is incredible. For anyone who hasn’t spent time at openworm.org, the crowdsourced effort to understand the nematode worm, I encourage you to spend at least an hour there. It’s fascinating. You think: ‘302 neurons, that’s simple. I should be able to figure it out.’

Then it’s all mapped out. Each neuron connects to between a couple and maybe a couple dozen other neurons, and suddenly you have 7,000 synapses. Wait, that’s not all: each synapse is different, so figuring out how each synapse works becomes even more complicated.

Then, on top of that, there are the inner workings of the neurons; change is going on within each neuron itself. I don’t know if this is the case with the nematode worm, but certainly in the human brain, and probably many other brains, there’s communication among neurons that takes place not in the synapses but by exchanging chemicals. It’s incredible how just 300 neurons can suddenly become who knows how many bits; we almost can’t even call them bits of information. It’s more of an analog concept, which has magnitudes more possibilities.

Viewed that way, the nematode worm is sort of an underachiever; it’s not getting a lot done with all that equipment, it seems, although nematodes are 70% of all animals on the planet by one count. Would you agree that progress in artificial intelligence is moving quickly... or slowly?

It seems very slow. It’s interesting that most of your guests, at least on the podcasts I listen to, predict artificial general intelligence being 100 years or hundreds of years away. It does seem very slow. To your point a moment ago about how hard it is to transfer from one thing to another: we get visited by companies all the time in the industrial space. The industrial space is really good for artificial intelligence in theory, because there’s little or no human language involved.

All the complexities of human language are gone because essentially it’s a machine. In the industrial setting it’s about: ‘how can you save a million dollars by using less energy? How can you make the defect rate of your product lower?’ These are all sort of readily quantifiable outcomes. Companies come to us that have created some sort of artificial intelligence to revolutionize or make industry much more efficient.

Typically what happens is that they’ll come to us because either they’re looking for funding or they’re looking for customers. We have a lot of customers, so they think we can somehow work together. They come to us and say oftentimes, “We have our first customer, and we save them a million dollars a year by making their process so much more efficient. If we could only apply that artificial intelligence to a thousand other companies, that’s a billion dollars’ worth of value. Therefore, we’re going to be great.”

You dig into it, and with that one customer, there’s an enormous amount of human services involved; this speaks a little to the question of whether artificial intelligence will put all these people out of work. There’s so much human interaction in figuring out just one project: all the normalization of the data, and then the AI not quite figuring things out, so a human intercedes and inserts another type of model based on a human mental model. It’s almost like this notion that when humans and machines work together, you get a better outcome than with machines alone. The nirvana, what people are trying to get at, is that one thing, one AI that looks at all the industrial data. You don’t have any human language.

There are a lot of things that you could call very simple, even though there are a lot of complexities. The thing you want is something that will just look at all the data and figure everything out. No one’s been able to do that; it’s always been very specific to the context. Even in areas that should be simpler, like industrial, which is more akin to playing chess or Go because it’s a game with fixed rules and fixed objectives that are easily quantifiable, it’s still very difficult.

You know, of course, what the Turing test is and the idea that if an AI masters language to a degree that you can’t tell whether you’re at a terminal talking to a person or a computer, then, Alan Turing argued, ‘You’d have to say that machine is thinking,’ even if it’s doing it differently. Let’s start with this: you’re not a big fan of language being some kind of benchmark for the ability of AI. Why is that?

I think language is very complex and incredibly imperfect. There are multiple definitions of words. The whole notion of language is that it’s used to facilitate communication, but sometimes language is used to block communication, or to create or instantiate different social classes, or to communicate things to some people that you don’t want other people to know. It just seems so complicated.

It’s hard for me to make sense of saying that the most important thing is language. If an AI is able to look at a bunch of blood tests and MRIs and can cure cancer without understanding language at all, is that somehow inferior to having a simple conversation? One reason that AI has been successful, I believe, in playing games like Go and chess is that there’s no language involved in them. Maybe as we think about the best applications for AI, that could be useful.

I think it’s already being done: thinking about what things can move our society forward with AI that don’t involve language. Unfortunately, in the case of curing cancer, you have all this medical research that’s published in English, which throws a little wrench in that. But there are surely ways that AI can be applied to non-language-related things.

I saw a thing just the day before yesterday that said all human languages transfer information at fundamentally the same bit rate, even though they’re all very different and even come from different protolanguages way back. That probably reflects some inherent limit to what we’re able to do, or at least what the vast majority of us are able to do. Language has to be accessible to everybody, not just a few people.

It’s interesting, because you’re implying that it’s the listener who has limitations in terms of processing. I would think, for example, that in Chinese, where you have all these different tones, the number of possible syllables is actually greater than in English. You’d think that in some languages you would be able to transmit at a higher bit rate than in others. Maybe, to your point, it’s the listener that’s the constraint.

I guess the reason that people really want to crack the language thing is what you were just saying earlier. What you want to do on the industrial internet is say, “Look, AI, I just want you to make this part cheaper using less of this and more of this. Here’s what you can and can’t do.” I guess the hope is that if the computer can understand that, then it isn’t an arcane ability accessible only to a priestly class that knows how to code; it becomes a technology accessible to everybody.

I think you’re bringing up a good point. Currently you can do it without language, saying, “Figure out some way to conserve energy, to have the same output but use less energy.” If, instead of having a programmer do it, you could have a less skilled machine operator say it in English, that would certainly be useful.

You mentioned job loss a minute ago. You said that the difficulty in applying AI to new tasks certainly acts as a brake on any instant loss of large numbers of jobs. How far do you take that? Are you among the folks who worry that, with these technologies, automation really is going to reduce the number of jobs to a point that it becomes a problem for us?

I don’t think it will reduce jobs at all, personally. I can’t predict that for sure, of course; this is just what I observe. Take the case of the industrial world, where you need a team of people to figure out how a factory saves a million dollars in energy. You’re employing more people. You’re saving energy. Are you having to lay off a coal miner or an oil rig worker? Maybe one or two, or maybe none. It seems like that’s a net increase in employment.

I’ll take another example. Let’s say you’re an entrepreneur and you hire salespeople to sell things. Just to give some kind of numbers, let’s say that it costs $70,000 in salary to hire a salesperson, and you lose money for a year. Then the salesperson sells $100,000 a year. You make a little bit of profit, but you had to wait a year and lose money first. Maybe you’re not going to hire too many salespeople.

Let’s say that there’s some sort of tool, an AI tool, that’s so effective that the salespeople can now sell double what they could otherwise. I would argue you would hire more salespeople. This is a case where AI actually causes an increase in employment, not a decrease.
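To make the arithmetic concrete, here is a minimal sketch using the hypothetical numbers from the conversation; the 2x multiplier and the one-year ramp are illustrative assumptions, not a real model.

# Per-salesperson economics from the conversation: $70,000 salary,
# $100,000/year in sales once ramped. The "AI tool" case assumes the
# hypothetical 2x sales multiplier mentioned above.

def annual_margin(salary, annual_sales, multiplier=1):
    """Yearly contribution of one ramped salesperson."""
    return annual_sales * multiplier - salary

baseline = annual_margin(70_000, 100_000)      # $30,000 per head
with_tool = annual_margin(70_000, 100_000, 2)  # $130,000 per head

print(f"baseline margin per salesperson: ${baseline:,}")
print(f"with AI tool (2x sales):         ${with_tool:,}")
# A bigger margin per hire makes each additional salesperson more
# attractive, which is the argument that AI increases employment here.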

I think you’ve made this argument in the past: there have been so many waves of innovation throughout history, and never have they caused any major dislocation of employment, so why would this time be different? Some people argue that it’s happening faster, but I think what we’re seeing with AI is that it’s happening actually pretty slowly. Maybe that will change going forward, but as it looks now... still, I think there’s an argument to be made that it could happen dramatically and very fast.

I think that’s an interesting thing to explore. Based on what seems to be the consensus, even among most of your guests, it’s going to be a slow development. That will lend itself towards being less disruptive to the economy and to employment.

Do you worry about the privacy impacts of the technology? The setup for this that I’ve used before is that we all have privacy because there are so many of us. You can’t follow everybody, you can’t eavesdrop on every conversation, and you can’t read everybody’s email.

With AI you effectively can. Cameras can read lips. Every email can be put into a system that’s looking for – you know the setup. Every phone conversation can be converted to text, and then that text can be analyzed using many of the same tools we build to look for patterns and cancer cures and all of that. Do you worry about governments applying the technology to limit privacy?

I find it interesting that our Constitution was created largely to combat or prevent what was perceived as tyranny by the British. Those same principles will probably be very good at combating the tyranny of AI or the tyranny of surveillance. I think, at least in this country, our institutions have a system for protecting our privacy, and that potentially bodes well.

It’s also interesting for me to think about why privacy is good. On one hand, most people say that they prefer a life with privacy to a life without privacy. Okay, that’s valid by itself. But I also wonder if there is an inherent benefit to society in having some crimes go undetected.

Let me give you a few examples, the reason being that sometimes laws are not just, and that’s only realized later. Take homosexuality, which was criminalized, I think, in most or all of the United States as well as large parts of the world. In a surveillance society, every time someone violated that law, they would have been hauled off to jail. Because that wasn’t the case, constituencies were able to form and eventually change the law.

Take the legalization of marijuana: if every time someone smoked marijuana they were hauled off to jail, that would make it very difficult for society to change. There’s almost a societal benefit to having the privacy to commit illegal acts. I actually feel that we have the tools in this society to set the dial of privacy where we want it. There’s only one thing that I worry about.

In recent weeks and months and years, we’ve had all these mass shootings. One can imagine that we live in, or are about to live in, a society where one single deranged or misguided individual can cause great havoc. If they’re killing a dozen people, that’s horrible, of course. But if people are making bombs or biological agents that kill thousands or tens of thousands of people, and it only has to happen once that 10,000 or 100,000 people are killed, then our society may, by consent, want to give up its privacy in order to prevent that from happening. Basically, society would say that we want the government to surveil everybody just to find that one guy who’s going to go out there and kill 10,000 people. I think that’s a very real danger, and it will be interesting to see how that unfolds.

We had a guest on the show from argodesign named Jared [Ficklin], and he said something that I thought was very interesting: the notion of privacy is actually an Industrial Revolution idea. Before then, you were born in a village and you lived in your village. There were 200 or 300 people in the whole village, and everybody knew everything about everybody. You didn’t really have any expectation of privacy.

By consensus, people agreed not to mention [things]. You ran into somebody and you knew all the bad stuff about them, but it would be impolite to mention it. Then we all moved to cities, and you have anonymity. You have privacy. You get used to it. He posits that in the future, when we have no privacy anymore, we’ll go back to that system where you’ll know everything about everybody, but you’ll be polite enough not to mention it.

I think that’s basically true. Today we have urbanization, where someone can mug you and then disappear into the city, and you have no idea who that person is. If we live without privacy, we can live in a safer world. For example, in China, crime rates are much lower than in the US. Is that a society that you want to live in or not? Maybe. Some people would say ‘yes.’

There are plenty of places that have developed technology that can surveil all their citizens, regularly do so, and use it to suppress dissent. Those tools are being packaged and sold to oppressive regimes, who can then install them and use them to silence their opposition. Is there any way out of that for those sorts of societies, or is this a technology that, as far as we can see, forever empowers those in power to stifle those who oppose them?

It’s very hard to imagine, short of emigration. How could a surveilled society be clever enough to evade the surveillance in order to mount any sort of revolutionary effort that would be likely to succeed? Sometimes I go back and forth; I get vexed by this issue. On one hand, sometimes I think human nature is to want to be free and not be surveilled and not be essentially ruled.

Then on the other hand, this democratic society is just a very small portion of history. Maybe that’s more natural for a lot of people. I don’t know the answer to this question, but I thought I knew it ten years ago. Nowadays as I start to see some of the things you mention, other things going on in society, I start to wonder.

What about the use of this technology in warfare? Does that concern you? I guess the setup there is: there are probably a dozen countries that have really big, robust militaries, to which a large part of their national budgets go. None of them believes it can face a future where it voluntarily forgoes artificial intelligence in its weapons systems while the other 11 use it. Therefore, the prisoner’s dilemma, as it were, is that they all end up doing it.

Do you think, A, that that is the case, and B, does it matter? Or is this technology not really a game-changer? You saw Putin’s quote, of course, that whoever controls this technology, artificial intelligence, will rule the world.

Yeah, I think you really have no choice; I think it’s unwise to just say we’re going to opt out of the arms race and let everyone else go ahead. I think that’s self-destructive. It’s so hard to know the outcome, though. On one hand, let’s take an example: say you could have lethal drones the size of mosquitoes. Some government wages war on you. All you have to do is send these mosquitoes; they can identify the top ten leaders and kill all of them, and then you’re good. You don’t have to kill 100,000 soldiers in order to win the war. That’s a great thing. On the other hand, what are the second-order effects of that? At first I thought, that’s interesting, because usually the leaders who wage war are at the least risk, since they’re behind the lines, or within the walls of the Pentagon, or wherever they are.

If the people initiating wars are actually personally at the most risk, does that change the calculus of war and actually make the world safer? Then I think about the second-order effects. If that happens, will the tendency be to wage war in a way where you don’t identify yourself? Someone gets attacked and has no idea who’s attacking them, which actually sounds frighteningly familiar: you get all these hacks and so forth, and a lot of times you don’t know who it is. The entity waging the war disguises itself as some other enemy, so the counterattack falls on an enemy that didn’t start it. I think we have no idea what the consequences are. One can hope that AI is not only an offensive technology but a defensive one as well. Whether that’s enough to make a safe world, I don’t know.

Yeah, in a way, those are old issues. You just talked about battles of champions: instead of the whole army, you just have a champion from each side, David and Goliath or what have you. You’ve got false-flag campaigns, where you stage an attack pretending to be somebody else to instigate warfare. It’s interesting, though.

Would you posit that warfare in the future isn’t actually going to be profitable? You have to, at some point, ask, ‘What do you gain?’ Gone are the days when you invade a country and haul off all the cattle, all the gold, and all of that. Is there profit in armed conflict between nations? Is it going to be something that we invest heavily in?

Yeah, I think that’s a valid point. What’s the adage? No two countries that both have a McDonald’s have waged war with each other. To the extent that countries are part of the global economy, there’s a big disincentive; there’s no upside for a society to wage war, because of the economic consequences. There could be one possible exception, though.

Some leaders benefit personally: waging war helps them stay in power, on a swelling of national pride and things like that. That’s one exception. Generally speaking, though, I think you’re right. There’s no economic upside to war, except possibly in some long-term scenario; certainly over a period of a few decades, there doesn’t seem to be any incentive for it.

Do you hold out hope that artificial intelligence is going to solve major problems of humanity, that it’ll prolong our lives, cure aging, help us find plentiful energy, and do all of these things that are beyond our intellect so far? Are you optimistic?

I am optimistic that we’ll figure out things, whether it’ll be 10 years from now or 100 years or 200 years from now. That’s consistent with everything humanity’s achieved so far. Why wouldn’t it continue?

The traditional worries about the technology also include that we don’t really have a framework for accountability. When the artificial intelligence does something, like a self-driving car running somebody over, who, in the end, is responsible for that? We don’t have a framework for really understanding that. I believe Mercedes has gone on record saying that they, by policy, protect the driver of the car over any other concerns. Can you foresee a future where coders are subpoenaed and source code is analyzed for liability? Is it possible that lawyering gums all of this up to the point that we can’t get through it?

Putting aside the notion that AI could actually help make the legal system much more efficient, I think that’s one of the big obstacles we face as a society. One example that’s bandied about is self-driving cars; you alluded to it. If self-driving cars reduce fatalities by 99 percent, who’s liable for that remaining 1 percent, and how do we deal with it? It seems like one possible way is that a law is passed that basically grants amnesty, a release of liability by legislative action. That could be one way to handle it. It would seem sad to me to miss out on that opportunity to significantly improve our lives.

Yeah, it’s interesting. One self-driving car kills somebody, and it’s front-page news, yet 1,000 other cars with people [driving] were also in accidents. It seems very unbalanced, but the new technology needs to be held to a higher standard. It needs to be ten times better, stronger, cheaper, something like that, I think.

In my book, The Fourth Age, I spent a lot of time trying to decide if machines can be conscious. Again, the setup is that computers right now can measure temperature, but people can feel warmth. There’s some qualitative difference between those two things; we don’t think the computer feels warmth. We call that difference ‘consciousness,’ because a person can experience things and not merely sense them. You were an editor of an early version of the book, and I’m curious where you came down on that. Do you believe that machines can become conscious? And the question I always ask: do you believe people are machines, and therefore that machines already are conscious, because we are?

Yeah, it’s an interesting question. I personally don’t know, but here’s how I think about it. I suspect that words like ‘consciousness’ and ‘feelings’ are placeholder words, things that people have made up to describe [experiences]. How do we know that the feeling of pain is anything more than the firing of a bunch of neurons in the pain center of the brain? If that’s true, that that’s all it is, and pain is just the firing of certain neurons, then the brain is a machine, no different from any other machine.

It’s very hard to know. With these things, we don’t quite understand what they are; a lot of times we only recognize them because we can’t explain them, and so we develop some theory for them. The thing is, if we understood everything about the brain, about its neurons and all its physical phenomena, and there were something going on that could not be explained by the neurons, then we’d say, “A-ha! Here’s something. This is what consciousness is.” We could figure out what it is.

It’s maybe a little bit analogous to quantum mechanics. Is light a particle, or is it a wave? Some experiments suggest one; some suggest the other. Then someone comes up with a theory that, hey, it’s actually both, and it becomes substantiated. These days, we can actually make machines that are quantum machines. If indeed consciousness is something separate from the neurons of the brain, maybe we discover that and figure out what it is exactly; the same thing holds with this concept of the soul. Maybe it’s some other force of nature, like the nuclear force, or the electromagnetic force, or the force of gravity. But how do we ever know, when we don’t even understand the brain?

You said that things like consciousness and feelings are placeholder words. What do you mean by that? That they don’t exist, and so we make up a word? Or that they’re words we have stuck in place of an explanation?

I think, essentially, both. An analogy: before the theory of special relativity, there was all this consternation about, if light travels at a certain speed, and heavenly bodies are moving relative to each other at different speeds, what is light moving relative to? They came up with this concept of ether: the whole universe was full of ether, and this is what light moves relative to. Then, when it was determined that that’s not really true, the notion of ether just went away.

I feel like consciousness and feeling are sort of placeholder words, a way that we describe brain activity that’s useful for us. I feel pain; therefore, I’m going to walk away from the fire. It’s a useful expression, and I don’t think we know for sure, but it’s very possible that it’s nothing more elevated than certain mechanical activity in the head.

That’s a sad and sterile way to live your life, though, because you then have to say that that’s true about love.

Yes, I think you’d have to. You can find joys in other ways, but yes, that’s a possibility. Or you just accept it: hey, the brain’s amazing. Actually, it may even cause you to consecrate the brain at a higher level. Maybe it’s good news. Maybe you’re not giving the brain enough respect.

In the book, I try to get at the same issue through a discussion of ‘free will.’ Either everything in the universe is cause and effect, and you stubbing your toe is an event traceable back to the big bang, or it’s quantum: just random, happening for no reason. Yet we sure have some sense that we are making choices, that they are neither random nor inevitable; you choose something. Do you think that’s an illusion our brain tells us just to make us happy? Do you think you have free will?

Yeah, I think it was Somerset Maugham who said, “We have free will, or at least we have the illusion of free will.” What’s wrong with free will being the result of a lot of neuronal firing in the brain that leads us to some sort of decision? What’s wrong with that? That’s free will. Maybe there are some random elements to it. Sometimes we have a dilemma and there’s no clear answer, but it’s better to make a decision than not to make any decision, so you flip a coin, or your brain flips a coin. There’s nothing wrong with that, I don’t think.

There’s a meme that I see on Reddit with Sir Ian McKellen; I don’t even know where it came from originally, but he’s looking at a card. The setup is always: you say blank, but the fact that you blank implies exactly the opposite. ‘You say that you like dogs, but the fact that you ate a dog for lunch...’ That’s not a real one, obviously, but that’s the setup. So many people who have that worldview, that these things are all just mechanistic, that it’s just the complexity of the brain, that pain is just neurons firing, have this sterile way of looking at everything. Then they act completely differently on a day-to-day basis. They act like life is full of choices, passion, and excitement. It’s such a big disconnect. If you have no real choice, then the people you love, you don’t really love; it’s just a series of chemicals being released, neurons firing, and endorphins. If we know that, we sure don’t act that way, even the people who believe it.

Even Mr. Spock liked people, and had friends, and all of that. How do you do that? If you think all of that’s true, that people are machines and everything else is just mechanics, do you think you can’t actually live your life that way, that you’d just go crazy, or what? I always assume that if there’s that disconnect, then people say they believe one thing, but you look at their actions, and it reveals they really believe something else.

I don’t think there’s a disconnect. I don’t think there’s an inconsistency. I’m not saying that it’s impossible that there’s a consciousness, or a spirit, or a soul. I’m just saying that it’s impossible to know whether it’s just the brain or it’s something else. If it’s just the brain, then let the brain be. The brain’s a glorious thing; we should celebrate it. If the brain is part of, or the enabler of, love and everything else wonderful about life, that, in my mind, is no less reason to celebrate it.

What about this singularian view, that if you knew the position of every neuron and all of this, you would be able to upload yourself into a machine, or Elon Musk’s thing, where you can actually connect your brain and interface directly with the machine? Do you think those things are, A, possible, and, B, good?

I don’t know. I don’t know. It does offer immortality.

Immortality can be eternity in heaven or eternity in hell.

Right, that could be a bad thing. One thing I also wrestle with is how much intelligence, human intelligence or any intelligence, is driven by having a purpose. For example, a human has a purpose: to survive, to propagate its genes, either through reproduction or by supporting other people who share its genes, family members and so forth. Maybe there are other purposes for us as humans. Is having a purpose important? Because otherwise, why have intelligence if there’s not some purpose for it? I wonder whether, if you’re uploaded to a machine and you essentially become immortal, what your purpose is then. Does that take away some of your purpose? Is that a problem? Maybe it’s not a problem. I don’t know.

Some people would say there’s plenty of non-intelligent life that does very well. Grass, for instance, grows and gets sunlight; it has no purpose and no intelligence, but it still lives. Then there are things that just tag along with life. Why is blood red? It just ‘is,’ because the redness came along with something else. It could be that intelligence came along with something else. I don’t think that, though. Consider how much our brains cost us: twenty percent of all of our calories go to power our intellect. That’s such an overhead. I read recently that for babies, it’s 80 percent. Babies have those little stubby arms; they’re hardly powering their bodies at all.

That’s interesting.

You just wouldn’t think that something so expensive from a caloric standpoint could survive without serving a purpose. With your definition of intelligence, you extend it down to the nematode worm; you say that if you behave intelligently, you’re intelligent. I’d take a higher bar, because by that standard all kinds of other things, even non-animals, are intelligent: trees, or vines that climb up looking for sunlight.

It could be. I don’t know much about the behavior of trees, but I think under this definition, the tree would have to be able to learn new things. The environment changes, or the rules of the environment change, and the tree somehow learns from that and adjusts its behavior. It could be that the tree is intelligent under that definition. I’d just have to understand the mechanism –

I’m just curious: say a tree lived in a place, and for some reason it stopped getting nearly as much rain there over a long period, say 100 years. The trees successively become smaller, or their leaves get bigger, or their root systems get more efficient. Most people would say that that’s natural selection: the ones that could already do it very well survived, and the ones that couldn’t live on less water didn’t. Is that intelligence?

If you’re saying that there are 1,000 trees, then that’s a different thing. If you’re saying that there’s one tree, and that tree figured out how to develop bigger leaves and capture water on its own, then I would say that tree’s intelligent.

I’ll ask you my final question. We’ve talked about some dark things, and we’ve talked about some positive things. Are you, in the end, an optimist, or are you more, ‘I don’t know if the future’s going to be better or worse. I have no idea’?

I’m an optimist. I don’t know how much of that is true belief versus it simply being more satisfying to be an optimist in life than a pessimist; it’s hard for me to tease those apart. Essentially, I feel that throughout all of human history, we’ve become better. Humans have become better. Society has become better. Yes, there have been a lot of zig-zags, and there’s still a lot of injustice, but I don’t see any reason why that progress would cease, even with something like artificial intelligence.

All right, well, it has been a fascinating near-hour. I want to thank you for being on the show.

Thank you for having me, Byron.