Episode 89: A Conversation with Doug Lenat

In this episode Byron speaks with Cycorp CEO Douglas Lenat on developing AI and the very nature of intelligence.


Guest

Douglas Lenat is one of the pioneers of Artificial Intelligence. He received his PhD in Computer Science from Stanford, investigating machine learning and automated discovery based on "interestingness" heuristics, for which he received the 1977 IJCAI Computers and Thought Award. After serving as a professor at Stanford in the 1970s and early '80s, he founded the Cyc Project in 1984 to address the limitations he saw in slow symbolic logic-based expert systems and in fast but shallow neural net-based machine learning. At the end of 1994, he founded Cycorp and has served as its CEO ever since. Dr. Lenat is a Fellow of the AAAS, AAAI, and the Cognitive Science Society, and an editor of the J. Automated Reasoning, J. Learning Sciences, and J. Applied Ontology. He is also the only individual to have served on the Scientific Advisory Boards of both Microsoft and Apple.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. I couldn't be more excited today. My guest is Douglas Lenat. He is the CEO of Cycorp of Austin, Texas, where GigaOm is based, and he's been a prominent researcher in AI for a long time. He was awarded the biennial IJCAI Computers and Thought Award in 1977. He created the machine learning program AM. He worked on (symbolic, not statistical) machine learning with his AM and Eurisko programs, knowledge representation, cognitive economy, blackboard systems and what he dubbed in 1984 as "ontological engineering."

He's worked in military simulations, on numerous projects for the government and for intelligence, and with scientific organizations. In 1980 he published a critique of conventional random mutation Darwinism. He authored a series of articles in The Journal of Artificial Intelligence exploring the nature of heuristic rules. But that's not all: he was one of the original Fellows of the AAAI. And he's the only individual to serve on the scientific advisory boards of both Apple and Microsoft. He is a Fellow of the AAAI and the Cognitive Science Society, and one of the original founders of TTI/Vanguard in 1991. And on and on and on... and he was named one of the WIRED 25. Welcome to the show!

Douglas Lenat: Thank you very much Byron, my pleasure.

I have been so looking forward to our chat and I would just love, I mean I always start off asking what artificial intelligence is and what intelligence is. And I would just like to kind of jump straight into it with you and ask you to explain, to bring my listeners up to speed with what you're trying to do with the question of common sense and artificial intelligence.

I think that the main thing to say about intelligence is that it's one of those things that you recognize it when you see it, or you recognize it in hindsight. So intelligence to me is not just knowing things, not just having information and knowledge but knowing when and how to apply it, and actually successfully applying it in those cases. And what that means is that it's all well and good to store millions or billions of facts.

But intelligence really involves knowing the rules of thumb, the rules of good judgment, the rules of good guessing that we all almost take for granted in our everyday life in common sense, and that we may learn painfully and slowly in some field where we've studied and practiced professionally, like petroleum engineering or cardiothoracic surgery or something like that. And so common sense rules like: bigger things can't fit into smaller things. And if you think about it, every time that we say anything or write anything to other people, we are constantly injecting into our sentences pronouns and ambiguous words and metaphors and so on. We expect the reader or the listener has that knowledge, has that intelligence, has that common sense to decode, to disambiguate what we're saying.

So if I say something like “Fred couldn't put the gift in the suitcase because it was too big,” I don't mean the suitcase was too big, I must mean that the gift was too big. In fact if I had said “Fred can't put the gift in the suitcase because it's too small” then obviously it would be referring to the suitcase. And there are millions, actually tens of millions of very general principles about how the world works: like big things can't fit into smaller things, that we all assume that everybody has and uses all the time. And it's the absence of that layer of knowledge which has made artificial intelligence programs so brittle for the last 40 or 50 years.
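As an aside for readers who want to see how such a rule can work mechanically, here is a minimal sketch in Python. The function and the rule it encodes are illustrative assumptions about this one sentence pattern, not Cyc's actual machinery.

```python
# Illustrative sketch: resolving the pronoun in "Fred couldn't put the
# gift in the suitcase because it was too {big, small}" using one
# common-sense rule: bigger things can't fit into smaller things.

def resolve_pronoun(contents: str, container: str, complaint: str) -> str:
    """Return the noun that 'it' most plausibly refers to.

    Fitting fails either because the contents are too big or because
    the container is too small, so 'too big' points at the contents
    and 'too small' points at the container.
    """
    if complaint == "too big":
        return contents      # the gift was too big
    if complaint == "too small":
        return container     # the suitcase was too small
    raise ValueError(f"no size-based rule applies to {complaint!r}")

print(resolve_pronoun("gift", "suitcase", "too big"))    # gift
print(resolve_pronoun("gift", "suitcase", "too small"))  # suitcase
```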

My number one question, the one I ask every AI as a Turing test sort of thing, is: what's bigger, a nickel or the sun? And there's never been one that's been able to answer it. And that's the problem you're trying to solve.

Right. And I think that there are really two sorts of phenomena going on here. One is understanding the question and knowing the sense in which you're talking about 'bigger': one is the sense of perception, if you're holding up a nickel in front of your eye and so on, and the other, of course, is objectively knowing that the sun is actually quite a bit larger than a typical nickel and so on.

And so one of the things that we have to bring to bear, in addition to everything I already said, are Grice's rules of communicating between human beings where we have to assume that the person is asking us something which is meaningful. And so we have to decide what meaningful question would they really possibly be having in mind like if someone says "Do you know what time it is?" It's fairly juvenile and jerky to say “yes” because obviously what they mean is: please tell me the time and so on. And so in the case of the nickel and the sun, you have to disambiguate whether the person is talking about a perceptual phenomenon or an actual unstated physical reality.

So I wrote an article that I put a lot of time and effort into, and I really liked it. I ran it on GigaOm, and it was 10 questions that Alexa and Google Home answered differently, even though objectively the answers should have been identical, and in every one I kind of tried to dissect what went wrong.

And so I'm going to give you two of them and my guess is you'll probably be able to intuit in both of them what the answer, what the problem was. The first one was: who designed the American flag? And they gave me different answers. One said "Betsy Ross," and one said “Robert Heft,” so why do you think that happened?

All right so in some sense, both of them are doing what you might call an ‘animal level intelligence’ of not really understanding what you're asking at all. But in fact doing the equivalent of (I won't even call it natural language processing), let's call it ‘string processing,’ looking at processed web pages, looking for the confluence, and preferably in the same order, of some of the words and phrases that were in your question and looking for essentially sentences of the form: X designed the U.S. flag or something.

And it's really no different than if you ask, “How tall is the Eiffel Tower?” and you get two different answers: one based on answering from the one in Paris and one based on the one in Las Vegas. And so it's all well and good to have that kind of superficial understanding of what it is you're actually asking, as long as the person who's interacting with the system realizes that the system isn't really understanding them.

It's sort of like your dog fetching a newspaper for you. It's something which is, you know, wagging its tail and putting things in front of you, and then you, as the person who has intelligence, have to look at it and disambiguate: what does this answer actually imply about what it thought the question was, as it were, or what question is it actually answering, and so on.

But this is one of the problems that we experienced about 40 years ago in artificial intelligence, in the 1970s. We built AI systems using what today would very clearly be called neural net technology; maybe there's been one small tweak in that field that's worth mentioning, involving additional hidden layers and convolution. And we built AIs using symbolic reasoning that used logic, much like our Cyc system does today.

And again the actual representation looks very similar to what it does today and there had to be a bunch of engineering breakthroughs along the way to make that happen. But essentially in the 1970s we built AIs that were powered by the same two sources of power you find today, but they were extremely brittle and they were brittle because they didn't have common sense. They didn't have that kind of knowledge that was necessary in order to understand the context in which things were said, in order to understand the full meaning of what was said. They were just superficially reasoning. They had the veneer of intelligence.

We might have a system which was the world's expert at deciding what kind of meningitis a patient might be suffering from. But if you told it about your rusted out old car or you told it about someone who is dead, the system would blithely tell you what kind of meningitis they probably were suffering from because it simply didn't understand things like inanimate objects don't get human diseases and so on.

And so it was clear that somehow we had to pull the mattress out of the road in order to let traffic toward real AI proceed. Someone had to codify the tens of millions of general principles like non-humans don't get human diseases, and causes don't happen before their effects, and large things don't fit into smaller things, and so on, and it was very important that somebody do this project.

We thought we were actually going to have a chance to do it with Alan Kay at the Atari research lab and he assembled a great team. I was a professor at Stanford in computer science at the time, so I was consulting on that, but that was about the time that Atari peaked and then essentially had financial troubles as did everyone in the video game industry at that time, and so that project splintered into several pieces. But that was the core of the idea that somehow someone needed to collect all this common sense and represent it and make it available to make our AIs less brittle.

And then an interesting thing happened right at that point in time when I was beating my chest and saying 'hey, someone please do this,' which was that America was frightened to hear that the Japanese had announced something they called the 'fifth generation computing effort.' Japan basically threatened to do in computing hardware and software and AI what they had just finished doing in consumer electronics and in the automotive industry: namely, wresting leadership away from the West. And so America was very scared.

Congress quickly passed something (that's how you can tell it was many decades ago: Congress quickly passed something), which was called the National Cooperative Research Act, and which basically said 'hey, all you large American companies: normally if you colluded on R&D, we would prosecute you for antitrust violations, but for the next 10 years, we promise we won't do that.' And so around 1981 a few research consortia sprang up in the United States for the first time in computing and hardware and artificial intelligence, and the first one of those was right here in Austin. It was called MCC, the Microelectronics and Computer Technology Corporation. Twenty-five large American companies each contributed a small number of millions of dollars a year to fund high-risk, high-payoff, long-term R&D projects, projects that might take 10 or 20 or 30 or 40 years to reach fruition, but which, if they succeeded, could help keep America competitive.

And Admiral Bob Inman, who's also an Austin resident, one of my favorite people, one of the smartest and nicest people I've ever met, was the head of MCC, and he came and visited me at Stanford and said: "Hey look, Professor, you're making all this noise about what somebody ought to do. You have six or seven graduate students. If you do it that way, it's going to take you a few thousand person-years, which means it's going to take you a few hundred years to do that project. If you move to the wilds of Austin, Texas and we put in ten times that effort, then you'll just barely live to see the end of it a few decades from now."

And that was a pretty convincing argument, and in some sense that is the summary of what I've been doing for the last 35 years here is taking time off from research to do an engineering project, a massive engineering project called Cycorp, which is collecting that information and representing it formally, putting it all in one place for the first time.

And the good news, since you've waited thirty-five years to talk to me, Byron, is that we're nearing completion, which is a very exciting phase to be in. And so most of our funding these days at Cycorp doesn't come from the government anymore, and doesn't come from just a few companies anymore; it comes from a large number of very large companies that are actually putting our technology into practice, not just funding it for research reasons.

So that's big news. And just to summarize all of that: you've spent the last 35 years working on a system of getting all of these rules of thumb, like 'big things can't go in small things,' and listing them all out, every one of them (dark things are darker than light things), and then not just listing them like in an Excel spreadsheet, but learning how to express them all in ways that they can be programmatically used.

So what do you have in the end when you have all of that? Like when you turn it on, will it tell me which is bigger: a nickel or the sun?

Sure. And in fact, most of the questions that you might ask, the kind you might think anyone ought to be able to answer, Cyc is actually able to do a pretty good job of. It doesn't understand unrestricted natural language, so sometimes we'll have to encode the question in logic, in a formal language, but the language is pretty big. In fact the language has about a million and a half words, and of those, about 43,000 are what you might think of as relationship-type words, like 'bigger than' and so on. And so by representing all of the knowledge in that logical language, instead of say just collecting all of that in English, what you're able to do is to have the system do automatic mechanical inference, logical deduction, so that if there is something which logically follows from one or two or 2,000 statements, then Cyc (our system) will grind through automatically and mechanically come up with that entailment.
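To make the idea of mechanical entailment concrete, here is a minimal sketch in Python. The predicate name and the facts are illustrative assumptions for this example only; they are not Cyc's actual CycL vocabulary or its inference engine.

```python
# Illustrative sketch only -- not Cyc's CycL language or inference engine.
# Two asserted facts plus one transitivity rule are enough to mechanically
# derive a conclusion that was never stated explicitly.

facts = {
    ("biggerThan", "Sun", "Earth"),
    ("biggerThan", "Earth", "Nickel"),
}

def bigger_than(x: str, y: str, kb: set) -> bool:
    """True if biggerThan(x, y) is asserted or follows by transitivity."""
    if ("biggerThan", x, y) in kb:
        return True
    # Rule: biggerThan(x, z) and biggerThan(z, y) => biggerThan(x, y)
    return any(
        a == x and bigger_than(z, y, kb)
        for (rel, a, z) in kb
        if rel == "biggerThan"
    )

print(bigger_than("Sun", "Nickel", facts))  # True: an entailment, not a stored fact
```

The point is only that once knowledge is in an unambiguous logical form, conclusions like this fall out of chaining rules together rather than out of string matching.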

And so this is really the place where we diverge from everyone else in AI: people who are either satisfied with machine learning representation, which is sort of very shallow, almost stimulus-response-pair-type representation of knowledge; or people who are working in knowledge graphs and triple and quad stores and what people call ontologies these days, and so on, which really you can think of as almost three or four word English sentences. And there are an awful lot of problems you can solve just with machine learning.

There is an even larger set of problems you can solve with machine learning, plus that kind of taxonomic knowledge representation and reasoning. But in order to really capture the full meaning, you really need an expressive logic: something that is as expressive as English. And think in terms of taking one of your podcasts and forcing it to be rewritten as a series of three word sentences. It would be a nightmare. Or imagine taking something like Shakespeare's Romeo and Juliet, and trying to rewrite that as a set of three or four word sentences. It probably could theoretically be done, but it wouldn't be any fun to do and it certainly wouldn't be any fun to read or listen to, if people did that. And yet that's the tradeoff that people are making. The tradeoff is that if you use that limited a logical representation, then it's very easy and well understood to efficiently, very efficiently, do the mechanical inference that's needed.

So if you represent a set of 'is a type of' relationships, you can combine them and chain them together and conclude that a nickel is a type of coin or something like that. But there really is this difference between the expressive logics that have been understood by philosophers for over 100 years, starting with Frege and Whitehead and Russell and others, and the limited logics that others in AI are using today.

And so we essentially started digging this tunnel from the other side and said, "We're going to be as expressive as we have to be, and we'll find ways to make it efficient," and that's what we've done. That's really the secret of what we've done: not just the massive codification and formalization of all of that common sense knowledge, but finding what turned out to be about 1,100 tricks and techniques for speeding up the inferring, the deducing process, so that we could get answers in real time instead of after thousands of years of computation.

So slow it down for me from computer speed to human speed. I want to go back to the thing I started to set up which was: who designed the American flag? And one said "Betsy Ross" and one said "Robert Heft." And the reason they differed is one gave me the 1776 flag and one gave me the 50 star flag we have today.

Similarly, I asked both of these systems how many minutes are in a year. That seems to be very unambiguous, but they gave me completely different answers. And the reason is that one used the calendar year, 365 days, and one used 365.24 days, a solar year.
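For concreteness, the two answers diverge only because of which sense of "year" each assistant silently picked; the arithmetic itself is trivial:

```python
# Minutes in a year under the two readings of "year" mentioned above.
MINUTES_PER_DAY = 24 * 60                  # 1,440

calendar_year = 365 * MINUTES_PER_DAY      # 525,600 minutes
solar_year = 365.24 * MINUTES_PER_DAY      # 525,945.6 minutes

print(calendar_year)   # 525600
print(solar_year)      # 525945.6
```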

So how would Cyc take that question and disambiguate it? Because, for one, it isn't clear which one I was even asking to begin with; I asked a bad question about minutes in a year. So just walk me through it like you were going through steps in a program. How does it get to 'oh, a year can mean two different things'? Because it doesn't seem like a year could mean two different things.

So in a way, the answer is potentially uninteresting in that there's no magic here. What you need to have is a lexicon, which we do, which essentially says: here are words in a natural language. And most of Cyc's lexical knowledge is about English. So Cyc knows that 'year' has multiple denotations, and in fact, for that matter, 'minute' has multiple denotations, and those denotations are the unambiguous logical concepts, which are quite distinct from each other. And so the idea of minutes as a unit of time, minutes as a unit of angular arc, and so on, may be ambiguous as English words, but they're not at all ambiguous at the logical level. You have different terms, different logical terms, for those, and so in a way the questions become less cute when you look at the logical form.

So one of the questions Cyc can answer is: can a can cancan? Which is an adorable type of question, but in a way it becomes less interesting when you look at how it does that, because it essentially disambiguates the first 'can' as being a tin can, the second 'can' as being 'is skill-capable of,' and the third and fourth 'cans' as referring to cancan dancing, and so on. And so then, once you have an unambiguous logical representation, Cyc basically looks at, in this case, the skill-capability rules it knows, and it knows things like: if something is going to be doing dancing, then it needs legs and it also, effectively, needs a brain. And since tin cans don't have legs and tin cans don't have brains, there are two good reasons why they can't cancan.
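A minimal sketch of the two steps just described, with made-up concept names standing in for whatever Cyc actually uses: first map each ambiguous word to an unambiguous concept, then check the prerequisites of the activity against what the candidate agent has.

```python
# Step 1 (illustrative): a lexicon maps ambiguous English words to
# unambiguous concepts. The concept names here are invented for the sketch.
lexicon = {
    "can": ["TinCan", "SkillCapableOf"],
    "cancan": ["CancanDance"],
}

# Step 2 (illustrative): what each agent has, and what each activity requires.
has = {"TinCan": set(), "Person": {"legs", "brain"}}
requires = {"CancanDance": {"legs", "brain"}}

def skill_capable_of(agent: str, activity: str) -> bool:
    """An agent can perform an activity only if it has everything the activity requires."""
    return requires[activity] <= has[agent]

print(skill_capable_of("TinCan", "CancanDance"))  # False: no legs, no brain
print(skill_capable_of("Person", "CancanDance"))  # True
```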

But to just stop there for a second. It seems like it kind of rabbit holes down forever because dancing could be like you ‘dance around’ an issue. You don't actually need legs to do it and it's metaphorical, and I mean I wonder: Do you think this is how humans learn common sense? Is that we just know a bazillion different things like ‘big things don't go in small things?’

Let me answer your question because in a sense, really the answer is sort of a clear yes and a clear no. So one type of answer to give you is that once you've disambiguated things like dancing, Cyc knows that cancan dancing refers to something which is a type of or a style of dancing the human physical activity or the human recreational activity. And so at that level, there's no ambiguity. In other words, once you have figured out what the meaning of cancan is, then the fact that in English we happen to have multiple meanings for the word ‘dance’ doesn't even come into the equation or the calculation.

So you've essentially transformed your query into an unambiguous form, and all the knowledge that Cyc has is unambiguous. It's independent of language, independent of English, and the fact that in English we use the word 'dance' to mean two or three very different types of things is sort of unimportant. And yes, Cyc happens to know that the English word 'dance' has these different denotations, but the knowledge it has, like 'dancing as a social activity requires legs' whereas dancing around an issue doesn't, and so on: that knowledge is in Cyc, but it has already disambiguated what type of dancing you meant as soon as it figured out what the referent of cancan was.

And then the other side of the question that you asked was, “is this related to how humans develop common sense and develop intelligence?” And for that I think it would be at the epitome of hubris for me to say ‘yes’ to that. It would be a lot like people in the 1600s and 1700s confidently proclaiming that clockwork automata explained how human intelligence works and so on.

I think that we understand a lot, but there's an awful lot we don't understand about both how intelligence works in humans and how it develops in humans. And so I would say our goal is not to try to shed light on that. It's to create mental prostheses, mental amplifiers that work along with human beings to make people more creative, enable people to solve harder problems, do more in parallel, mis-communicate with each other less and so on. Just like the electrification process and the electric appliances that came out 100 years ago certainly don't wash clothes the way that humans wash clothes and so on. But that's fine, they amplify what we're able to do with our muscles so we don't have to do things the same way we used to.

But I've always had a suspicion... I mean, you've spent nearly half a century working on this, and I have a big suspicion, so I'm not even asserting this is of the same order. But when you take a little kid and you show them some cats, like they see cats in the picture books, and then you're out one day and you see one of those Manx cats without a tail, they say "oh look, there's a kitten without a tail." And yet somehow they know it retains enough of this cat-ness thing that it's still a cat, even though every cat I've ever seen had a tail. But I know the tail isn't actually a requirement for cat-ness.

And I've always had this vague hunch that the way we do everything is that we don't actually have a list of rules, an infinite list of 'big things can't go in small things' kind of knowledge. Rather, we have these vague relationships between everything we know and everything else, so that we kind of seamlessly take information from one area and apply it to the other. And it's all this kind of amorphous... and we actually don't know explicitly big things can't go in... I mean, that isn't really the core of common sense.

So that's why I was asking you “is that what people do?” Because it's perfectly fair to say “I don't know how people do it, but we're trying to duplicate that ability” the way we know how to read a computer, or you're saying we think this is sort of what people do in a way is that people kind of learn all these rules of thumb and then they kinda know when to apply them and they just disambiguate naturally?

Well if you forced me to answer which in effect you've done, I would say that I think that what's really going on in people's minds is some kind of sophisticated simulation: namely we're effectively manipulating models which are the representations and the analogues of the physical objects and the processes and so on that occur in the real world.

And humans are very, very good, without even consciously realizing what they're doing, at manipulating those models in order to come up with answers. But I really think that what we're trying to do is to build an artifact that will be a useful appliance for people, to make people smarter, and that has been our goal all along. It hasn't been to try to explain human intelligence or the human development of intelligence or anything like that.

I think that what we want to know is: can these devices approximate the ability of humans? Or are we doing something so alien to them, they're always going to be kind of embarrassingly clunky?

Well, I would say there is a clear answer for you. You probably didn't think there was going to be a clear answer to that, but there is. And the answer is the following: if you ask people to introspect and articulate, as English sentences, how they're able to do something, people are actually able to do that. And the trouble is that we're able to fool ourselves, so that sometimes we can articulate things that turn out to be not actually an adequate explanation.

One of my favorite AI researchers from my era was William A. Woods, Bill Woods from BBN. And he did a system, a project called LUNAR, which was speech recognition, a speech understanding system. What he did was he got people to introspect on how they understood spoken language, and people are very good at in fact articulating that. And then he built a program based on that and it was a terrible disaster. It was a complete flop. And it turned out, people were just making up rationalizations for something that you and I have no real access to, which is: how do we really understand spoken language?

And so in cases where you get people to articulate things, you program them and they flop, that means that was a bad task to approach that way. And yet there are an awful lot of tasks where if you get people to articulate... especially if you're talking about experts articulating in the context of some case or some problem that they're working on and so on, the rules that they give you are actually sufficient, are actually adequate to build a program which is competent at doing that same task.

And for ones which involve perception, like speech recognition or image recognition and so on, people are notoriously very bad at introspecting on how they actually do it. But there are other tasks, mostly the complicated ones, not the ones that everyone just automatically does all the time, but the complicated ones that involve expert knowledge, that involve skills and techniques people had to learn over time, and that don't involve perceptual-motor coordination and so on, that people actually can reliably introspect on. And when you build systems that contain those rules of thumb, they actually do a pretty good job.

And one of the powerful effects of building AI systems that way, instead of say using machine learning, is that when the system gets something wrong, the person who's helping you build it can look at the step-by-step logical reasoning path and see exactly where it went wrong. Then they'll say, "Oh yes, I forgot to tell you: blah blah blah." Then you fix it up, add a new rule or whatever, rerun it, and it gets the right answer. So incrementally you can get these systems to be smarter and smarter and to be more and more competent.
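A toy illustration of that incremental loop, with invented rules and symptoms rather than anything from a real medical system: run the rules, read the trace, notice the missing piece of common sense, add it, and rerun.

```python
# Hypothetical rules for illustration only. Each rule is a condition
# (symptoms joined by " and ") paired with a conclusion.
def infer(symptoms, rules):
    """Fire every rule whose conditions all hold, recording each step as a trace."""
    trace, facts = [], set(symptoms)
    for condition, conclusion in rules:
        if all(c in facts for c in condition.split(" and ")):
            trace.append(f"{condition} => {conclusion}")
            facts.add(conclusion)
    return trace

# First pass: the only rule the expert thought to articulate.
rules = [("has_fever and stiff_neck", "suspect_meningitis")]
print(infer({"has_fever", "stiff_neck", "is_inanimate"}, rules))
# ['has_fever and stiff_neck => suspect_meningitis']  (wrong: it's a rusted-out car)

# Reading the trace, the expert says "oh, I forgot: inanimate things
# don't get human diseases," so the rule is tightened and rerun.
rules = [("has_fever and stiff_neck and is_animate", "suspect_meningitis")]
print(infer({"has_fever", "stiff_neck", "is_inanimate"}, rules))
# []  (the bad conclusion no longer fires)
```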

But I guess where I would go from that is: if you had asked me what's the biggest problem in AI, I would have said 'common sense.' The systems are just too brittle; they can't even do the most basic things a child can do. That's kind of the number one problem. And so let's say you've solved it. And let's say more than that: you get another thousand man-years, or ten thousand, and make it even richer and make it that thing you've dreamed about for so long.

What do you think you'd have at that point? Do you have the basis for a general intelligence? Do you have something that you could build creativity on top of? Or do you have Jarvis; you know, do you have the absolute best Alexa that the universe could imagine? In the end, what is that a building block of, or is it an end unto itself?

I think that there will be end points or leaves of that or ‘fat boundary’ of that which could always be improved or which can't actually function. So we're not focusing on image understanding or speech understanding and so on, so our system will be effectively blind and deaf and so on. But if you ignore those perceptual motor type skills and tasks and questions, then there's no reason why the system can't, and in fact to a large extent already is able to answer questions and solve problems at the level that you would expect humans to, to disambiguate ambiguous things, to interpret what people must have meant, to infer what the context of something must be, so you can tell if someone is making a joke or being sarcastic or hyperbolic or lying to you or telling you something that they believed to be true at some point in the past or whatever. So there's certainly no limit. There's certainly no special boundary around creativity.

Cyc has made creative conclusions and hypotheses for decades, so while we might like to think of creativity as uniquely human, there's actually nothing particularly mystical about it necessarily. Some aspects of creativity are things that are only open to a very small number of people, and in some cases you can't articulate it, and then eventually you find someone who can.

Giotto, about 700 years ago, was able to create the illusion of linear perspective in his paintings without actually understanding how he did it, and he could pass that on through apprenticeship to other people. But 100 years later, Brunelleschi and others had worked out exactly how you do that, with perspective and horizon lines and so on, and they were able to transform it from an ill-understood, mysterious ability into something which the average 7 or 8 year old could actually learn today.

Well, that's a pretty bold statement, that there's nothing particularly special about creativity. So let's take a minute and look there, because I think for many people, when they think about Banksy's graffiti or J.K. Rowling's Harry Potter or the Broadway show Hamilton, they have a hard time seeing those things, which they believe are inspirational and creative, as being as mechanistic as you seem to be suggesting. So make that case, that there's nothing really special about that.

Well, again, it depends on what people are capable of introspecting on and articulating. So if you pick, as you did, some of the world's best and most rare examples of creativity, you know, most people alive today couldn't do that and don't do that and so on. But there are small-scale instances of creativity that abound all the time in our own coping with the real world. There is a traffic obstacle up ahead, and we have to creatively think about how we're going to cope with that, or creatively think of an alternate route. And so there are a myriad, a plethora, of what you might think of as small demands on you to be creative just to live your life, just to get by every day. And those are the kinds of creativity that we can articulate. If someone asks us how we were able to come up with that, we often can figure out what we did, how we did it, and that's the kind that we can and have been imparting to our system.

It's funny you saying that about how I picked the rarest forms; it reminds me of the movie I, Robot, where the Will Smith character, who doesn't like robots, is talking to the robot buddy and he's down on robots. He says, "Can a robot write an opera? Paint a masterpiece?" And Sonny says, "Can you?"

Right, exactly. And so that doesn't mean by the way that we can't have programs that are extraordinarily creative. It just means that we have to find the right vocabulary to talk about the rules and the breaking of the rules.

There was a Google front-page app a couple of weeks ago that let people type in a melody, and then it would orchestrate it the way that Bach would have orchestrated it. But if you actually look online, there were two communities: one of which was gushing about it, and one of which was musicologists and professional musicians, who essentially said the compositions sounded like fingernails on a chalkboard to them, because the rules that Bach followed, and the meta-rules for when to break those rules and when not to, and so on, weren't being followed at all. It was some kind of statistical learning thing which, to you or me, would have sounded like Bach, but to people who actually know Bach sounded essentially like torture.

But you could be right. I remember reading though that Yo-Yo Ma played a concert at Steve Jobs’ house for him and his friends, and Steve Jobs remarked that it was the best argument he ever heard for the existence of God. That his performance was... that there was something transcendent about the performance.

But I guess your argument would be: well if you had a sufficiently smart Yo-Yo Ma he could deconstruct that and explain it, demystify it, and then it can be instantiated in a program, coded, and then the computer could make the same argument.

And with a lot of the most creative people on earth, like for example Giotto, they aren't able to explain how they're able to be creative, and so there isn't even an approach to mechanizing it.

I've read that when we try to learn how doctors diagnose things, the better the doctor, the less able they are to explain why, because they're not just following an expert system, 'if, then…,' in their head; it's really subtle. So my schtick, my area of interest, is consciousness. And just to be on the same page with terms: consciousness is the experience of being you. A computer can right now measure temperature, but it cannot feel warmth.

You for instance have many times said the computer can ‘understand’ you. I'm not sure computers can understand anything. We use it colloquially but the difference between measuring temperature and feeling warmth, that's consciousness and that's having an experience of the world. And I've always suspected that that experience of the world is how we build that model you were talking about earlier. How we model the world is because we experience it.

So would a sufficiently adept Cyc instantiation, or even any other computer instantiation, ever, in your mind, reach consciousness? Could it achieve that? Is that an emergent property, or is it something that can never come out of a fab, all the ones and zeros arranged in any combination you want from here to eternity? It will never actually experience the world; it can only ever measure it.

Well, I mean, it's sort of a philosophical chestnut by now: how do I know that you're really conscious, and so on. And to some extent, it doesn't really matter whether one person is the only real conscious being in the world, or whether all the other people really are conscious, or whether other people and computer programs are conscious. What matters is how they act and how they behave and how they interact.

And so if someone acts as though they were conscious, if someone answers questions correctly as though they were conscious, if someone takes decisions that reflect what you would ascribe to being conscious and so on, then what does it really matter whether they, quote unquote, are really conscious or not?

Let me take that on. If I built an ever more sophisticated robot that could disarm bombs, the minute the robot experiences the world and feels pain, I can't send him in there to blow up the bomb. His life is as precious as a human's. But if he says "I'm conscious, I'm conscious, I'm conscious" and he is just an automaton, just like the automata of the 17th century, then you just ignore it, because you could just as easily make a tape recorder say "I'm conscious, I'm conscious, I'm conscious."

So I think it's the only question that matters really because it will speak to whether computers in the end have rights, whether they are things or they are beings and that to me is a huge question.

Oh I couldn't agree more. I think that gradually, we will develop AIs where it is more and more and more difficult to decide morally, ethically, does it matter if we just turn this off or if we give it some dangerous job and so on? And I think that's going to be one of the issues that we have to grapple with in the coming decades... is that we're going to have AIs which essentially do complain and beg and so on.

That reminds me of a great story that Marvin Minsky told me, my late friend and colleague, about when he was at Lincoln Labs about 50 years ago. And in those days computer time was so precious that you submitted a deck of computer cards and the very first card said ‘how many CPU seconds to allow the program to run?’ And so he built a program that essentially would beg for time. So it would say ‘give 30 seconds’ on the job control card, but then once it started, all it would do is sit there for 15 seconds doing nothing. Then it would ring a bell on the Teletype console in the machine room and call the operator’s attention and say ‘I need 20 more seconds please.’ Then it would just sit there for another 15 seconds and do that again and say ‘I need another minute please.’ And so at the end finally after like half an hour, the operator just killed that particular job. And Marvin would storm into the poor operator’s room and say “Hey I put 15 seconds on the job control card. You're charging me for half an hour of CPU time,” and the poor operator would say “well your program kept asking for it,” and Marvin would say, "it always does that."

Bill it instead.

Right. Exactly. So that's a particularly simple case where there is the perception of consciousness, the perception of pain, the perception of emotion and because you and I happen to know what the algorithm was in the program, we don't feel particular sympathy or empathy for the program being turned off after 30 minutes or whatever and so I think it really is a very complicated issue, complicated ethical and moral issue.

One of the things that argues to some extent for treating AIs differently than we treat conscious people or animals that can feel pain is that at least in principle, any AI in that situation could be stored, backed up, duplicated, copied and so on. In that sense you aren't really killing it any more than in, say a Star Trek episode, the transporter is killing the person who's being transported.

So tell me a little bit more. Let's switch and put on a business lens real quickly. Is Cyc a product? Is it a technology? Is it something that is licensable? Is it something that's going to be in Walmart by Christmas? From a business standpoint, what is it, where is it going, and how does all that work?

Yes. So I, as you could tell, have no compunction about talking at length about what we're doing, and I'm happy to talk about that as well. So from my point of view, Cyc is a technology. But from the point of view of companies interacting with Cycorp, the right way to think of it is that we have a technology which sits at the center of products and services that we offer, and so we will partner with, typically, a large organization in order to take on some problem that they're currently facing, in some ways not unlike what the old expert systems technologies did 40 years ago.

But instead of being brittle, we have a flexible, common-sense solution as well, and so using knowledge of the domain, using knowledge of common sense, using aligned or mapped ontology-to-database-schema transformations and so on, we basically explain what each database contains and how to access it. We do that for all of the internal and external data sources that that company is using in that application, and then, based on that application, typically the next application with that company draws on all of that plus a little bit more, and so on. So there's a kind of 'knowledge network effect,' where companies that use the technology find that extending it to the next application, and the next and the next, goes faster and faster.

So with one organization, we do IT provisioning, and then based on that, they realized that we could put in information about different people's roles in the company so it could make recommendations based on that. We could put in rules about what people should and shouldn't be doing, so we could do a kind of compliance function and detect things like insider trading or violation of policies or laws and so on.

So gradually we build up a model of an enterprise and hopefully become central to the way that enterprise models its own functions and its own structure. So that's typically what we do, but I can see more and more widespread deployment of this in people's lives in the near future. And I can't say more at this time, but as for your prediction about Cyc doing something at Walmart by Christmas, if you want, let's come back to that at Christmas.

All right. When I read your intro, you not only have been in the industry a long, long time, but you've had accolades since the very beginning. And I'm curious: did you know John McCarthy, who coined the term 'AI'? Did you know Doug Engelbart, who did the mother of all demos? All of the early people, you know, Claude Shannon, all these early people who were around and kind of still touch our time. I'm curious: did you have experiences with any of them?

Yes, with most of them. And up there I would also rank people like Alan Kay and Allen Newell and Herb Simon and others, and many of the people you're talking about actually were very enthusiastic about the Cyc project because, for example, John McCarthy, ten years before I started it, was also trying to sound a clarion call that someone needed to do something like this. So he was very happy when we started Cyc... and in fact, off and on, he actually helped with the project and consulted with us on it, and the same with Marvin Minsky, and the same with many of the others that you mentioned. But you know, in some ways it was a much smaller, much closer-knit community in those days, and so almost everybody knew almost everyone.

So who was the smartest? I'm trying to remember this. I'm going to get it wrong. Either Minsky said Asimov and Carl Sagan were the two smartest people he knew, or Asimov said Minsky and Carl Sagan were. So like when you think about the intellectual giants, all these people you come across, who sticks out in your mind as like, they were a genius? They were truly like a genius of the ages?

So the two people... the more I think about it, the more people I want to add to this list, but I would say that Ed Feigenbaum was in a way the most thoughtful and diligent, as well as creative, person in terms of thinking about what needed to be done and putting in the effort that was necessary to do it, even if it took years or decades. And he instilled in me that, you know, in your lifetime you're only going to have the chance to do one or two or three big projects that might really change the way the world works. And so when you get that opportunity, don't let it go by, and don't stint from persevering with it until it's done.

And so that really has influenced a lot of the way we have operated over the last thirty-five years. We've kept a very low profile. We don't write a lot of papers or go to conferences or seek any kind of recognition in the field. Instead we're sort of getting our work done as quickly and as correctly as we can.

The other person I was thinking of was Marvin Minsky, who was kind of playful and was kind of an expert at knowing which rules to break to be creative, and I was constantly amazed both by what he would come up with and also by how he would inspire creativity. So his PhD students would come to him and discuss some thesis idea, and Marvin would say something and they wouldn't understand him, and it would turn out that Marvin really didn't know what he was suggesting exactly. But the bright students' belief that he must have had some good idea that he was trying to impart to them, in most cases, would actually force them to come up with some good idea, which is why he said he was the advisor on so many brilliant PhD theses.

He would always just say something cryptic like, "well I believe you should always plant seeds and you should always water them."

Yeah. Like Chance the Gardener.

Exactly. I wasn't going to say it, but that's what it reminded me of.

The other person that I would add to that list is Alan Kay: someone who was not just deep, but also very, very broad in terms of what he attended to, what he forced himself to learn and think about. And so he would think of the big picture, and very often, as I was doing things, both as a student and as a professor and working on Cyc, Alan is the one who would come up with the broader context in which I should look at what I'm doing and evaluate what I'm doing. In some ways that's a function that you're playing through these podcasts: getting people to take a half step back and think about what they're doing, not just do it because they need to do it.

Did you know Weizenbaum?

Yes, and let's just say that there are several people who believed that their paradigm was correct, and it was very hard to get them to see beyond the fringe of where those almost religious beliefs penned them in (and fortunately our paradigm is right). So there are a lot of people like that. Penrose is another example, someone who believed that his paradigm is the right paradigm. Chomsky's actually an even cuter example, because Chomsky was wise enough to change his mind a couple of times, but he left in his wake entire subfields of linguistics that didn't change their mind and then became dissatisfied with Chomsky.

Well, you know, Minsky of course... they thought Dartmouth was only going to take a summer in 1956, because they thought intelligence must have two or three basic laws, like motion and electricity and all that. And of course he ended up saying 'it's a hundred things,' and so clearly a lot of evolution...

It's funny, Bob Metcalf lives here in town and I mentioned Weizenbaum to him because I was talking about Eliza, and he said “ahh yes, Weizenbaum... I took Comp 642 from him,” and I think to myself, ‘that was 50 years ago, I can't even remember what I had for breakfast.’ And somehow he remembered that course number [from] half a century ago, and I was like, ‘Wow’.

I understand that. Many of the conversations I had with those people, 30, 40, 50 years ago really have stuck with me, and not just shaped the direction that I went in, but still to this day, shape what I do on a day-to-day basis.

Well this was a brilliantly fun hour. I'm sure anybody listening to the show can tell I had the best time. I would love to invite you back. I'm only stopping because I hit an hour. I have all these other things I want to ask you about, all of it having to do with Cyc. I'd love you to come back.

Thank you. I'd love to do that, and in fact since you are local, let me invite you to come and see what Cyc is up to and what Cyc is capable of today.

I will be there in an hour. Thank you so much for your time.

Thank you, Byron, bye, bye.