Episode 103: A Conversation with Ben Goertzel

Byron Reese discusses AI with Ben Goertzel of SingularityNET, diving into the concepts of a master algorithm and AGIs.

Guest

Dr. Ben Goertzel is the CEO of SingularityNET and the Director of Research of the Singularity Institute for Artificial Intelligence. His research spans natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. As leader of the OpenCog project, Goertzel aims to build an open-source artificial general intelligence engine. In 2001, he founded Novamente LLC [Nova Mente for “new mind”], which supplies software products and services leveraging artificial general intelligence technology.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today, my guest is Ben Goertzel. He is the CEO of SingularityNET, as well as the Chief Scientist over at Hanson Robotics. He holds a PhD in Mathematics from Temple University, and he's talking to us from Hong Kong, where he lives. Welcome to the show, Ben!

Ben Goertzel: Hey thanks for having me. I'm looking forward to our discussion.

The first question I always throw at people is: “What is intelligence?” And interestingly you have a definition of intelligence in your Wikipedia entry. That's a first, but why don't we just start with that: what is intelligence?

I actually spent a lot of time working on the mathematical formalization of a definition of intelligence early in my career and came up with something fairly crude which, to be honest, at this stage I'm no longer as enthused about as I was before. But I do think that that question opens up a lot of other interesting issues.

The way I came to think about intelligence early in my career was simply: achieving a broad variety of goals in a broad variety of environments. Or as I put it, the ability to achieve complex goals in complex environments. This tied in with what I later distinguished as AGI versus narrow AI. I introduced the whole notion of AGI and that term in 2004 or so. It has to do with an AGI being able to achieve a variety of different complex goals in a variety of different types of scenarios, unlike the narrow AIs that we have all around us, which basically do one type of thing in one kind of context.
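
[In symbols, the idea reads roughly as follows—an editorial sketch in the spirit of that formalization, close to Legg and Hutter's universal intelligence measure; the specific weighting scheme here is an assumption, not Goertzel's published definition.]

```latex
% Intelligence of an agent \pi as weighted goal-achievement across
% a space of goals G and environments E (a sketch, not the exact
% published formalization):
\[
  \mathrm{Int}(\pi) \;=\; \sum_{(g,e)\,\in\,G \times E} w(g,e)\; V^{\pi}_{g,e}
\]
% Here V^{\pi}_{g,e} measures how well \pi achieves goal g in
% environment e, and the weight w(g,e) emphasizes complex
% goal--environment pairs, capturing "complex goals in complex
% environments."
```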

I still think that is a very valuable way to look at things, but I've drifted more into a systems theory perspective. I've been working with a guy named David (Weaver) Weinbaum, who recently did a thesis at the Free University of Brussels on the concept of open-ended intelligence, which looks at intelligence more as a process of exploration and information creation arising in interaction with an environment. In this open-ended intelligence view, you're really looking at intelligent systems as complex, self-organizing systems, and the creation of goals to be pursued is part of what an intelligent system does, but isn't necessarily the crux of it.

So I would say understanding what intelligence is, is an ongoing pursuit. And I think that's okay. In biology, the goal isn't to define what life is in a ‘once and for all’ formal sense before you can do biology; in art, the goal isn't to define what beauty is before you can proceed. These are umbrella concepts which can then lead to a variety of different particular innovations and formalizations of what you do.

And yet I wonder, because you're right: biologists don't have a consensus definition for what life is, or even death for that matter. You wonder at some level if maybe there's no such thing as life. I mean, maybe it isn't really… and so maybe you'd say that's not really even a thing.

Well, this is one of my favorite quotes of all time, [from] former President Bill Clinton: “That all depends on what the meaning of ‘is’ is.”

There you go. Well, let me ask you a question about goals, which you just brought up. When we're talking about machine intelligence or mechanical intelligence, let me ask point blank: is a compass's goal to point north? Or does it just happen to point north? And if it isn't its goal to point north, what is the difference between what it does and what it wants to do?

The standard example used in systems theory is the thermostat. The thermostat’s goal is to keep the temperature above a certain level and below a certain level, or in a certain range, and in that sense the thermostat does have a goal—you know, it has a sensor, it has an actuator, and a very simple local control system connecting the two. So from the outside, it's pretty hard not to attribute a goal to a heating system like that, with a sensor, an actuator, and a decision-making process in between.
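
[Editor's sketch: the control loop described above as minimal code—a sensor, an actuator, and a trivial decision rule in between. The setpoints, drift model and numbers are illustrative assumptions.]

```python
# Minimal thermostat loop: a sensor, an actuator, and a trivial
# decision rule in between. From the outside the loop "pursues" the
# goal of keeping temperature in range, but nothing inside it
# represents that goal.
import random

TARGET_LOW, TARGET_HIGH = 19.0, 21.0  # assumed setpoints, degrees C

def read_sensor(current: float) -> float:
    """Stand-in for a real temperature sensor: adds ambient drift."""
    return current + random.uniform(-0.5, 0.5)

def run_thermostat(steps: int = 20) -> None:
    temperature, heater_on = 18.0, False
    for _ in range(steps):
        temperature = read_sensor(temperature)
        # The entire "decision-making process in between":
        if temperature < TARGET_LOW:
            heater_on = True
        elif temperature > TARGET_HIGH:
            heater_on = False
        # Actuator: the heater nudges the temperature up or down.
        temperature += 0.8 if heater_on else -0.3
        print(f"{temperature:5.1f} C  heater={'ON' if heater_on else 'off'}")

if __name__ == "__main__":
    run_thermostat()
```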

Again, the word "goal" is a natural language concept that can be used for a lot of different things. I guess some people have the idea that there are natural definitions of concepts that have profound and unique meaning. I tend to think that only exists in the mathematics domain, where you can say the definition of a real number is something natural and perfect because of the beautiful theorems you can prove around it. In the real world, things are messy, and there is room for different flavors of a concept.

I think from the view of the outside observer, the thermostat is pursuing a certain goal. And the compass may be also, if you go down into the microphysics of it. On the other hand, an interesting point is that from its own point of view, the thermostat is not pursuing a goal: the thermostat lacks a deliberative, reflective model of itself as a goal-achieving agent. It's only to an outside observer that the thermostat is pursuing a goal.

Now for a human being, once you're beyond the age of six or nine months or something, you are pursuing your goals relative to the observer that is yourself. You have a sense that you're pursuing goals, and I think this gets at the crucial connection between reflection, meta-thinking and self-observation on the one hand, and general intelligence on the other. It's the fact that we represent within ourselves the fact that we are pursuing some goals that allows us to change and adapt those goals as we grow and learn, in a broadly purposeful and meaningful way. If a thermostat breaks, it's not going to correct itself and go back to its original goal, right? It's just going to break, and it doesn't even make a halting and flawed attempt to understand what it's doing and why, like we humans do.

So we could say that something has a goal if there's some function which it’s systematically maximizing, in which case you can say of a heating system or a compass that they do have a goal. You could say that it has a purpose if it is representing itself as a goal-maximizing system and can manipulate its representation somehow. That's a little bit different, and then we also get to the difference between narrow AIs and AGIs. I mean, AlphaGo has the goal of winning at Go, but it doesn't know that Go is a game. It doesn't know what winning is in any broad sense. So if you gave it a version of Go with, like, a hexagonal board and three different players or something, it doesn't have the basis to adapt its behaviors in this weird new context and figure out what the purpose of doing stuff is in this weird new context, because it's not representing itself in relation to the Go game and the reward function in the way a person playing Go does.
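
[Editor's sketch: one way to pin down the goal/purpose distinction drawn here; the notation is an editorial assumption, not a formula from the conversation.]

```latex
% S "has a goal" if some function f is systematically maximized by
% its behavior; S "has a purpose" if it additionally represents, and
% can manipulate, a model of itself as an f-maximizer.
\begin{align*}
  \mathrm{HasGoal}(S,f)    &\iff \text{the behavior of } S
      \text{ systematically increases } f,\\
  \mathrm{HasPurpose}(S,f) &\iff \mathrm{HasGoal}(S,f) \,\wedge\,
      S \text{ represents } \langle S \text{ maximizes } f \rangle\\
  &\qquad\; \wedge\, S \text{ can manipulate that representation}.
\end{align*}
% On this reading a thermostat satisfies HasGoal but not HasPurpose,
% a human Go player satisfies both, and AlphaGo sits with the
% thermostat.
```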

If I'm playing Go, I'm much worse than AlphaGo; I'm even worse than, say, my oldest son, who's like a one-dan level Go player. I'm way down in the hierarchy, but I know that it's a game of manipulating little stones on the board by analogy to human warfare. I know how to watch a game between two people, and that winning is done by counting stones and so forth. So being able to conceptualize my goal as a Go player in the broader context of my interaction with the world is really helpful when things go crazy and the world changes and the original detailed goals don't make any sense anymore—which has happened throughout my life as a human with astonishing regularity.

I think most of the time these sorts of definitional discussions are really meaningless, like: is that really dessert, or is that really a vacation, or is that really…? None of that matters. But I guess with things like goals and self and understanding, what we're talking about in the end is: can machines ever be moral agents, and do machines have rights on the basis that they can experience pain? So there are weighty issues that hang on these obscure questions. So I guess talking to somebody who runs…

That's the nature of human life. We have weighty issues hanging on things that we hardly understand anything about.

So from a guy that is the CEO of SingularityNET, and based on what you've written before, I assume that you believe at some point we can duplicate… that machines can have ‘selves’ and they can experience the world and they can achieve consciousness and they can be moral agents and all of the rest. Is that true? Do you believe that?

Yes, I do believe that. And I also believe that by experimenting both with intelligent machines of various sorts and with modified human brains and brain-computer interfaces, we're going to come to a much better understanding of these concepts we're discussing than what we have now. In physics you have concepts like mass and velocity and acceleration and charge, and they may not be the profound and ultimate definitions from a philosophy point of view. There may be weird conditions near a spinning black hole where our common-sense understanding of these definitions doesn't work anymore, but you can have workaday concepts in the domain where we normally live, for dealing with Newtonian physics and electromagnetic physics.

I don't think we necessarily have that workaday conceptual vocabulary for AGI systems yet, because I think all of our ways of thinking about mind and consciousness and goals and intelligence are very overfitted to human beings. And as we extend to create many different types of intelligence—whether by building AGIs or by networking people’s brains together—I think we're entering areas where these legacy concepts are not really going to be that useful anymore. Same as in biology, where they may have had a sort of workaday notion of what life is: a rock isn't very alive; a person is, a bacterium is. But once you're doing synthetic biology and nanotechnology and you're engineering molecular machines, suddenly the rough-and-ready approximate conceptualization of things is not just philosophically incomplete, it's not helpful anymore.

So I think we're going to find all these words we're throwing around are not helpful as we really get into AGI and so forth, and as we move into the next couple of decades, working together with AGIs, we're going to come up with a lot of new concepts and theories that make a lot more sense. Some people would like it if we could come up with a perfect theoretical understanding of AGI with pencil and paper beforehand, so that we could somehow control or predict exactly where AGI is going to go before we build it. But I’m sorry, that's not going to work, for both practical and theoretical reasons.

But I'm really curious, because we have consciousness—we experience the world. We don't measure temperature, we feel warmth. And that's been called the last great scientific question: we neither know how to pose it scientifically, nor do we know what the answer will look like. And it's the thing we're all most immediately aware of in our lives, that we experience the world.

I posit that the belief we will build machines that experience the world is an article of faith based on a simple assumption that human brains are machines, and human minds are machines, and consciousness is a mechanistic process. But do we know that for a fact?

Well, my perspective isn't mechanistic at all. I think that different people working on even the same AI systems have different views on these broader concepts. There are certainly some people involved with OpenCog who take more of a mechanistic view. I'm really more of a panpsychist, in that I believe consciousness is an attribute that attaches to everything that exists… And then, to be honest, I'm also a bit of a mystic, in that I think the space-time continuum we exist in is probably just one fragment of some much, much larger realm of being.

I edited a book on psi phenomena: ESP, precognition, reincarnation, remote viewing, for instance. We could do a whole other podcast on bizarre phenomena that are supported by data but don't fit into standard materialistic and mechanistic views of the world. So whatever consciousness is—and I have my own ideas about that—it appears to be associated with the hunk of meat in our heads, which we call our brains, and with our bodies. And in that case, the question is: can consciousness be similarly associated with a bunch of wires on a chip, or with a bunch of light from an optical processor or something?

One doesn't have to assume that consciousness is comprised of physical and mechanical operations to think that consciousness can associate with an electrical machine the way we assume it associates with a biochemical machine. And I think we're going to be able to explore these issues in a much more empirical way going forward with brain-computer interfacing. This is what I refer to as second-person science. So, what if instead of talking to each other over a phone line, you and I could connect our brains together? It would be like an Ethernet 7.0 cable or Wi-Fi, right? We connect our brains together, and then we increase the bandwidth, so that connection would gradually move toward the condition of becoming like one brain. As our brains became more and more richly connected, we would feel each other’s consciousness, as if we were becoming two halves of one brain.

And then if we decreased the bandwidth of that connection, we would start to feel very separate and distinct again, and this would be a second-person knowing of each other. I might try that same experiment with a brick: by wiring my brain into the electrodynamics inside a brick, I might feel what it is to be a brick, but my guess is it's going to feel much less interesting and complicated than what it means to be a human. And now suppose I wire my brain into the computer chip running an OpenCog AGI algorithm—what do I feel? Does it feel like a brick? Does it feel like a vast, meaningless conglomeration with no feeling or experience? Or do I sort of meld my consciousness in with the consciousness of that AGI system, in the same way that happened when I wired my brain into the brain of another human? Then you can do this with more than one [person]. When we wire our brains together, we can have a third observer wire their brain into the group as well, and then they could report: "Yes, these guys are not lying when they say that they were melding their minds, experiencing each other's consciousness."

I think right now we're restricted to looking at data tables read out from laboratory instruments, or else our own first-person subjective experience, which can be communicated only through language or other indirect media like drawing. But with brain-computer interfacing—brain-to-brain interfacing—that will bring conscious experience into the realm of science in a quite interesting and different way.

Our direct experiences of each other's consciousness will be auditable within a community of observers, which is really the essence of how scientific truth is arrived at. And this is going to lead us to a very different ontology of conscious states and a different theory about how consciousness attaches to matter. Right now we're in the position of people in the Middle Ages making theories about what’s in outer space without a spaceship or a telescope, looking at consciousness with the limited tools we have.

I'll wrap this up with: do you believe we're going to get to a general intelligence, or to that world you're talking about, through a series of incremental improvements? Or is there some ‘ta-da’ breakthrough that will just happen one day, that we'll stumble on? And regardless of your answer to that, when do you think we'll have it? Because when I ask people on my show that, I get ‘between five and 500 years,’ so I'm kind of curious where in that broad range of timing you would place your poker chip.

Yeah, about timing first of all: I think if it can be done with standard digital computers and big networks of them, which I'm 90–95% sure of, then we're five to 30 years off. We're not 500 years off. I mean, there's a lot of computing power, and there are a lot of smart people thinking about AI now. There's not as much focus on AGI as there could be, but that could change in five to ten years, or even one or two years. We're years to decades off—unless there's something really big we're overlooking and we need, say, quantum computing hardware to do something, and then it could be 50 or 70 years.

But even in that case—in the very unlikely case that you need some more brain-like hardware/wetware/quantum-ware infrastructure—even in that unlikely case, it's going to be this century. I mean, we're building quantum computers at breakneck speed now, and we're doing synthetic biology. So I think it's not 500 years off. People who say that are just denying the reality of exponential growth.

In terms of the hard takeoff or soft takeoff—I need to come up with better words for this—I think it will be more like a semi-hard takeoff, right? What I think is there are going to be gradual improvements for a while, and these are going to get us to the point where we have an AGI system that can do real math and computer science at, let's say, the level of an undergraduate from a good university. And you know, it's still going to be pretty dramatic in that period, getting up to that point.

So if you look now in certain fields of AI, like deep neural nets, computer vision, natural language processing, you have new papers coming out every week that do something new and amazing, and it’s mind-numbing even to keep up with all the new papers right now. I think in a few years we're going to have a situation regarding AGI where there is a new paper about AGI every week, new code for AGI every week, and we're just going to feel the amazing progress toward AGI the way we're feeling it now in computer vision or deep neural nets or natural language processing. That burst of activity can get us to an AGI that can read Russell and Norvig’s textbook on AI, then download some R code for formal concept analysis, run it on a data set and compare it to a clustering algorithm.
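
[Editor's sketch of the kind of exercise described: run a naive formal concept analysis on a tiny binary data set and compare it with a k-means clustering. The data, names and brute-force enumeration are illustrative assumptions—in Python rather than R.]

```python
# Naive formal concept analysis (FCA) on a toy binary context,
# compared against k-means. Brute force is fine at this scale;
# real FCA code would use a smarter algorithm.
from itertools import combinations

import numpy as np
from sklearn.cluster import KMeans

objects = ["sparrow", "penguin", "bat", "salmon"]
attributes = ["flies", "has_wings", "lays_eggs", "lives_in_water"]
context = np.array([
    [1, 1, 1, 0],  # sparrow
    [0, 1, 1, 1],  # penguin
    [1, 1, 0, 0],  # bat
    [0, 0, 1, 1],  # salmon
])

def extent(attrs):
    """Indices of objects having all the given attributes."""
    if not attrs:
        return frozenset(range(len(objects)))
    return frozenset(np.flatnonzero(np.all(context[:, list(attrs)] == 1, axis=1)))

def intent(objs):
    """Indices of attributes shared by all the given objects."""
    if not objs:
        return frozenset(range(len(attributes)))
    return frozenset(np.flatnonzero(np.all(context[list(objs), :] == 1, axis=0)))

# Every formal concept is a closed pair (extent(B), intent(extent(B)))
# for some attribute subset B, so enumerating subsets finds them all.
concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(range(len(attributes)), r):
        objs = extent(attrs)
        concepts.add((objs, intent(objs)))

for objs, attrs in sorted(concepts, key=lambda c: -len(c[0])):
    print([objects[i] for i in objs], "<->", [attributes[j] for j in attrs])

# A flat two-way clustering of the same rows, for comparison.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(context)
print("k-means clusters:", dict(zip(objects, labels)))
```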

Once we have an AGI that can do math and can do science at the undergraduate level, then the ground is there for a hard takeoff, because that system can absorb human knowledge, it can do math and computer science, it can do hardware design based on the properties of materials, and it can also interface directly with Mathematica, MATLAB and R. It can also copy itself and put itself on a thousand different computers, and so forth. So at that stage, who knows how fast things could progress?

You could get a doubling of intelligence every month, or even, for a while, every hour or something. But I think that point where you get the ‘hard takeoff’ is going to come from having an AGI that can fully absorb all of human culture’s knowledge of math, computer science and engineering. It's not going to come from some rogue self-improving process off in a box somewhere.

So tell me about SingularityNET. For the listeners who aren't familiar with it, tell that story and then kind of update us on where you are, and what we can look forward to you launching.

Absolutely. So let us plunge from the area around the philosophy of consciousness and brain-computer interfacing to the software that we are rolling out this month and through the rest of this year. My real background and passion regarding AI is to create general intelligence at the human level and beyond. I believe very likely—although I can't be absolutely certain—we're able to do that using computer programs running on big distributed networks of the kinds of computers we have now. But it remains a difficult problem, and it's also not going to be like an AGI running in a box, studying its simulated virtual navel. The AI system that gets to AGI first is going to be one that is working together with the amazing amount of data, computing networks, narrow AI, and other software and hardware systems all around the planet.

Look, we're building a global brain on planet Earth right now, connecting together various computing and communication systems, other sources of data, and medical sensors, and we're going to get to AGI first by building the cognitive cortex for this global brain—something that can work together with all the other computing systems and with the people on the planet, applying higher-level generalization and abstraction on top of all that. We do need to write algorithms for… lifelong learning and transfer learning. I've been working on that for a long time within my OpenCog project, which is an open-source R&D project; you can find it online.

But I think these very powerful algorithms for learning and abstraction are only an early part of the story. You need to be connecting these really powerful abstract algorithms with other AI tools that are doing more practical things: processing data from various sensors, analyzing data sets, helping people achieve various goals. So then you need the right sort of connection fabric and infrastructure for the more abstraction-oriented AI agents to connect with the more concrete, nitty-gritty, specialized AI agents, so that different AI agents at various levels of abstraction and with different focuses can connect together into what AI pioneer Marvin Minsky called ‘the society of mind.’ But I think of it more as an ‘economy of mind’, because AI agents must not only outsource work to each other, share data and interact with each other, but must be able to rate each other for how well they did and pay each other for their work.
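
[Editor's sketch of that ‘economy of mind’: agents that outsource work, pay each other in a token, and rate each other afterwards. All names, prices, the toy ledger and the routing policy are illustrative assumptions, not SingularityNET's actual API.]

```python
# Toy "economy of mind": AI agents advertise services, outsource work
# to each other, pay in a token, and rate each other afterwards.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Agent:
    name: str
    service: Callable[[str], str]
    price: int                      # tokens charged per call
    balance: int = 100
    ratings: List[int] = field(default_factory=list)

    @property
    def reputation(self) -> float:
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

class Marketplace:
    def __init__(self) -> None:
        self.agents: Dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        self.agents[agent.name] = agent

    def outsource(self, buyer: Agent, task: str, rating: int) -> str:
        """Route a task to another agent, then pay and rate it."""
        candidates = [a for a in self.agents.values() if a is not buyer]
        # Toy routing policy: cheapest first, reputation as tiebreak.
        provider = min(candidates, key=lambda a: (a.price, -a.reputation))
        result = provider.service(task)
        buyer.balance -= provider.price      # payment flows...
        provider.balance += provider.price
        provider.ratings.append(rating)      # ...and so does reputation
        return result

market = Marketplace()
market.register(Agent("vision", lambda t: f"objects in {t}: [cat]", price=3))
market.register(Agent("summarizer", lambda t: f"summary of {t}", price=2))
planner = Agent("planner", lambda t: t, price=1)
market.register(planner)

print(market.outsource(planner, "the quarterly report", rating=5))
print({a.name: a.balance for a in market.agents.values()})
```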

And these are not all living in a box somewhere. They're embedded in the world economy. They're using real compute resources and data resources and human resources, so they need to be part of the existing money economy for this purpose. The SingularityNET project is something I launched with Simone Giacomelli and a bunch of others in 2017, and it is oriented towards providing a decentralized, blockchain-based, democratically controlled framework and platform in which many different AIs can connect together, share information with each other, outsource work to each other, rate each other, pay each other, and organize into a collective society of AI agents with an intelligence greater than the sum of the parts.

Some of these agents are doing very nitty-gritty practical things, like labeling what objects and events are in images, answering people’s questions, or analyzing biomedical data. Some of them are doing more abstract things, like meta-learning and reasoning about what modes of reasoning or learning work best in which contexts. Some of them are doing softer things, like trying to understand human emotion from language and voice. But all these different AI agents are connecting together in this common framework. And so we funded this and got it going in late 2017 via an ICO, a token generation event that we did that December, back when cryptocurrency was in a little more of a bull market than it is right now.

But the cryptocurrency aspect is really mostly on the back end, so there is an efficient and customized way for AIs to compensate each other for their work, and we can use the cryptographic token to reward AIs and people doing useful things in the network. If I'm right, this can serve two purposes at once. It can be the way that AGI sort of crystallizes out of many different AI agents with different characteristics and purposes, written by different people. And it can also be a way of steering the global AI community away from oligopolistic control by a few large companies and major government military/intelligence organizations—ensuring that the crystallization of AGI grows out of the provision of benefit across all different countries, social classes and vertical markets, and out of the incorporation of AI algorithms and ideas from developers everywhere, not just those who happen to live somewhere that enables them to get a job with a big tech company or a government intelligence organization.

What do you hope happens out of all of that? What can you reasonably expect to happen?

Well, we're creating the SingularityNET—that's why it's called SingularityNET. We're going to launch the technological Singularity. But of course, when you're looking at it at the microscopic level, year by year, the Singularity is made of small actions by human beings involved with programs that exist right now. This comes down to a bunch of specific things. We have three different network effects that we need to get going in order to make the SingularityNET vision work.

One of them is we need to get more AI developers to put their AI code into the SingularityNET platform instead of just putting it on GitHub, where it sits there waiting for somebody to figure it out and download it. On the platform, the code goes live in a container running in the cloud, so that anyone can access your AI. We need to get more and more developers to want to do this with their AI code, so we have something like a living organism comprising the different AI agents and services contributed by developers around the world.

Another thing is we need to get the different AIs in this network to talk to each other, share data with each other, and outsource work to each other, so that the whole can be greater than the sum of the parts. This requires AI developers to think in a little bit of a different way than they have been. Just like in my lifetime we made a shift from writing our own data structures and algorithms to accessing other people's data structures and algorithms from libraries, people now need to make a shift to agent-based software design, where they write AIs expecting that their AI is going to be consulting other AI agents on things as it goes about working.

So those are two different network effects we need to see. One is getting AI developers to actually use this platform to put their stuff live in containers online, wrapped in our APIs. The other is getting the AIs talking to each other, so each AI isn’t like a hermit living somewhere and processing data for its own benefit; when they're talking to each other, they can synergize and you really get a society of mind. And then the third network effect we need to see to make this work is on the demand side. We need to get users—companies large and small, and independent developers who need AI—to use the decentralized AI network instead of using Amazon's AI APIs, Google’s APIs, or Tencent’s APIs.
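
[Editor's sketch of that demand-side shift: resolving an AI service through a decentralized registry and calling whichever provider it returns, rather than hard-coding one vendor's endpoint. The registry URL, service id and JSON shapes are hypothetical, not a real API.]

```python
# Hypothetical demand-side client: resolve an AI service through a
# decentralized registry and call whichever provider it returns,
# instead of hard-coding one vendor's endpoint.
import requests

REGISTRY_URL = "https://registry.example.org"  # hypothetical registry

def resolve(service_id: str) -> str:
    """Ask the registry which provider currently serves this service."""
    resp = requests.get(f"{REGISTRY_URL}/services/{service_id}", timeout=10)
    resp.raise_for_status()
    return resp.json()["endpoint"]  # assumed response shape

def call_service(service_id: str, payload: dict) -> dict:
    """Invoke the resolved provider with a JSON payload."""
    resp = requests.post(resolve(service_id), json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Usage, against a hypothetical deployment:
# call_service("image-labeling", {"image_url": "https://example.org/cat.jpg"})
```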

Instead of going and getting subscriptions to a bunch of random web services, we need to get AI users to actually use this platform, and towards that end, one thing I've been involved in over the last few months is spinning off a separate company from SingularityNET called Singularity Studio. Now, SingularityNET is a decentralized AI network which is democratically operated, like a digital biological organism, but we are seeding that network through a non-profit organization called the SingularityNET Foundation. It's analogous to the Ethereum blockchain, which is also a decentralized, democratically organized digital organism, but with an Ethereum Foundation in Switzerland that governs its operation from a conventional legal and structural point of view. So the SingularityNET Foundation, which is guiding the growth of the SingularityNET decentralized AI network, is spinning off a for-profit company called Singularity Studio, which is its own corporate entity.

The goal of Singularity Studio is to design, create, market and monetize enterprise software products whose back end is the decentralized SingularityNET AI network. So we're playing with similar elements of AI, but the difference is: these other companies doing enterprise AI sell software products plus services back-ended on, frankly, their own proprietary AI code, drawing on the open-source community where they can, whereas Singularity Studio's AI products are back-ended on this decentralized AI network from SingularityNET, which we're launching at the end of this month, February 2019—kicking off the network effects immediately after the launch of the beta version, and then doing the proper workshops to get more software developers using the platform.

We’re reaching out to companies I've worked with before, when I was being a corporate AI consultant to get more and more companies to use the platform, and then at the same time, we're kicking off the Singularity Studio spin off by means of the Singularity Studio product. This is breaking new ground in business and corporate structure and business models because we are trying to very tightly link together this decentralized open AI network, and this whole for-profit enterprise software company. You have analogies to that in the world now—you have Ethereum network and Ethereum Foundation and then ConsenSys, which is a company that does consulting on top of that, and of course you have Linux which is open source and then Redhat (which was just bought by IBM for $36 billion) that's making enterprise software on top of Linux.

We're creating more of a purposeful, well-architected coupling between the decentralized network and this enterprise software company, so the two can grow together—again, with the dual goals of working as fast as possible toward general intelligence, and of making sure this general intelligence is governed and regulated as democratically and participatorily as possible. Because I think the latter is how we get to a beneficial AGI—an AGI that respects the rights and the beauty and the welfare of other sentient beings—by going through human beings. It’s not by having an expert committee of ethics gurus working for the UN, Google, IBM and Tencent pontificating about what should be good for the world. The way you get to a beneficial AGI is by bringing the whole beautiful, loving mess of humanity into the process of building and sculpting it, so that the AGI that comes about does so in a democratic manner.

Actually, we're having a workshop on exactly this in Malta, at the Malta AI and Blockchain Conference in May, on AI law and citizenship—thinking about not just how you build AGI, but how you work AGI into the legal structures all around the world. So when you ask me about SingularityNET, which is owned by everyone and no one, and controlled in a democratic and participatory way by all its AI developers and users: how does such an AI network play into the legal systems? Can an AI be a citizen, the way the Sophia robot and software platform at Hanson Robotics was made a citizen of Saudi Arabia? Can an AI become a citizen of Hong Kong or the US? Can it start founding or modifying companies through corporate bylaws? Can a DAO—a decentralized autonomous organization—register itself as a company?

We're working with the Maltese government's AI taskforce to resolve, or at least take the first steps toward fleshing out, these sorts of issues. So there are a lot of different pieces here, right? There's the piece about AI and consciousness and intelligence that we were discussing at the beginning of this call. There are the hardcore AI algorithms for learning how to learn, reflecting and reasoning, which we're working on in the OpenCog open-source AGI project. There's connecting together many different AIs to form a collective intelligence, which the SingularityNET platform enables. And there's the question of how you make money from this for all the AI developers around the world, including those in the developing world who can't get a job at Google or Baidu—which SingularityNET's decentralized, participatory mechanism enables…

How do you encourage this AI to be nice to people as it becomes super-intelligent, and how do you work it into all the regulatory systems around the planet? The hard part is, you've got to work out all these things together, in an interlocking way, pretty fast, because this could all come to a head in the next five to ten years—not 500 years. So it's an exciting and dizzying time to be alive and involved in all these things. I'm hoping everyone listening to this is either involved in helping with this, or will become involved. This is part of why I'm so into open source and crypto and decentralization: I think we need more brains, hearts, minds and souls pushing on making AGI come out in a positive way, and we need to build tools that can leverage the contributions of as many different people as possible.

Okay, well that is a lot to think about, Ben. Why don't we leave it there. We're out of time, but I want to thank you for a fascinating show. We covered a lot of ground and I applaud what you're doing.

Well yeah, thanks a lot. It would take more than an hour to plumb the depths of all these issues, perhaps, but once this network is larger and flourishing, then perhaps we will talk again and see what things have come.

Let's do it.