Episode 85: A Conversation with Ilya Sutskever

In this episode, Byron and Ilya Sutskever of OpenAI talk about the future of general intelligence and the ramifications of building a computer smarter than us.


Guest

Ilya Sutskever is a computer scientist working in machine learning and currently serving as the chief scientist of OpenAI. Sutskever obtained his B.Sc., M.Sc., and Ph.D. in computer science from the University of Toronto's Department of Computer Science under the supervision of Geoffrey Hinton.

After graduating in 2012, Sutskever spent two months as a postdoc with Andrew Ng at Stanford University. He then returned to the University of Toronto and joined Hinton's new research company DNNResearch, a spinoff of Hinton's research group. Four months later, in March 2013, Google acquired DNNResearch and hired Sutskever as a research scientist at Google Brain. At Google Brain, Sutskever worked with Oriol Vinyals and Quoc Le to create the sequence-to-sequence learning algorithm. In 2015, Sutskever was named in MIT Technology Review's 35 Innovators Under 35. At the end of 2015 he left Google to become the director of the newly founded OpenAI. Sutskever was a keynote speaker at NVIDIA NTECH 2018 and the AI Frontiers Conference 2018.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm and I'm Byron Reese. Today my guest is Ilya Sutskever. He is the co-founder and the chief scientist at OpenAI, one of the most fascinating institutions on the face of this planet. Welcome to the show Ilya.

Ilya Sutskever: Great to be here.

Just to bring the listeners up to speed, talk a little bit about what OpenAI is, what its mission is, and kind of where it's at. Set the scene for us of what OpenAI does.

Great, for sure. The best way to describe OpenAI is this: at OpenAI we take the long-term view that eventually computers will become as smart as or smarter than humans in every single way. We don't know when it's going to happen, some number of years, something [like] tens of years, it's unknown. And the goal of OpenAI is to make sure that when this does happen, when computers which are smarter than humans are built, when AGI is built, then its benefits will be widely distributed. We want it to be a beneficial event, and that's the goal of OpenAI.

And so we were founded three years ago, and since then we've been doing a lot of work in three different areas. We've done a lot of work in AI capabilities, and over the past three years we've done a lot of work we are very proud of. Some of the notable highlights are: our Dota results, where we had the first and very convincing demonstration of an agent playing a real-time strategy game, trained with reinforcement learning with no human data. We've also trained robot hands to re-orient a block. This was really cool; it was cool to see it transfer [from simulation to the real robot].

And recently we've released GPT-2, a very large language model which can generate very realistic text as well as solve lots of different NLP problems [with] a very high level of accuracy. And so this has been our work on capabilities.

Another thrust of the work that we are doing is AI safety, which at [its] core is the problem of finding ways of communicating a very complicated reward function to an agent, so that the agent we build can achieve goals with great competence while taking human values and preferences into account. And so we've done a significant amount of work there as well.
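
To make the reward-specification problem concrete, here is a minimal, purely hypothetical sketch; it is not OpenAI's safety code, and the function names, numbers, and "cleaning" scenario are invented for illustration. The idea is simply that a hand-coded task reward rarely captures everything humans care about, so it can be shaped by a stand-in for a learned human-preference score.

```python
# Hypothetical sketch only: a naive hand-coded reward plus a stand-in for a
# learned human-preference model. Names and numbers are illustrative.

def task_reward(dishes_cleaned: int, time_taken: float) -> float:
    """Naive hand-coded reward: clean as many dishes as fast as possible."""
    return dishes_cleaned - 0.01 * time_taken

def preference_score(state: dict) -> float:
    """Stand-in for a model trained on human feedback; here just a toy rule."""
    # Penalize outcomes a human would object to, e.g. broken dishes.
    return -10.0 * state.get("broken_dishes", 0)

def combined_reward(dishes_cleaned: int, time_taken: float, state: dict) -> float:
    """Task reward shaped by the (assumed) human-preference signal."""
    return task_reward(dishes_cleaned, time_taken) + preference_score(state)

# Speed alone would look good; the preference term makes the agent care about
# what the human actually wanted.
print(combined_reward(12, 300.0, {"broken_dishes": 2}))
```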

And the third line of work we're doing is AI policy, where we basically have a number of really good people thinking hard about what kind of policies should be designed and how governments and other institutions should respond to the fact that AI is improving pretty rapidly. But overall our goal, eventually the end game of the field, is that AGI will be built. The goal of OpenAI is to make sure that the development of AGI will be a positive event and that its benefits are widely distributed.

So 99.9% of all the money that goes into AI is working on specific narrow AI projects. I tried to get an idea of how many people are actually working on AGI and I find that to be an incredibly tiny number. There's you guys, maybe you would say Carnegie Mellon, maybe Google, there's a handful, but is my sense of that wrong? Or do you think there are lots of groups of people who are actually explicitly trying to build a general intelligence?

So explicitly... OK, a great question. Most people, most research labs, do indeed not have this as their explicit goal, but I think that the work of many people indirectly contributes to this. For example, much better learning algorithms, better network architectures, better optimization methods, all tools which are classically categorized as conventional machine learning, are also likely to be directly contributing to this...

Well let's stop there for a second, because I noticed you changed your word there to "likely." Do you still think it's an open question whether narrow AI, whatever technologies we have that do that, is it an open question whether that has anything to do with general intelligence, or is it still the case that a general intelligence might have absolutely nothing to do with backpropagation, neural nets and machine learning?

So I think that's very highly unlikely. Sorry, I want to make it clear: I think that the tools that the field of machine learning is developing today, such as deep networks and backpropagation, are immensely powerful tools, and I think that it is likely that they will stay with us, with the field, for a long time, all the way until we build true general intelligence. At the same time I also believe, I want to emphasize, that important missing pieces exist and we haven't figured out everything. But I think that deep learning has proven itself to be so versatile and so powerful, and it's basically been exceeding our expectations at every turn. And so for these reasons I feel that deep learning is going to stay with us.

Well let's talk about that though, because one could summarize the techniques we have right now as: let's take a lot of data about the past, let's look for patterns in that data and let's make predictions about the future, which isn't all that exciting when you say it like that. It's just that we've gotten very good at it.

But why do you believe that method is the solution to things like creativity, intuition, emotion and all of these kinds of human abilities? It seems, at an intuitive level, that if you want to teach a machine to play Dota or Go or whatever, yeah, that works great. But really when you come down to human-level intelligence with its versatility, with transfer learning, with all the things we do effortlessly, it's not even... it doesn't seem at first glance to be a match. So why do you suspect that it is?

Well I mean I can tell you how I look at it. So for example you mentioned intuition as one thing which - so you used a certain phrase to describe the current tools, where you kind of look for patterns in the past data and you use that to make predictions about the future, and therefore it sounds not exciting. But I don't know if I'd agree with that statement. And on the question of intuition, I can tell you a story about AlphaGo. So... if you look at how AlphaGo works, there is a convolutional neural network.

OK, actually let me give you a better analogy - so I believe there is a book by Malcolm Gladwell where he talks about experts, and one of the things that he has to say about experts is that an expert, as a result of all their practice, can look at a very complicated situation and instantly tell, like, the three most important things in this situation. And then they think really hard about which of those things is really important. And apparently the same thing happens with Go players, where a Go player might look at the board and then instantly see the most important moves and then do a little bit of thinking about those moves. And like I said, instantly seeing what those moves are, this is their intuition. And so I think that it's basically unquestionable that the neural network that's inside AlphaGo captures this intuition very well. So I think it's not correct to say that intuition cannot be captured... and so actually I'll let you speak a bit more.
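
As a rough illustration of what "intuition" means here, the sketch below lets a toy, randomly initialized linear "policy" stand in for AlphaGo's actual convolutional network (which is not reproduced here): one forward pass ranks every point on the board, so a search procedure only has to think hard about the top few candidates. All weights and numbers are made up for illustration.

```python
# Illustrative sketch only, not AlphaGo's architecture: "intuition" as a single
# forward pass that scores all moves, so search focuses on the top candidates.
import numpy as np

rng = np.random.default_rng(0)
board = rng.integers(-1, 2, size=(19, 19))            # -1, 0, 1 = white / empty / black
W = rng.normal(scale=0.01, size=(19 * 19, 19 * 19))   # toy policy weights

logits = board.reshape(-1) @ W                         # one forward pass over the board
probs = np.exp(logits - logits.max())
probs /= probs.sum()                                   # softmax over all 361 points

top_moves = np.argsort(probs)[-3:][::-1]               # the "instantly seen" moves
print([(divmod(int(m), 19), float(probs[m])) for m in top_moves])
# A search procedure would now spend its compute only on these few moves.
```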

So when I think of like Harry Potter or Lin-Manuel Miranda's Hamilton and I want you to imagine the day a computer could write something like that, I wouldn't immediately say, "Well why don't I just study everything that's ever been written before and use that to somehow make projections of what might be good in the future?" If I think of Banksy, the graffiti artist or something, I don't know that I would make a digital Banksy with machine learning.

Well I do agree with you that today, at least right now, we don't know how to do super-amazing creativity like the one you describe, although it's also important to keep some of the good examples we do have in mind, like the ability of computers to generate images and text; there's GPT-2, and cartoon images, and it is getting better.

You are right that right now it looks like we are not generating anything close to truly amazing art. But it's also true that real artists must study the work of past masters very carefully and really internalize all the tools and techniques before they get to a point where they can really add their innovative twist. It's not like they just go from zero to 60. But I also agree with you that there is an important capability that humans have that our [AI] systems still do not, that we haven't discovered yet. What I do believe quite strongly is that the deep learning tools that we have are truly, deeply powerful. And I think it is likely that these tools, or some form of these tools, are basically going to stay in the field until the end, until general intelligence is built.

Fair enough. So in 1956 when a group of people convened at Dartmouth to solve AI, they thought they could do it in a summer. And that assumption was based on the idea that intelligence might have just a few very simple laws like Newton's laws or Maxwell's laws, that it might be very simple.

Also if you look at DNA, you know the amount of DNA you share with a chimp: 98.9%. So if DNA has 600 megabytes of data, the amount by which you're different from a chimp may be just a few hundred K, and yet it's that difference which... that isn't general intelligence, but that small difference is what can produce general intelligence.

So I'm wondering: is there any chance that general intelligence is actually quite a bit simpler than the complexity of the ever more complex systems that we're building? Is it impossible it's just a really simple thing?

So I think the answer is yes. I also want to say two things: first of all, I think the answer is yes, it is possibly that simple. But I want to object to the statement that we are building ever more complex systems. I think that in some settings, in some situations, the engineering and infrastructure around the system can be quite complex. For example, when I think about all the engineering that went into our Dotabot, or into our robot hand, the engineering there was quite complex, even though the machine learning core was simple.

When you want to solve a really hard problem of great complexity, with a huge number of details and special cases, it's not possible to program all that. We're not that good at programming. We have to use a fairly simple machine learning system and then point it very correctly at the problem. So if I were to look at many of the most interesting systems that exist today, they're all very simple.

I also want to comment on that Dartmouth conference where people were very optimistic about AI. I actually had a cool realization about it recently. One interesting question is how come, in the old days, symbolic AI completely overshadowed the machine learning stuff, even though if you look at the ideas that people were discussing at Dartmouth, they were talking about machine learning, and apparently Marvin Minsky himself, who later became a big symbolic AI researcher, started out from learning and neural networks.

I think what happened is that in the '50s the computers were really tiny, so tiny that you couldn't learn anything interesting at all. But you could do cool demos of symbolic AI: you could build a program that could prove little theorems about geometry and solve some calculus problems. And people looked at that and said, "Wow, we can do this after like a year of work! For sure we could do everything."

The only thing that's changed now is that our computers are faster, and so slight variants of the same old learning ideas are starting to work really well. But broadly speaking, I think it is likely that once we understand how general intelligence works, we will find the solution to be simple, and it's almost always like this in science. It's always easy in hindsight. It's always very difficult to understand how people could ever not see it. For example, today we understand deep learning really well, and we understand that if we train a big neural network on a big data set, of course it's going to solve the problem. Of course it can achieve very high performance. Of course it can solve problems that are very difficult, and it's very hard to understand how we ever thought otherwise.

So I agree that it's simple. I think that many of the cool systems that exist today are simple and I think that also includes our Dotabot for example or the GPT-2 language model which we released. They're both very simple systems conceptually.

So humans are our sole example of a general intelligence. We have these brains that we don't really understand, like thoughts aren't stored in your brain the way that they're stored on a hard drive. And the brain gives rise to a mind, which are all of these things that seem a little mysterious, like a sense of humor. You don't think your liver has a sense of humor but your brain does.

So we have this mind we don't really understand, and we have consciousness, which is that we can experience the world. We can sense warmth, we don't just measure temperature. And so we have these capabilities [of] this mysterious brain, this emergent mind and this odd consciousness. And I think the sole reason that people say we can definitely build an AGI, I think the only reason, is that they say, "Look, we don't know how we do it, but we know it's got to be mechanistic. We're just machines... we are simply machines, and therefore someday we'll build a mechanical human or mechanical mind."

So my question to you is this - and I think the reason there's so much disagreement on when we'll get it, why Elon Musk says five years and Andrew Ng says five hundred years, is because nobody knows how to build it. We just assume that if we're machines we can build it. So I want to ask you: are there any data points that suggest we can build an AGI, other than that philosophical assumption that if we are machines, we can build a machine to duplicate what we have? Do we have any data that suggests we can build one?

So I think this is a very good question also, an excellent question. I feel like for sure no one knows how long it's going to take to build AGI, because we don't know the nature of the problems [we're] going to face. On the question of why we think we can build it at all, I agree with you, it fundamentally boils down to this: the human brain is made of matter, and matter obeys certain rules. You've got particles jumping around; you can study how these particles are arranged, and you can try to understand how that system works; and people did that, and the result was neuroscience.

They discovered the neurons, and the discovery of the neurons led to the idea that maybe we could have artificial neurons which could also do learning. And then these ideas... you know, there was a chain of research all the way from the late 19th century to today: first the neuron was discovered by, I believe, Ramón y Cajal, and then McCulloch and Pitts thought that this was a really cool idea and that we could create an artificial model of neurons, and then Rosenblatt came up with the perceptron, where he said OK, not only can we model the neurons, but we can have a learning rule. And the problem, the way I see it, is that you can't have a mathematical proof, a proof where you say, in the same way that two plus two is four, I know that we could build AGI at some point. But I think we can make good arguments.

The argument which you presented is good. That's number one, and number two is the fact that you have computers and the success that we've had with neural networks so far. I think this is an additional point that you can build AGI: if you think about deep learning, backpropagation is the same learning algorithm, but it's used in so many different domains. It's used to do vision, it's used to do speech, it's used to play games and it's used to do language, used to generate text and images. So this suggests some very deep generality. So this is an additional data point.
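
The generality claim can be made concrete with a toy example. The sketch below is plain NumPy, not OpenAI code: the identical two-layer network and the identical gradient-descent update (with the backpropagation gradients written out by hand) are trained on two unrelated synthetic tasks, one loosely "vision-like" and one loosely "language-like". The data and tasks are invented purely for illustration.

```python
# One learning algorithm, two domains: the same network, loss and update rule.
import numpy as np

def train(X, y, hidden=32, steps=2000, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)                               # forward pass
        pred = h @ W2
        err = pred - y                                    # squared-error gradient
        gW2 = h.T @ err / len(X)                          # backpropagate through W2...
        gW1 = X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)  # ...and through the tanh layer
        W1 -= lr * gW1                                    # gradient step
        W2 -= lr * gW2
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))  # final training error

rng = np.random.default_rng(1)
# "Vision-like" task: does a noisy 5x5 image have more mass on its left half?
imgs = rng.normal(size=(500, 25))
y_img = (imgs[:, :12].sum(1) > imgs[:, 13:].sum(1)).astype(float)[:, None]
# "Language-like" task: does a bag-of-words vector contain a target word?
bows = (rng.random((500, 30)) < 0.1).astype(float)
y_bow = bows[:, [7]]

print(train(imgs, y_img), train(bows, y_bow))  # same code, two domains
```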

I think a third data point which suggests that AGI should actually be buildable is the fact that the human brain has very clear computational properties. So for example, if a person damages a particular brain region, they lose a certain capability. There are known brain regions where, if a person damages that brain region, they'll lose their ability to generate coherent language. There's a different brain region where, if a person damages the brain there, they lose their ability to understand language but they don't lose their ability to generate language, so they can say what they want and they can say how they feel and they can say what they think, but if you talk to them, they won't be able to understand you.

So I think this is the class of evidence: the mechanistic nature of the brain, the direct evidence for that in terms of brain damage, the philosophical notion that the brain is made of particles and we can study how the particles are arranged, and the progress we've had with AI so far, in particular with deep learning, and the fact that it's all driven by essentially a very, very small set of ideas.

But of course, even if all of that were true, none of that suggests that we can duplicate it in silicon. I mean, none of it suggests that the computer paradigm that we have now can achieve it. Is that correct or not?

No, I think this evidence does suggest that you could replicate it in silicon, because the fact that the brain is made out of particles suggests that... let's say we give up on everything, we give up on any innovation and talk fully philosophically: in the limit, you could imagine a very, very large computer, a data center the size of a country, let's say, which simulates all the molecules in the brain of a human. And that should be indistinguishable from a human in terms of information processing. So this is an existence proof that computers can build general intelligence.

Then I think the progress that we've made with deep learning is very clearly done on computers. And it suggests further that you have this one very narrow class of methods, a very simple, small set of tools, which can solve a very broad range of problems, a range which is constantly expanding. I think those two arguments suggest that, in fact, it should be replicable in silicon.

But the minute you do make a machine that can experience the world - and consciousness may be a requisite for intelligence - if it can experience the world, i.e. can feel pain, then it kind of instantly has rights, and then all of a sudden you can't in good conscience program it to clean your house. Is that right?

So I think this is an excellent question. And as computers get smarter, we will need to grapple with those questions a lot. So I feel like our thinking on these questions is still very preliminary. I believe there will be a way around the particular problem that you're describing. One of the advantages of building those intelligent systems is that we will have control over their deepest drives; you have some control for sure. And so it is conceivable that you could build a system which is fully intelligent, fully aware, very smart, but its deepest drive is to clean your house and basically make sure that all your needs are met, and it wants to do that, and it becomes unhappy if you don't let it. So you could build a system of this nature, and then there would be a question of whether we want to build those systems. But at least technically there is no reason why such systems couldn't be built.

But that would be akin, wouldn't it be, to brainwashing a human to want to clean your house and then saying, "Well he wants to clean my house. I have to let him."

I don't think it's the same. Instead of calling it "brainwashing," I would say a lot of our drives are given to us by evolution. So the reason we want to eat certain foods and the reason we want to do certain things - some of them come from culture and some of them come from evolution. But it is undeniable that evolution has given us a lot of drives, and evolution could have given us a different set of drives, and it has given other animals a very different set of drives.

So for example there are animals who have drives to dig a burrow in the ground and to hide there, and they really want to do that. So do we feel that it's wrong that those animals do that? Do you see what I mean? What I'm talking about would be more similar to hard-coding a drive the way evolution hard-codes drives in an animal.

So our goal is to build a generalized unsupervised learner. Is that correct? Something we can just point at the web and it can learn everything and understand how everything interacts with everything else. (A) Is that a goal? And (B) how likely is it we'll do that, and will that be some huge breakthrough? I mean, when they programmed Ultron and plugged him into the internet, 15 minutes later he decided to kill the world. So you have to think, 'Well, what do we want to build? Is that what we want: an unsupervised learner?'

So I feel like your question has several parts, and I want to touch on the different parts. On the question of 'Do we want to build a general unsupervised learner?' my response is that unsupervised learning is clearly important and clearly useful, and as our recent language model work shows, it's starting to work and show signs of life. For sure, future intelligent systems, and AGI in particular, will make heavy use of unsupervised learning.
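
For readers unfamiliar with the unsupervised objective behind language models like GPT-2, the sketch below shows the core idea of next-token prediction on raw text: every adjacent pair of tokens is a training example, with no human labels needed. A tiny count-based bigram model stands in for the large neural network, and the toy corpus is invented purely for illustration.

```python
# Minimal sketch of next-token prediction: the text supervises itself.
from collections import Counter, defaultdict

text = "the cat sat on the mat . the dog sat on the rug .".split()

counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):      # every adjacent pair is a training example
    counts[prev][nxt] += 1                 # no labels needed: raw text is the supervision

def next_token_probs(prev):
    """Predicted distribution over the next token given the previous one."""
    c = counts[prev]
    total = sum(c.values())
    return {tok: n / total for tok, n in c.items()}

print(next_token_probs("the"))   # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_token_probs("sat"))   # {'on': 1.0}
```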

And then as to your comment about programming a powerful system that decides to do something very harmful 15 minutes later - well, this is why OpenAI invests so heavily in AI safety today, to make sure that such a thing will not happen in however many years or decades, when we do get to systems with this level of intelligence.

So I guess we're coming up on the end of our time. I'll ask one more question, which is: there is an idea that there's something called a superintelligence. The idea is you build a machine that has an IQ of 100, and then it has an IQ of a thousand, then an IQ of ten thousand, then an IQ of 10 million. And what in the world would something with an IQ of 10 million want, need, care about or whatever? Do you believe in the notion of the superintelligence, or is that just one of those science fiction/thought experiments?

So my take, my view, is that there is nothing special about the level of intelligence of humans, and I think that the notion of superintelligence is plausible. Superintelligent entities, entities which are much smarter than humans, already exist today, and these entities are called corporations. If you think of Google as a single entity - and in some sense it is a single entity - its intelligence and its capabilities are far greater than the capabilities of any single human.

And so there is no reason why, in the far future, many years or decades away, when computers become smarter than humans, a big network of computers would not be much smarter than humans also. I think it's a pretty uncontroversial statement: just as the corporation is already a superintelligence, in the far future when computers are smarter than humans, a network of computers would be a superintelligence in every sense of the word.

All right, well let's leave it there. How can people keep up with you and how can they keep up with what OpenAI is doing? Rattle us off some Twitter handles and URLs and all that.

For sure. So basically if you go to Google and type "OpenAI," you'll find our website, and we have a blog and a Twitter handle, also called "OpenAI," and so all the cool work we do can be found on Twitter and on our blog. In fact, we work really hard to make our blog as accessible as possible, so that you can just read the first paragraph and get 80% of what's going on.

All right, thank you for being on the show.

Thank you very much. It's been a pleasure.