Episode 93: A Conversation with Pieter Abbeel

In this episode Byron speaks with Berkeley Robotic Learning Lab Director Pieter Abbeel about the nature of AI, the problems with creating intelligence and the forward trajectory of AI research.


Guest

Professor Pieter Abbeel is Director of the Berkeley Robot Learning Lab and Co-Director of the Berkeley Artificial Intelligence Research (BAIR) Lab. Abbeel's research strives to build ever more intelligent systems, with his lab pushing the frontiers of deep reinforcement learning, deep imitation learning, deep unsupervised learning, transfer learning, meta-learning, and learning to learn, as well as studying the influence of AI on society. His lab also investigates how AI could advance other science and engineering disciplines. Abbeel's Intro to AI class has been taken by over 100K students through edX, and his Deep RL and Deep Unsupervised Learning materials are standard references for AI researchers. Abbeel has founded three companies: Gradescope (AI to help teachers with grading homework and exams), Covariant (AI for robotic automation of warehouses and factories), and Berkeley Open Arms (low-cost, highly capable 7-DOF robot arms). He advises many AI and robotics start-ups and is a frequently sought-after speaker worldwide for C-suite sessions on the future and strategy of AI. Abbeel has received many awards and honors, including the PECASE, NSF CAREER, ONR-YIP, DARPA-YFA, and TR35. His work is frequently featured in the press, including the New York Times, Wall Street Journal, BBC, Rolling Stone, Wired, and MIT Technology Review.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today I'm super excited: we have Pieter Abbeel. He is a professor at UC Berkeley. He's the president, founder and chief scientist of Covariant.ai. He is the founder of Gradescope. He holds an undergraduate degree in electrical engineering, an MS from Stanford, a PhD from Stanford in computer science, and probably a whole lot more. This is gonna be an exciting half hour. Welcome to the show, Pieter.

Pieter Abbeel: Thanks for having me Byron.

There are all these concepts like life and death and intelligence that we don't really have consensus definitions for. Why can't we come up with a definition of what intelligence is?

Yeah, it's a good question. I feel like traditionally we think of intelligence as the things that computers can't do yet. And then all of a sudden when we manage to do it, we understand how it works and we don't think of it as that intelligent anymore, right? It used to be, okay, if we can make a computer play checkers, that would make it intelligent, and then later we're like, 'Wait, that's not enough to be intelligent,' and we keep moving the bar, which is good to challenge ourselves, but yeah, it's hard to put something very precise on it.

Maybe the way I tend to think of it is that there are a few properties you really want to be true for something to be intelligent, and maybe the main one is the ability to adapt to new environments and achieve something meaningful in new environments that the system has never been in.

So I'm still really interested in this question of why we can't define it. Maybe... you don't have any thoughts on it, but my first reaction would be: if there's a term you can't define, maybe whatever it is doesn't actually exist. It doesn't exist; there's no such thing, and that's why you can't define it. Is it possible that there's no such thing as intelligence? Is it a useful concept in any way?

So I definitely think it's a useful concept. I mean, we definitely have certain metrics related to it that matter. If we think about it as an absolute, is it intelligent or not, then it's very hard. But I think we do have an understanding of what makes something more intelligent versus less intelligent. Even though we might not call something intelligent just because it can play checkers, it's still more intelligent when it's able to play checkers than when it's not. It's still more intelligent if, let's say, it can navigate an unknown building and find something in that building, than when it cannot. It's more intelligent if it can acquire the skill to play a new game it's never seen before, where you just present it with the rules and it figures out on its own how to play well. Which is essentially what AlphaGo Zero did, right? It was given the rules of the game but then just played itself to figure out how to play it maximally well. And so I think all of those things can definitely be seen as more intelligent if you can do them than if you cannot do them.
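
To make that self-play idea concrete, here is a minimal illustrative sketch, not AlphaGo Zero itself (which combines deep neural networks with Monte Carlo tree search) and not code from the conversation: a toy agent that is given only the rules of a small take-away game and improves purely by playing against itself with tabular Q-learning.

```python
# Toy self-play sketch (illustrative only): the learner is given nothing but the
# rules of a small take-away game and improves by playing against itself.
# Rules: a pile of stones, players alternate taking 1-3 stones, whoever takes
# the last stone wins.
import random
from collections import defaultdict

N_STONES = 10
Q = defaultdict(float)            # Q[(stones_left, action)] learned from self-play
ALPHA, EPS = 0.5, 0.2             # learning rate, exploration rate

def legal_actions(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones, explore=True):
    acts = legal_actions(stones)
    if explore and random.random() < EPS:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(stones, a)])

for _ in range(20000):
    stones, player, history = N_STONES, 0, []
    while stones > 0:
        action = choose(stones)
        history.append((player, stones, action))
        stones -= action
        player = 1 - player
    winner = 1 - player           # whoever took the last stone
    # Monte Carlo update: reward +1 for the winner's moves, -1 for the loser's.
    for who, state, action in history:
        target = 1.0 if who == winner else -1.0
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])

# The greedy policy typically recovers the known optimal strategy for this game:
# take (stones % 4) stones whenever that is a legal (nonzero) move.
print({s: choose(s, explore=False) for s in range(1, N_STONES + 1)})
```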

So we have, of course, narrow intelligence, to use this construct, which is an AI that we train to do one thing. And right now a technique we're using, and having some success with, is machine learning, a method which philosophically says "Let's take data about the past and project it into the future."

And then there's this idea of a general intelligence, which is something as versatile as you and me; it's what we see in the movies. Is it possible those two technologies have nothing in common at all, that they share no code whatsoever? Because there's a vague sense that we get better and better at narrow AI, and then it gets a little less narrow, then it's AlphaGo, then it's AlphaGo Zero, then it's AlphaGo Zero Plus, and eventually it's... how? But is it possible they aren't even related at all?

That's a good question. I think the thing about more specialized systems, whether it's, let's say, learning to play games or a robot learning to manipulate objects, which we do a lot of at Berkeley, is that what we can get to succeed today tends to be somewhat narrow. If a neural net was trained to play Go, that's what it does; if it was trained to stack Lego blocks, that's what it does. But I think at the same time, the techniques we tend to work on, and by 'we' I mean not just me and my students but the entire community, are techniques where we have a sense that they would be more generally applicable than the domains we're currently able to achieve success in.

So for example, we look at reinforcement learning and its underlying principles. We could look at the individual successes, where a neural net was trained through reinforcement learning for a very specific task, and of course those neural nets are very specific to their domains, whether games or robotics or some other domain, and within those domains to very specific tasks, like the game of Go or Lego block stacking or peg insertion and so forth.

But I think the beauty still is that these ideas are quite general, in that the same algorithm can be run again to have a robot learn to, maybe, clean up a table. And so I think there is a level of generality 'under the hood' in the training of these neural nets, even if the resulting neural net often ends up being a little specialized.
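
A minimal sketch of that "same algorithm, different task" point follows. It is not code from Abbeel's lab: one generic learning routine, here a tiny cross-entropy method over a linear policy, is reused unchanged on two made-up toy tasks. The generality lives in the `train` routine, even though each trained policy ends up specialized to its own task.

```python
# Illustrative sketch only: one generic learner applied, unchanged, to two toy tasks.
import numpy as np

class CorridorEnv:
    """Toy task: walk right along a corridor to reach the goal at position 5."""
    def reset(self):
        self.pos = 0
        return np.array([self.pos], dtype=float)
    def step(self, action):                      # action 1 = right, 0 = left
        self.pos += 1 if action == 1 else -1
        done = abs(self.pos) >= 5
        reward = 1.0 if self.pos >= 5 else 0.0
        return np.array([self.pos], dtype=float), reward, done

class SwitchEnv:
    """Toy task: press the switch (action 0) only when the light (obs) is on."""
    def reset(self):
        self.t = 0
        return np.array([0.0])
    def step(self, action):
        light_on = self.t % 2 == 1
        reward = 1.0 if (light_on and action == 0) else 0.0
        self.t += 1
        return np.array([float(self.t % 2)]), reward, self.t >= 10

def run_episode(env, w):
    obs, total = env.reset(), 0.0
    for _ in range(20):
        action = int(w[0] + w[1] * obs[0] > 0)   # linear policy -> action in {0, 1}
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
    return total

def train(env, iterations=30, pop=50, elite=10):
    """Generic cross-entropy-method learner: works for any env with reset()/step()."""
    mu, sigma = np.zeros(2), np.ones(2)
    for _ in range(iterations):
        ws = np.random.randn(pop, 2) * sigma + mu
        scores = np.array([run_episode(env, w) for w in ws])
        best = ws[np.argsort(scores)[-elite:]]
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mu

# The same `train` call, with no task-specific changes, handles both tasks.
print("corridor policy weights:", train(CorridorEnv()))
print("switch policy weights:  ", train(SwitchEnv()))
```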

However, you know, I just heard an interview you gave where you were talking about the case where, if you gave a narrow AI a bunch of data about planetary motion, it could predict the next eclipse and the next one and the next million. But if all of a sudden a new moon appeared around Jupiter and you said "What's that going to do to planetary motion?" it wouldn't know, because all it can do is take data about the past and make predictions about the future. And isn't that simple idea, take data about the past, make projections about the future, not really the essence of what intelligence is about?

Yeah. So what you're getting at here, to be fair, is not something that humans figured out very easily either. I mean, it's only when Newton came about that we started as humanity to understand that there is this thing called gravity, and it has laws, and it governs how planets and stars and so forth move around in space. And so it's one of those things where, definitely right now, I suspect that if we just gave a massive neural network (without putting any prior information in there about what we already learned about how the world works) a bunch of data about planetary motion, it's not very likely it would discover that.

I think it's not unreasonable that that's hard to do, because humans didn't discover it until very late either, in terms of the timeline of our civilization, and it took a very exceptional person at that time to figure it out. But I do think those are the kinds of things that are good motivators for the work we do, because what it points to is something called Occam's Razor, which says that the simplest explanation of the data is often the one that will generalize the best. Of course, defining 'simple' is not easy to do, but there is a general notion that the fewer equations you might need and the fewer variables involved, the simpler the explanation, and so the more likely it is to generalize to new situations.

And so I think the laws of physics are kind of an extreme, really nice example of coming up with a very, very simple, low-dimensional description of a very large range of phenomena. And no, I don't think neural nets have done that yet. I mean, of course there's work going in that direction, but often people will build in the assumptions and say, "Oh, it does better when it has the assumptions built in." That's not a bad thing to solve one problem, but it's not necessarily the way you have intelligence emerge in the sense that we might want it to emerge.
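
As a small illustration of the Occam's Razor point (my own toy example, not one discussed in the conversation): when several models fit the same noisy observations of a simple underlying law, the one with fewer free parameters usually generalizes better to new data.

```python
# Toy illustration: fit noisy observations of a simple "law" with models of
# increasing complexity. The over-parameterized fit wins on training error but
# usually loses on new data.
import numpy as np

rng = np.random.default_rng(0)

def true_law(x):                       # a simple, low-dimensional underlying law
    return 1.5 * x**2 - 2.0 * x + 0.5

x_train = rng.uniform(-1, 1, 20)
y_train = true_law(x_train) + rng.normal(0, 0.1, x_train.size)
x_new = rng.uniform(-1, 1, 200)        # "new situations" the model must generalize to
y_new = true_law(x_new) + rng.normal(0, 0.1, x_new.size)

for degree in (1, 2, 10):              # fewer vs. more free "variables"
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_mse = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.4f}, new-data MSE {new_mse:.4f}")

# Typically: degree 1 underfits, degree 10 has the lowest training error but a
# worse new-data error than degree 2 -- the simplest adequate explanation wins.
```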

So we are intelligent, and we have these brains that we don't understand. You're probably familiar with the nematode worm, this worm that's got a 302-neuron brain, and people in the OpenWorm project have spent 20 years trying to model those 302 neurons and produce the intelligence of the nematode worm, and we're not even sure if that's possible. So we have these brains, and then we have minds, these emergent qualities of the brain that our other organs don't seem to exhibit. Your liver doesn't have a sense of humor, your stomach doesn't have emotions, and we don't understand what that is.

And then we have consciousness—we experience the world, we feel warmth, we don't just measure temperature, and a general intelligence you could say might have to have all of that. And we don't know how we do it. And it seems to me that the only argument is that well, we do it, and we're just machines and therefore a machine can do it. Do you know any other arguments that suggest we can build an AGI other than that simple bit of logic? We can do it, we are machines, therefore machines can do it?

Well, that's the argument I tend to use: a brain is storage and compute, plus of course sensory inputs and ways to affect the world. And so if a machine has enough storage and compute, has sensory inputs to see the world, and has some kind of output to affect the world, then in principle it should be able to be intelligent, if somehow that storage and compute is set up the right way.

Right. So to be clear, if somebody said, "Can you make an argument that a general intelligence is possible without resorting to spiritualism?" could you make such an argument? Maybe it's a quantum effect, like Penrose says, or it's strong emergence, and so we're not going to be able to build it. Are there other scientifically grounded theories of our intelligence that would preclude machines from being intelligent?

Personally, I don't see any reason why it wouldn't be possible to build it in an artificial system.

So if that were true... you know, I think consciousness, the idea that we experience the world, is what they say is the last great scientific question: we don't know how to ask it scientifically, and we don't actually know what the answer would look like. It's hard to imagine a whiteboard equation that explains why we can experience warmth as opposed to measuring temperature. So does it not give you any pause that we have this very unique capability, seemingly unique, that we don't understand, that science has a hard time wrapping its head around, and yet we're confident we'll be able to build it someday?

I think, of course, there's something there to be figured out. I'm not saying we can just easily build it today, and I'm not saying we can easily explain this aspect today. But just going back to the first principles of, okay, what do we need to have intelligence, sentience: storage, compute, of course the right program that runs on that compute and storage, and sensors and actuators. And then of course many parts are not that well understood yet, but you know, that's true in so many disciplines; we don't understand everything ahead of time. That's why you do a lot of research, and then you make progress.

The last question I'll ask you along these lines is about the range of guesses on when we'll get a general intelligence. The range from people on the show is 5 to 500 years, where, if there's a median, it's probably 30 years out. Interestingly, it always seems to be 30 years out. I mean, it was probably 30 years out in the '90s. Where would you guess?

Yeah, I don't have a very precise guess on that I think.

But isn't that interesting? Because if I said, "When will we get to Mars?" there's not a 500-year gap between the opposing estimates of when people think we'll get to Mars, or when we'll do these other things. So isn't it like if I dropped my clothes off at my dry cleaner and said, "When will these be ready?" and they said, "5 to 500 days"? I'd get a new dry cleaner. So why is that even an acceptable range of answers? And then what would be your answer?

So I think the reason it's acceptable is because, essentially, what we've seen is that the way artificial intelligence progresses is through some combination of fundamental new insights that change how we do things, added on to a lot of incremental improvements, and when a lot of people make incremental improvements, all of a sudden it adds up to pretty significant advances. But it's not very clear how much is missing.

It's clear we're making a lot of progress, but it's just not clear what the gap is. It's also not clear to what extent the gap is a sequence of conceptual breakthroughs that we need to go through to get there, versus us being fundamentally limited by other things like, let's say, compute. For sure in the past we were. I mean, if you look at some of the early deep learning work (it wasn't called deep learning then), in the '50s people had neural nets, in the '60s people had neural nets, and in the '80s there was a lot of work on neural nets. But the results were not nearly as good.

And you might want to know: why were the results not so good? Were people not smart? People were plenty smart; it was just that if you wanted to run an experiment, it would take a very long time to finally get the results. Then you could analyze them, draw a conclusion, and set up your next experiment. The cycle was just very, very long, which makes it very hard to make fast progress. And so recently, essentially, we've gotten compute that's good enough to make fast progress on a certain range of problems. Image recognition we can make fast progress on because the amount of compute we have matches up well with what's needed to make the conceptual breakthroughs. The same is true for language modeling, and to some extent also for reinforcement learning, imitation learning and so forth.

But there might be missing pieces if you look at where the research is headed. A lot of it is headed towards learning to learn, or meta-learning, or transfer learning, kind of three versions of roughly the same thing. And there you have to drastically scale up the amount of compute that's going to be needed to get the job done, because instead of trying to master one task, you're all of a sudden trying to master a family of tasks in one learning session, and the larger the family of tasks, the better you expect the results to be. And if you look at those results for now, they're on very narrow families of tasks.

And I think at some point we can really scale up to very wide families of tasks, and that will lead to very significant breakthroughs and new capabilities that we don't expect: quickly learning new skills in new environments. But it's very unclear how wide a family of tasks would need to be, and how much compute would be required to run learning on all of that. And so I think that's where a lot of the uncertainty comes from: one, it's not clear how many more breakthroughs are needed, and two, for some of the breakthroughs we're going to need later, it's not clear how much compute we're going to need to have a meaningful research cycle and make consistent progress. That's where the range comes from in my mind.
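
A minimal sketch of that "master a family of tasks" idea (my own toy illustration, not code from Abbeel's lab): a first-order, Reptile-style meta-learning loop that trains a shared initialization across a hypothetical family of 1-D regression tasks, so that a few gradient steps adapt it to a new task drawn from the same family.

```python
# Toy meta-learning sketch (illustrative only; Reptile-style, first order):
# instead of mastering one task, learn an initialization across a *family*
# of regression tasks y = a*x + b, so a few gradient steps adapt it to any
# new task drawn from that family.
import numpy as np

rng = np.random.default_rng(0)

def sample_task():
    """Each task in the family has its own slope and intercept."""
    a, b = rng.uniform(0.5, 2.0), rng.uniform(-1.0, 1.0)
    x = rng.uniform(-1.0, 1.0, 10)
    return x, a * x + b

def adapt(theta, x, y, steps=5, lr=0.1):
    """Inner loop: plain gradient descent on one task, from initialization theta."""
    w, b = theta
    for _ in range(steps):
        err = w * x + b - y
        w -= lr * 2.0 * np.mean(err * x)
        b -= lr * 2.0 * np.mean(err)
    return np.array([w, b])

# Outer loop: nudge the shared initialization toward each task's adapted weights.
theta = np.zeros(2)
for _ in range(2000):
    x, y = sample_task()
    theta += 0.05 * (adapt(theta, x, y) - theta)   # Reptile-style meta-update

# A never-seen task from the same family now adapts quickly from theta.
x_new, y_new = sample_task()
w_new, b_new = adapt(theta, x_new, y_new)
print("meta-learned initialization:", theta)
print("new-task MSE after 5 adaptation steps:",
      np.mean((w_new * x_new + b_new - y_new) ** 2))
```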

But even if that were the case, and we don't know how much compute we're going to need to do this, even if we only knew it to within two orders of magnitude, you'd still get it down to a couple of decades. Right?

It all depends. I mean, we've had Moore's Law doubling compute every three years, but some people debate whether that will continue. There are fundamental limitations that can come up. People tend to ask me, "Are the algorithms going to keep accelerating?" Right? Because that's kind of what this question is about. We've seen so many surprising breakthroughs in the past 5 to 10 years. Everything moved faster than expected, right? And the question is, "Will it keep moving faster than we expected?" And then maybe you end up with a five... I mean, five years seems really early to me.

I'm not betting on five years, but you would end up with earlier timelines if you think it's only going to go even faster because there are more people in the field, more ideas, more things being run. And I think near term that's actually gonna be true: it's only going to move faster, because more people are joining the field and better training is available through all kinds of online courses and so forth.

But the underlying engine for this is often having more compute and more data. And then we need to ask, okay, how long are we going to keep having more data? That seems like no problem in many ways. But how long will we keep having more and more compute? That's where things become a lot more uncertain.

Yeah, I mean, I think the early estimates stem from beliefs about how close we are to a generalized unsupervised learner; the degree to which the original Dartmouth assumptions from 1956 are true, that intelligence is like Newtonian physics, a few simple laws, so we're going to find them, then they're going to iterate, and boom, it's all going to happen; and how complicated things like creativity really are. But I hear you.

So let me take a whole different tack. I'm an optimist, as anybody who listens to this show or reads my writing knows. I believe in the power of technology to increase human productivity, and I think good always comes from that. And when you hear about AI, you get all of these various scenarios. You get "Well, with the power of AI, governments can now listen to every conversation, and they can follow every person, they can recognize every face and they can model everything, and we're at an end of privacy." Or you get "Wow, AI is going to empower people to be so productive, very few people are going to need to work, and we'll have a world of abundance like Star Trek." And then you get "AI is gonna be so powerful that it eliminates every job, not just simple ones but every one: doctor, lawyer, architect, politician, preacher, everything." And then you get "AI is going to fight wars, you know, we're gonna have killer robots." And so there are all of these competing narratives for what's going to happen.

And I'm curious, when you look in your crystal ball (nothing about general intelligence, just what we know how to do now), what keeps you up at night, and what keeps you going and doing all the work you're doing?

Yeah, I think there are a lot of positives and a lot of potential negatives that we need to make sure to avoid. So let me start with some of the positives. The notion of making car accidents something of the past, that to me is very powerful. There's no need for people to get into car accidents once we build really good self-driving cars. Even more exciting to me, and a direction we're starting to look at quite a bit now in my lab at Berkeley, is the notion that other scientific disciplines can be really empowered by AI.

So of course robotics is powered by AI and is part of the AI discipline, and it will ensure that we get access to physical goods more readily than we could otherwise; things should be cheaper, and we'll get them the next minute instead of the next hour, and so forth. All these things are exciting, but I think there is also this whole other area where AI can help other scientific disciplines.

So to make this very concrete (though I think it's gonna go well beyond these concrete examples): I just worked on a project where AI helps with electric circuit design, worked on a project where AI helps with database query processing, and am now working on a project where AI will essentially help with semi-supervised learning for biological problems. So there's a lot of opportunity there for AI to not just work in the traditional disciplines we think about, where AI replaces humans because it can see now, it can talk and it can move, but also to drive a lot of the science breakthroughs in principle, or to do it together with people, empowering them to ask higher-level questions and have the AI let loose on those higher-level questions while it fills in the details.

And so I see a lot of opportunity there that will really affect society in a great way. But I also agree with all the negatives that you bring up, in the sense that we should be very aware of them and we should be careful about them. And it's one of those things where, at a high level, I think there are a few categories. There is the negative where people just overlook something, they just make a mistake: they ask the AI to do something they thought was a good thing to ask, but it's actually not a good thing, and the AI is really good at doing what it's tasked to do and then over-optimizes, like the paperclip production example. It might occur in smaller situations too. But then also, I think even more worrying, or just as worrying, is that it gives a great amount of power.

So the paperclip example is one where you ask the AI for something you think is gonna help you, but you're actually asking for the wrong thing. But I think just as worrying, if not more, is that AI can give a lot of power to people, more so than a person on their own could ever have without those tools available.

And so, as you alluded to, things like listening to every conversation and making sense of it: that's a pretty powerful thing that needs to be wielded with care. The same goes for the ability to track people everywhere and so forth. Those are the kinds of things that are contributing unprecedented capabilities, and I think as a society we need to be really thoughtful about how we go about getting the benefits but not the downsides of this very powerful technology.

So let's switch gears. I'm curious to hear about your work. Can you talk a little bit about what you're doing at Covariant.ai or some of the work you're doing at the University?

Absolutely. So right now I have two main hats: I'm a professor at Berkeley and also founder of Covariant.ai. The things I'm most excited about at Berkeley right now... the general theme is research on building ever more intelligent systems. And what that means in practice is a lot of work on reinforcement learning, unsupervised learning and especially the junction of both of those, and then also a lot of work on meta-learning and transfer learning. That's kind of what we think of as the foundational aspect, in some sense. And then of course we love to look at robotics as a really good test domain, as a reality check. It's very easy to think you've made a lot of progress, and then when you try to see if a real robot can do something in a real environment, you quickly realize the results weren't so great, because the simulator was a lot simpler than the real world is ever going to be. And so we love robotics as our kind of reality check, and we also do a lot of work now pushing AI to empower scientific and engineering work that might happen in other disciplines.

And so that's a big part of the push there. And then at Covariant, we have a very different perspective. The research at a place like Berkeley and other research institutions is a lot about going from something that was never done before to all of a sudden doing it, maybe with a 20% success rate or a 40% success rate; the zero-to-nonzero kind of aspect. At Covariant, we kind of saw these kinds of successes in the space of robot learning, and we concluded it's the right time to bring robot learning into the real world, where the real world for now means factories and warehouses, where robots can help out in a variety of situations if they are empowered by current AI, compared to more traditional automation, which is really about repeated motion and very simple kinds of scenarios that can be automated.

But we really think that that's going to change drastically, that it is drastically changing already, and that robots that can see, that can adapt, can do so much more and help out so much more than the blind robots of the past possibly could.

So final question: if people want to keep up with you and all you're working on, what are the best ways to do that?

So the best ways are a couple of things. Twitter, @pabbeel, is my handle, and I frequently post things there. Another place to look is my website; if you just Google my name, Pieter Abbeel, my website should come up pretty high. And that might be especially relevant if you're looking to get up to speed in some of the domains that we talked about. There are links to the classes I've been teaching at Berkeley: there's a deep learning class link, there's a robotics class link, a reinforcement learning bootcamp, an unsupervised learning class (it's brand new, just this past semester, first offering), and there are full-stack Deep Learning bootcamp lectures. So there's a wide range of lectures and homework that's all available for anybody to just come and learn from, and then from there, take it to the next level.

Thank you so much for being on the show. It was a fascinating half hour.

Thank you Byron. My pleasure.