Episode 81: A Conversation with Siraj Raval

In this episode, Byron and Siraj Raval discuss machine learning, teaching AI, and the use of AI for the betterment of the world.


Guest

Our mission is to offer a world-class AI education to anyone on Earth for free. Our doors are open to all those who wish to learn. We are a learning community, spanning almost every country, dedicated to teaching our students how to make a positive impact in the world using AI technology, whether that's through employment or entrepreneurship. We are now the largest AI community in the world!

www.youtube.com/c/sirajraval

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese. Today my guest is Siraj Raval. He is the director of the School of AI. He holds a degree in computer science from Columbia University. Welcome to the show, Siraj.

Siraj Raval: Thank you so much for having me, Byron.

I always like to start off with just definitions. What is artificial intelligence and specifically what's artificial about it?

That's a great question. So, AI, artificial intelligence, is actually… I like to think of it as a giant circle. I'm a very visual person, so just imagine a giant circle, and we'll label that circle AI, okay? Inside of the circle there are smaller circles, and these would be the subfields of AI. One of them would be heuristics. These are statistical techniques to try to play games a little better.

When Garry Kasparov was defeated by Deep Blue, that was using heuristics. There's another bubble inside of this bigger AI bubble called machine learning, and that's really the hottest area of AI right now; that's all about learning from data. So there's heuristics; there's learning from data, which is machine learning; and there's deep learning as well, which is a smaller bubble inside of machine learning. So AI is a very broad term, and people in computer science are always arguing about what is AI and what isn't. But for me, I like to keep it simple. I think of AI as any kind of machine that mimics human intelligence in some way.

Well hold on a minute though, you can't say artificial intelligence is a machine that mimics human intelligence because you're just defining the word with what we're trying to get at. So what's intelligence?

That's a great question. Intelligence is the ability to learn and apply knowledge. And we have a lot of it. Well, some of us anyway (just kidding).

That's interesting. Because of AlphaGo, the emphasis on being able to learn is a pretty high bar. Something like my cat food dish that refills itself when the cat eats all the food, that isn't intelligent in your book, right? It's not learning anything new. Is that true?

Yeah. So it's not learning. There has to be some kind of feedback, some kind of response to stimulus, whether that's from data or a statistical technique based on the number of wins versus losses: did this work, did this not work? It's got to have this feedback loop where something external to it is affecting it, in the way that we perceive the world: something external to our heads affects how we act in the world.
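As an aside, that "wins versus losses" feedback loop can be sketched in a few lines of Python. This is a minimal epsilon-greedy bandit, purely illustrative, with made-up reward rates:

```python
# An agent that adjusts which of two moves it prefers based purely on
# external feedback: did past choices work or not? Rates are invented.
import random

wins, plays = [0, 0], [0, 0]
true_rates = [0.3, 0.6]   # the hidden "world" the agent must learn

for _ in range(1000):
    if random.random() < 0.1:   # occasionally explore a random move
        arm = random.randrange(2)
    else:                       # otherwise exploit the best estimate so far
        rates = [wins[i] / plays[i] if plays[i] else 0.0 for i in range(2)]
        arm = rates.index(max(rates))
    reward = random.random() < true_rates[arm]   # feedback from outside
    plays[arm] += 1
    wins[arm] += int(reward)

print(plays)   # the better move ends up played far more often
```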

So [take] the smartest program in the world. Once it's instantiated as a single program, it is no longer intelligent. Is that true? Because it stopped learning at that point. It can be as sophisticated as can be, but in your mind, if it's not learning something new, it's not intelligent.

That's a good question. Well, I mean, the point at which it would not need to learn, or there would be nothing for it to learn, would be the point at which, to get 'out there,' it saturates the entire universe.

Well, no. I mean, let's take AlphaGo. Let's say they decide, 'Let's put out an iPhone version of Go,' and they take the latest and greatest version of it and make a great program that plays Go. At that point it is no longer AI, if we rigidly follow your definition, because it has stopped learning; it's now frozen in capability. I can play it a thousand times, and in game 1,001 it's not doing any better.

Sure. Okay, but to stick to my rigid definition, I’ve said that intelligence is the ability to learn and apply knowledge.

Right.

And it would still be doing the latter part: applying knowledge.

Do you think that it’s artificial in that it isn't really intelligence, it just looks like it? Is what a computer does actually intelligent or is it mimicking intelligence? Or is there a difference between those two things?

There are different kinds of intelligences in the world. I mean, think of it like a symphony of intelligences. Our intelligence is really good at doing a huge range of tasks, but a dog has a certain type of intelligence that keeps it more aware of some things than we would be, right? Dogs have superhuman hearing capability, so in that way a dog is more intelligent than us for that specific task. So when we say 'artificial intelligence,' you know, talking about the AlphaGo example, that algorithm is better than any human on the planet at that specific task. It's a different kind of intelligence. 'Foreign,' 'alien,' 'artificial': all of those words would kind of describe its capability.

You're the Director of the School of AI. What is that? Tell me the mission and what you're doing.

Sure. So I've been making educational videos about AI on YouTube for the past couple of years, and I had the idea, about nine months ago, to have a call to action for people who watch my videos. And I had this idea of saying, 'Let's start an initiative where I'm not the only one teaching, but there are other people, and we'll call ourselves the School of AI, and we'll have one mission, which is to teach people how to use AI technology for the betterment of humanity, for free.'

And so we're a non-profit initiative. And since then, we have what are called 'deans,' 800 of them spread out across 400 cities globally. And they're teaching people in their local communities, from Harare, Zimbabwe to Zurich to parts of South America. It's a global community. They're building their local Schools of AI, you know, School of AI Barcelona, what have you, and it's been an amazing, amazing couple of months. It feels like every day I wake up, I look in our Slack channel, I see a picture of a bunch of students in, say, Mexico City, at our school there, with our logo, and it's like, "Is this real?" But it is real. Yeah, it's been a lot of fun so far.

Put some flesh on those bones. What does it mean to learn… what are people learning to do?

Right. So the guidelines we're following, when we talk about the betterment of humanity, are the 17 Sustainable Development Goals (SDGs) outlined by the United Nations. Some of them would be no extreme poverty, sustainable action on the climate, things like that. Basically, trying to fulfill the basic needs of humans in both developed and developing countries so that eventually we can all reach that stage of self-actualization and be able to contribute and create and discover, which is what I think we humans are best at, not doing trivial, laborious, repetitive tasks. That's what machines are good for. So if we can teach our students, we call them 'wizards,' if we can teach our wizards how to use the technology to automate all of that away, then we can get to a world where all of us are contributing to the betterment and progress of our species, whether it's in science or art, etcetera.

But specifically, what are people learning to do like on a day to day basis?

One example would be classifying images. That's a very generic example, but we can use it to, say, help farmers in parts of South Africa detect plants that are diseased or not diseased. Another example would be anomaly detection, so, kind of finding the needle in the haystack: what here doesn't fit in with the rest? And that can be applied to fraud detection, right? If you've got thousands and thousands of transactions, and one of them is a fraud, an AI can learn what fraud is better than any human could, because there's just so much data. Those are just two; I could give some more. There's quite a lot, but I think that...
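To make the fraud example concrete, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest. The transaction amounts are synthetic, invented purely for illustration:

```python
# Find the one transaction that doesn't fit in with the rest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(loc=50, scale=10, size=(1000, 1))  # typical amounts
fraud = np.array([[5000.0]])                           # one outlier
transactions = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.001, random_state=0)
labels = model.fit_predict(transactions)  # -1 = anomaly, 1 = normal
print(np.where(labels == -1)[0])          # index of the suspected fraud
```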

No, but I mean, what's the goal… is it the idea that there just aren't enough people who have the basic skills to "do AI," and you're trying to fill that gap?

That is what it is. And yeah, the concepts behind this technology, the mathematical concepts, I don't believe are accessible yet to a wide enough audience. So we at School of AI are trying to broaden that audience and make it accessible not just to developers but eventually to everybody. You know, moms, dads, grandmas, grandpas, people who aren't the most technical. We're trying to reach them and make this something that everybody does, because we sincerely believe that this is going to be a part of our lives, and eventually everybody is going to be implementing AI in some way or another.

It doesn't necessarily have to be code. It can be through some application or some kind of 'drag and drop' interface, but it's definitely in the future of work. So yes, that's what it is. And also, it's the fact that we are facing so many huge, daunting problems as a species, existential threats, and we think we might not be good enough alone to solve these problems. Climate change, for example: a lot of people think that it's too late to solve climate change, but we have a huge amount of data available, and we think that the answers to some of the hardest problems related to CO2 emissions, and how we can allocate resources toward that goal, lie hidden in that data, and using AI we can find them.

I want to get to all of that in a moment. I'm still trying to understand what a student actually learns to do here. Is there a structured curriculum you work through? Is it videos that you watch? Is it Coursera-style, like Andrew Ng's AI course? What does a 'day to day' look like for a student who's trying to absorb all this information?

It's a great question. Right now we have several courses, and the most popular course that our students are into is my 'Data Science in Three Months' course. That three-month course takes people from beginner, never having coded before, to being able to start applying to entry-level jobs as a data analyst or in any kind of data-related role. So that's the one right now, and inside of that course, we start off with a mathematical foundation for what data science is, which involves concepts like probability theory and statistics.

And then we move on to applications, like using a data set to, say, detect what parts of a budget are the best for a specific goal: financial allocation, decision making, path planning, routing algorithms. Most of the major applications we use at this point use AI somewhere. So the question is now leaning towards what is not AI at this point, when it comes to the applications we're using in our day-to-day lives.

But is there like a curriculum you work through from beginning to end, and then some certification that sits at the end?

Yes.

Tell me what that certification is, how long it takes to get, and what, in your mind, it qualifies you to do.

Sure. It's a three-month curriculum called 'Data Science in Three Months.' There's a midterm and there's a final, and we're grading it; it's me and a team of seven instructors. It's online at the School of AI and on my YouTube channel as well. We're just now starting to see some of these certifications come up as students complete the course, and it allows you to start applying for jobs. Then it's the equivalent of brand association.

You know, we're trying to build a really solid curriculum, a good brand, and then the idea is that these employers would say, "Oh, School of AI, I know about that. It's a great course. Sure, if this person took it there, then they know what they're doing." We're definitely trying to keep the quality high. But it's also a decentralized school. We're in 400 cities, so it's not just 'Data Science in Three Months,' the curriculum that I created. These deans are creating their own curricula in their own languages, in Spanish [for example]. Reinforcement learning is another course, [AlphaGo's] 'Move 37'; we just finished that. And there's one on decentralized applications, technology that combines AI and blockchain together to create financial applications that were not possible before: tokenizing assets, using AI to allocate how machines and humans pay each other, and what that relationship looks like. But there is always a certification at the end of whatever curriculum it is, and jobs are definitely something that makes us happy to see our students get as an end goal, whether it's a job or a research position or working at an NGO, just bettering their life.

So your mission is to use AI to solve these problems, but I assume most of the students who consume the content are looking to acquire skills that make them more employable. Is that true, or am I wrong?

Yeah, you are right, yes. Yes.

So what's the glue between all these people who are learning these skills and mastering them, and advancing the mission that you were talking about a little earlier?

As a society, we haven't properly approximated the relationship between value and capital yet. You know, there are companies that are putting tons and tons of sludge into the oceans, and they're getting paid a lot to do that. That's one example. Meanwhile, there are people who are volunteering in their local communities, and they're not getting paid anything, but they should be, because they're benefiting other people's lives.

But we think with advances in blockchain technology and this idea of tokenization, which has been such a crazy, wild, 'dot-com bubble' kind of deal, there is some sliver of hope that we can more closely approximate the relationship between value and capital. And that would be, in this case, creating a startup that would help with one of the Sustainable Development Goals and be profitable. And we're starting to see that happen now. And that's where we're trying to go: looking for that perfect intersection between social impact and profitability.

You're saying that you're trying to do that as a company or you're empowering your graduates to do that?

Exactly. We’re empowering our graduates to do that.

One of the things that's made AI advance so much, in my view, you hear all the normal things: we get more data and faster computers and all of that. But the toolkits are simply becoming better and more robust and more numerous, and you've got all kinds of things that make it much simpler. But it's still reasonably hard to…

Right.

But the technology is changing so much. Do you think your courses have an inherent shelf life, that they're all going to constantly have to be redone, because the tools are going to become so high-level very quickly that the amount of stuff people have to know to use the technology should decline?

Yeah, no, definitely: my videos, my courses, it all has a shelf life. You know, if I were to put a number on it, I would probably say maybe two to five years, max.

Taking your 17 problem areas, tell me how you inspire us with how AI can be used to solve some of them.

Sure. So the SDGs, as we so lovingly call them, are very ambitious goals: no poverty, clean water and sanitation, gender equality. One of them is quality education, which is one that I'm particularly passionate about. And right now, education is an expensive endeavor. We need look no further than our own country. There are a lot of students… I have friends who have a lot of student loans from college. It's not a good thing. It prevents them from fulfilling their full potential.

What we can do is use AI to provide a personalized education to people for a much lower cost. One example would be an AI teacher. It knows you. It learns from you. It learns what you like. You give it some feedback, and it will learn what your problem areas are and then give you suggestions that are hyper-personalized. Human teachers have to spread their attention across the classroom, but an AI can do both: spread its attention and hyper-focus on the individual. So that would be one: an AI for education, a chatbot for education.

There are a lot of people who are concerned that the technology can be used just as efficiently to pursue goals that are far less noble than that. How do you wrap your head around that?

Yeah. So I mean, technology is like fire. It can be used to burn us or to give us warmth. A tool is just a tool; it all depends on how we use it. So one of our values at School of AI is 'Choose love, not fear.' We have seven values. We're trying to not just teach this technology. The facts are just facts; people can go on Google and search for whatever it is they need, steps one through six, and there we go, how to do it. But what we're trying to do is embed a set of values into this ecosystem that points people in the right direction of what to do with this technology.

As to your supposition about somebody using AI for a less noble cause: of course that's going to happen. It's already happening, in China and in the US and all over the world. It's happening right now. What can we do about it? Well, we can become more aware of how it's being used to exploit people. That's really the first step, and we're trying to do that: to make people aware of it and hopefully avoid any kind of disaster-like scenario.

So your mission, according to the website, is to offer a world-class AI education to anyone on Earth for free. And yet, does your business, the School of AI, get revenue from another source? Or, you said it's a non-profit, does it rely on donations? How are you funding that mission?

Great question. I have my own business, which is my YouTube videos, and currently I am funding it all myself. So sometimes, maybe once a month, I'll say yes to a potential client to do a sponsored video. The last one was Intel. And that's really the main revenue source. There are also YouTube ads. I don't really like the idea of asking for donations. I didn't really think too much about it until we started getting requests for funds from different chapters of ours, for swag and things like that, venue space, and I just gave it to them directly, because I'm not trying to ask people for donations; it's not how I work. So I would rather just make that money personally and then continue to fund the non-profit initiative.

Do you worry that that approach though, might slow your growth and leave out people who want to support it?

Do I worry that it would slow my growth? I am always worried about the fact that we're not growing fast enough, so that worries me. But just in general, yes, I'm worried. And yeah, I mean, I'm sure there are people who want to support it, and at some point we can start asking for donations, I'm sure. I just haven't had the bandwidth to set that up, but I will. We will.

What are some of the challenges you have found when you have been doing this?

Good question. One of them would be just keeping everybody on the same page. We have deans that speak over 120 different languages, and there are cultural differences. It's a lot to try to unite the community. It takes a lot of time and attention and energy and focus from me to do that.

And so far, one challenge has been not appreciating certain people in the community enough. That's my own fault, and something that I've learned in the past three months is how to better value your team and how to empower your team. I'm learning this. So that's been a big challenge: working with people, basically. Working with people is going to be a challenge because it's not something that I'm used to. I usually just make my YouTube videos by myself and I'm in my zone, but now it's not just me, so...

And is this what you do with all of your waking hours, or do you have other things that keep you occupied professionally? Is this your sole endeavor?

This is my sole endeavor. Yes, this is my sole endeavor.

And you have something like half a million subscribers on YouTube or something like that?

Yeah, we're about to hit half a million, actually, in like six hours, so I'm very happy.

And so, when you come out with… how often are you putting out new videos?

I do three a week. Three videos a week on my YouTube channel.

And, what would be some of the topics? Give our listeners some examples of what would be coming out last week or this week or next week.

Sure. So my latest video just came out yesterday. It's called 'An AI that Dresses Itself.' A team at Google Brain and Georgia Tech created this animated character that puts a shirt on itself. It looks very realistic. It's kind of uncanny, but it's very cool, and it's actually also very difficult to understand. I mean, the paper was very difficult, but what I did was synthesize how it works, turn it into explainable concepts, and then show the applications, which the paper didn't show, because scientists aren't the best at showing the applications of what they discovered.

And so, in that case, it would be for gaming, for fashion, or for elder care, you know. One example would be people with ALS. Over 30 million people suffer from ALS, which is a disease you get when you're older that makes it so you can die from bending over, or even from doing a weird contortion. So perhaps we can have robots that help people in assisted living facilities dress themselves, which would give them a more dignified and independent life.

There are a lot of ways that we can use this technology that we didn't originally think about. It just takes a little bit of time and belief in its power to do good. And one more thing: today, I'm actually really excited, because I have a video on AI in China coming out in 10 hours, which covers so many topics, from social credit scoring to algorithmic policing to Confucianism and a whole different set of values. They're not Western, but they're still dominating the space. It was really fascinating for me to deep dive into China and what their intentions are, and why they're going so [all] in on AI compared to every other country, including us.

You mentioned social credit. Explain that to our listeners, and weigh in on what, from an AI perspective, they're trying to do, how they might go about it, and all the rest.

Sure. So in China, they have set up this system called the social credit scoring system, and any action that a citizen takes that is able to be recorded will be recorded, whether that's through surveillance cameras, and there are a lot [of them] being installed, thousands and thousands and thousands, or booking a train, or using a service like WeChat, for example.

WeChat is an incredible app. We have nothing like it in the US. It's like WhatsApp on steroids. Not only can you message people, you can buy train tickets, invest in stocks, use the social network, and pay your mortgage every month. And that gives the government a lot of data, because they control WeChat, so they can see literally everything.

So the social credit scoring system is the government's initiative to take all of your actions and compile them into a single number that represents your reputation. [That] reputation then governs what you are and are not able to do: everything from going to a certain place to getting a certain discount on a product. And I mean, we have something like it in the US, we have credit scores, but it's nowhere near as Orwellian as it is in China.

And then the second part of your question was how AI is being used, for surveillance for example. AI is being used to detect who you are. Facial detection, that's AI: it has learned how to detect the features of a face, and then it uses that to identify who you are, find your social credit score, and then watch you, see what you do. And if you do the wrong or right thing, depending on the values that they set, they'll change your credit score.
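For the technically curious, the identify-then-look-up pipeline Siraj describes can be sketched in a few lines with the open-source `face_recognition` library. This is a minimal illustration, not how any actual system is built; the image files and the reputation table are hypothetical.

```python
# Sketch of the pipeline: encode facial features, match against known
# people, then look up a score. Filenames and scores are hypothetical.
import face_recognition

# A tiny "database" of one known face and a made-up reputation score.
known_image = face_recognition.load_image_file("alice.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]
scores = {"alice": 742}  # hypothetical reputation scores

# Encode every face in a new camera frame and compare to the database.
frame = face_recognition.load_image_file("camera_frame.jpg")
for encoding in face_recognition.face_encodings(frame):
    match = face_recognition.compare_faces([known_encoding], encoding)
    if match[0]:
        print("Identified alice, score:", scores["alice"])
```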

So you made an oblique reference earlier, I think, to an artificial general intelligence, just in passing, something about as long as we don't kill ourselves with it along the way. Obviously, the School of AI is teaching people the very nuts and bolts of narrow AI, and specifically of machine learning. What do you think about a general intelligence? Do you think you'll live to see it, for instance?

That's a great question. When I was traveling through India for the first time… I was born in the US, but my parents are from there, so it was a very interesting experience, traveling the entire country for six months. I saw so much poverty and pollution, things that I never saw growing up in the US, and it really struck me. I came to this conclusion: 'If we're going to actually solve these massive, massive problems in our lifetime, we're going to need an artificial general intelligence to do it. Something that's just thousands of times smarter than our entire species combined, because these are some really hard problems.'

Now that I've spent about three years studying and teaching all this stuff, I'm not as, I guess, reliant on this external 'AGI god' to solve everything, because I can see that behind all AI there's really a human. The human is the one who trains this model, the statistical model. They decide what the data is going to be. They decide what the objectives for the model are going to be. The human intent is always going to be there. There will always be a human in the loop somehow.

And on a somewhat related note, four weeks ago I visited D-Wave [Systems] in Vancouver. I started learning about quantum mechanics, and now I'm not even so sure that consciousness is completely coming from the brain. Maybe there are some entanglement properties there. Maybe there is a universal consciousness. We don't know. I don't know. I'm not making any claims, but I am saying that I don't think reality is as simple as I thought it was before, when it was like, 'Oh, just create an AGI and it'll solve everything.' We can't just offload that responsibility. It's got to come from us.

It's interesting you remarked about consciousness, because it seems you're implying that a general intelligence would have to have consciousness. Do you think that's the case, that it would have to experience? It would have to be able to 'feel warmth' as opposed to [just] measuring temperature. Do you believe that to be the case?

Exactly. Exactly. I 100% believe that and we don't know...

That a general intelligence would have to become conscious to…

I 100% believe that, yeah. It would have to be conscious.

Why is that? Let me set the problem up a little better. People say we don't know what consciousness is. That isn't really technically true. We know what it is; we just don't know how it comes about. What it is, is the experience of being you. It is the fact that you experience the world. You feel warmth; a computer can only measure temperature.

Now, some would maintain that intelligence and consciousness don't have anything to do with each other. For instance, we've all had the experience of driving, and then you kind of space out, and a minute later you snap to and think, 'Oh, I don't even remember driving.' While you were doing that, you were acting intelligently: you were merging with traffic and signaling and changing lanes and all of that. You were able to act intelligently without experiencing the world.

Now, there are a lot of problems with that, but it's trying to set up the question: could a zombie be intelligent? Could something that just doesn't experience the world be intelligent? A robot going through its day-to-day life, as it were, might be able to mimic your behavior perfectly but never actually feel the emotions that you feel. Interestingly, you say no, that robot could not mimic what you do unless it could experience the pain and heartache that you feel.

No. Okay, well put. So let me just say this: yes, a robot could mimic what I do without experiencing consciousness in the way that I do. I do think that. But the reason that's not contradicting what I said earlier is that when I define AGI, it has to include consciousness for me, because to me, an artificial general intelligence is equivalent to a human intelligence in every way except the medium that it lives on. So while we're biological, it could live on silicon or, you know, graphene or some other medium. But I wouldn't consider it a general intelligence unless it encompassed every aspect of what we are.

But then at that moment, it would have rights the same as we have.

That's a great question. Yeah.

And then you couldn't ethically program it to do anything, right? Because then all you've done is bring back a problem from our past, right? You can't essentially enslave...

You couldn't? You could enslave it, though. I mean, have you seen Black Mirror?

No. I’m just saying that ethically, if it experiences the world, you can't necessarily make it plunge your toilet, right?

You could but ethically speaking…

Ethically.

You're right. Ethically…

So the minute it becomes conscious, it actually becomes far less useful to us. Now I can’t send it in to defuse the bomb anymore.

Yes.

So you really don't?

Yeah. Well, if its values are misaligned with our own. Yeah, sure.

Well, now if it has any use at all, you can't. It has rights co-equal with our own at that point, doesn't it?

If we created an artificial general intelligence that experienced consciousness the way that we do, we would have absolutely no say in the future of anything. It would be a god, because it could scale itself in a way that we couldn't.

It's an interesting thesis. That's Nick Bostrom's thesis, basically: that it'll have an IQ of a hundred, then a thousand, then ten thousand, then a million, then... I don't think he puts it quite that way, but the idea is it would eventually be "so smart" that we're not even on its radar. We don't even matter. Aside from that being the plot of a movie, what evidence do you have that that's even possible? For instance, your iPhone, or your smartphone of choice, doesn't exhibit any of that kind of behavior, right?

It does not. No.

You know, we don't want to do this thing of reasoning from fictional evidence, where we see Black Mirror enough and we say, "Yeah, it could happen, that could really happen." So what, as an AI person, do you think the argument is that that can ever happen?

Well, one theory of consciousness is that it just requires a sufficient level of complexity before consciousness starts to emerge. So maybe we could imagine consciousness as a spectrum: rats have a certain level, but if you go up from that you get the dog, and further up, the human. And right now, computers are below rats. You've just got to keep leveling up the layers of complexity. In terms of the models that we know about in AI, specifically a neural network, add more layers, more data, and create level after level of abstraction, of hierarchy, until we get to emotions, and then we just keep going from there. And then maybe even extend past that so it can be even more conscious than we are, which could be the case.
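To make the "keep adding layers" idea concrete, here is a minimal PyTorch sketch, purely illustrative, in which each extra hidden layer adds one more level of abstraction over the raw input. The layer sizes are arbitrary assumptions.

```python
# Each nn.Linear + ReLU pair is one more "level of hierarchy" over the input.
import torch
import torch.nn as nn

def make_net(depth: int) -> nn.Sequential:
    """Stack `depth` hidden layers; more depth means more levels of abstraction."""
    width = 128
    layers = [nn.Linear(784, width), nn.ReLU()]  # e.g., a flattened 28x28 image
    for _ in range(depth):
        layers += [nn.Linear(width, width), nn.ReLU()]
    layers.append(nn.Linear(width, 10))          # 10 output classes
    return nn.Sequential(*layers)

shallow, deep = make_net(1), make_net(8)
x = torch.randn(1, 784)                          # a fake input
print(shallow(x).shape, deep(x).shape)           # both: torch.Size([1, 10])
```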

Right. I mean could the internet already be there though? I mean, it…

The internet could be, except it's too decentralized. There's not really an overall order.

Fair enough. So, how do you net it all out? When I asked whether you think you'll live to see a general intelligence, you said, "I no longer believe it's necessary to solve all of our problems." Do you think we're actually going to build one?

I think we're going to instead enhance our own intelligence, rather than create this external intelligence. And we’re doing that by...

So this trick we've learned, that we've had good luck with recently, machine learning, says, "Let's take a lot of data about the past and let's study it to make projections into the future," right?

Yes. Yes.

Do you think that is going to get us… it only works when the future is like the past, right? The cat tomorrow looks like the cat today, right? You take data about the past, you study it, you make projections into the future. That's all it does. And [for] some things, that's easy to do, like chess: we have a lot of data about the past, the rules of the game don't change, and so we can project into the future.
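That "study the past, project the future" trick is, mechanically, just supervised learning. A minimal sketch, on synthetic data invented purely for illustration:

```python
# Fit a model on yesterday's labeled examples, then predict on today's.
# This only works while today still follows yesterday's rule.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
past_X = rng.normal(size=(200, 2))        # yesterday's observations
past_y = (past_X[:, 0] > 0).astype(int)   # a stable underlying rule

model = LogisticRegression().fit(past_X, past_y)

today_X = rng.normal(size=(5, 2))         # new, unseen observations
print(model.predict(today_X))             # projections into the future
```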

But there's a lot of things that don't necessarily behave that way. So is machine learning really all that extensible of a trick? For instance, could machine learning solve for the Turing test, in your view?

So, at Google's last Google I/O, they demoed their barbershop AI [Duplex], the entire pipeline of calling up the barbershop, scheduling the appointment, saying "ummm" to mimic…

Right. But if you ask it, "What's bigger: a nickel or the sun?" it wouldn't know.

True. Okay. So, a general Turing test.

Right. Anything. Like, I could say this: Dr. Smith was eating lunch at his favorite restaurant when he got a phone call. Looking worried, he jumped up and ran out the door, forgetting to pay his tab. Will management prosecute him? Would they? Do you think?

That's a good one.

He's a doctor. He just got a call and ran out without paying his bill. Are they gonna call the police? It’s his favorite restaurant so they know him there, right?

I don't think they would call the police.

Right. So how much would an AI have to know to be able to answer that? Like, "Oh, he's a doctor. That means he probably got an emergency call. That would make sense. That's why he got worried and ran out. Oh, and he comes there a lot." So many layers of knowledge. Do you think there's enough data, enough words in the world, in the universe, that an AI could just study every prior conversation and answer that question?

I mean, in theory, yes. I mean, we're doing the same thing, right? We're making these predictions based on past data. We don't have access to future data.

Well, no, but we do something machines can't, which is transfer learning, and we do it effortlessly. Let's say you're out fishing in a river and you catch a trout, okay? And then you go home, and you're a scientist, and you put that trout in a jar of formaldehyde. A week passes, and I ask you the following questions. So, consider the fish when it was in the river, then consider the fish now. Is it at the same temperature?

Probably not.

Is it the same weight?

Probably not.

Does it smell the same?

No.

See, I could go all day long, and you've never done that before, but you would know exactly which attributes transfer over and which ones don't. And you can do it across any number of things. And so I've always just felt that how we do conversation just doesn't look anything like machine learning. That's why every Turing test candidate I've ever seen, when I ask, "What's bigger: a nickel or the sun?", none of them has ever gotten it.

But people get it instantly; you know, a child can do it. People can do this thing. It's very interesting. If you show a little kid, a four-year-old, four photographs of cats, and then you're out walking around and you see one of those Manx cats, you know, the ones without the tail, that kid will say, "Look! There's a cat without a tail." And yet nobody ever told him there was such a thing as a cat without a tail; every cat he saw had a tail. We somehow come pre-wired to do all of this, and I wonder if machines are able to do so. Do you think, with a big enough machine learning model studying a big enough corpus of data, you could pass the Turing test?

Yes. That's the short answer. But also, to that point you're making about transfer learning in children: yeah, we definitely have some pre-wired evolutionary traits in our heads, and right now, a lot of the time when we're creating these AI models, we randomly initialize the weights, the starting point of the model itself. A bunch of ones and zeros.

But ideally, we'd have smarter weight initialization, which is kind of analogous to us being born. We don't have a random weight initialization in our heads; we have a smarter initialization based on, you know, I could have arachnophobia because my ancestor almost got killed by a spider, or whatever. We definitely need to build better transfer learning techniques. I think smarter weight initialization is one way forward there.
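A minimal PyTorch sketch of both ideas Siraj mentions: smarter weight initialization (Xavier, which scales initial weights to the layer size rather than drawing them blindly), and transfer learning (starting from weights "pre-wired" by ImageNet training instead of from scratch). The two-class fine-tuning target is an assumption for illustration.

```python
import torch.nn as nn
from torchvision import models

# (1) Smarter initialization: Xavier instead of naive random weights.
layer = nn.Linear(784, 128)
nn.init.xavier_uniform_(layer.weight)

# (2) Transfer learning: reuse a pretrained ResNet's starting point,
# freeze its features, and retrain only the final layer for a new task.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in net.parameters():
    param.requires_grad = False              # keep the pretrained "instincts"
net.fc = nn.Linear(net.fc.in_features, 2)    # e.g., two new classes
```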

Don't you think it's interesting that we don't know how thoughts are encoded in the brain? Like, if I said, "Hey Siraj, what color was your first bicycle?" you could probably answer that. Is there some part of your brain that stores the colors of bicycles, or is there no place in your brain where that's stored? Computers have just always struck me as so different from a person, from a brain, that I even think calling them 'neural nets' is kind of a cheat, because I don't think the brain behaves anything like that. You even say the computer 'thinks,' when I don't even know whether computers think or not. You seem more optimistic than I am about mapping human abilities through these techniques, but I will assure all of our listeners that you're very much in the majority on this point and I'm very much in the minority.

But what about how special people are… human creativity. Do you get that from machine learning?

You can get creative outputs. There are projects, like Magenta by Google, that are trying to create models that will generate music compositions.

Yeah. You can train them on Bach and they can make passable Bach, and you can feed them a photograph and they'll make it impressionistic or something, but do you actually see a computer writing, you know, the Harry Potter novels?

It's definitely possible. I mean, we've already used AI to create TV scripts, the words acted out by that Silicon Valley actor, which, I mean, okay, yes, it wasn't that great a script, I'll admit, but that was a starting point. I want to clarify that I do think there should and will be a human in the loop. So when it comes to the creative process, we shouldn't be thinking about it like the robots are going to be more creative than us. It's a co-evolutionary process.

One example would be my friend Taryn Southern. She created the first pop album made completely using AI. She would suggest a certain pattern of beats to the system, and it would modify it and suggest something back to her in a different way she hadn't thought about before, and then she would use that to go back and suggest something else. So it's a back-and-forth, co-evolutionary creative process that we're going to start seeing more of, not just in the music domain but in any kind of artistic or creative domain.

So you don't believe humans exhibit any behaviors that aren't… See, I have always thought that machine learning is really not a particularly broad tool, that it works very well for certain kinds of problems, you know, like games, where there's a constrained universe and a finite number of choices, but I've always suspected we're going to need completely different techniques to get a lot of human capabilities mapped over. So do you think there's any human behavior, and I'm not calling consciousness a behavior, any human ability, that your gut tells you we won't be able to replicate using this trick of studying data about the past and projecting it into the future?

My gut tells me that, no, there is nothing we won't be able to recreate. Yeah, except what you talked about: consciousness.

All right. Well, I think we're coming up on time here. So let's look at two things. One, how do people find out more about School of AI and how do people find out more about you and what you're doing? Start with School of AI. Where do they go?

It's the website, 'TheSchool.ai,' and then for me, if you just search Siraj on YouTube, you'll find my channel. Siraj Raval is my name. And, yeah, I release three new videos every week.

All right. Well, Siraj, it has been a fascinating 50 minutes chatting about all of this. And I wish you the best of luck. It sounds like a noble pursuit.

Sure. Thanks so much, Byron.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.