Episode 82: A Conversation with Max Welling

In this episode, Byron and Max Welling of Qualcomm discuss the nature of intelligence and its relationship with intuition, evolution, and need.


Guest

Max Welling is a research chair in Machine Learning at the University of Amsterdam and VP Technologies at Qualcomm. He has a secondary appointment as a senior fellow at the Canadian Institute for Advanced Research (CIFAR). He is co-founder of Scyfer BV, a university spin-off in deep learning which was acquired by Qualcomm in the summer of 2017. In the past he held postdoctoral positions at Caltech (’98-’00), UCL (’00-’01) and the University of Toronto (’01-’03). He received his PhD in ’98 under the supervision of Nobel laureate Prof. G. ‘t Hooft. Max Welling served as associate editor in chief of IEEE TPAMI from 2011-2015 (impact factor 4.8). He has served on the board of the NIPS Foundation (the largest conference in machine learning) since 2015, and was program chair and general chair of NIPS in 2013 and 2014 respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He has served on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. He has received multiple grants from Google, Facebook, Yahoo, NSF, NIH, NWO and ONR-MURI, among which an NSF CAREER grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010. Welling is on the board of the Data Science Research Center in Amsterdam, directs the Amsterdam Machine Learning Lab (AMLAB), and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA).

Transcript


Byron Reese: This is Voices in AI brought to you by GigaOm, and I’m Byron Reese. Today my guest is Max Welling. He is the Vice President, Technologies at Qualcomm. He holds a Ph.D. in theoretical physics from Utrecht University and he's done postdoc work at Caltech, University of Toronto and other places as well. Welcome to the show Max!

Max Welling: Thank you very much.

I always like to start with the question [on] first principles, which is: What is intelligence and why is artificial intelligence artificial? Is it not really intelligent? Or is it? I'll start with that. What is intelligence and why is AI artificial?

Okay. So intelligence is not something that's easily defined in a single sentence. I think there is a whole broad spectrum of possible intelligence, and in fact in artificial systems we are starting to see very different kinds of intelligence. For instance you can think of a search engine as being intelligent in some way, but it's obviously a very different kind of intelligence from a human being, right?

So there’s human intelligence and I guess that's the ability to plan ahead and to analyze the world, to organize information—these kinds of things. But artificial intelligence is artificial because it's sort of in machines not in human brains. That's the only reason why we call it ‘artificial.’ I don't think there is any reason why artificial intelligence couldn't be the same or very similar to human intelligence. I just think that that's a very restricted set of intelligence. And we could imagine having a whole broad spectrum of intelligence in machines.

I'm with you [on] all of that, but maybe because human intelligence is organizing information and planning ahead, and machines are doing something different, like search engines and all that, maybe I should ask the question: What isn't intelligence? I mean at some point, doesn't it lose all its meaning if it's like it's kind of… a lot of stuff? I mean what are we really talking about when we come to intelligence? Are we talking about problem solving? Are we talking about adaptation, or what? Or is it so meaningless that it has no definition?

Well yeah, it depends on how broad you want to define it. I think it's not a very well defined term per se. I mean you could ask yourself whether a fish is intelligent. And I think a fish to some degree is intelligent because you know it has a brain, it processes information, it adapts perhaps a little bit to the environment. So even a fish is intelligent, but clearly it's a lot less intelligent than a human.

So I would say anything that has the purpose of sensing—sort of acquiring information from its environment—and computing from that information to its own benefit. In other words, to survive better is the ultimate goal, or to reproduce maybe is the penultimate goal. And so basically, once you've taken in information and you compute, then you can act—use that information. You can then act on the world in order to bring the world into a state that's more beneficial for you, right? So that you can survive better, reproduce better. So anything that processes information, I would say, in order to reach a goal, in order to achieve a particular goal, which in evolution is reproducing or surviving.

But… in artificial systems it could be something very different. In an artificial system, you could still sense information, you could still compute and process information in order to satisfy your customers—which is like providing them with better search results or something like that. So that's a different goal, but the same phenomenon is underlying it, which is processing information to reach that goal.

Now, you mentioned adaptation and learning, and I think those are super important parts of being intelligent. So a system that can adapt and learn from its environment and from experiences is a system that can keep improving itself and therefore become more intelligent or better at its task, or adapt when the environment is changing.

So these are really important parts of being intelligent, but not necessary ones, because you could imagine a self-driving car being completely pre-programmed. It doesn't adapt, but it still behaves intelligently in the sense that it knows when things are happening, it knows when to overtake other cars, it knows how to avoid collisions, etcetera.

So in short, I think intelligence is actually a very broad spectrum of things. It’s not super well-defined, and of course you can define more narrow things like a human intelligence for instance, or fish intelligence and/or search engine intelligence or something like that, and then it would mean something slightly different.


How far down in simplicity would you extend that? So if you have a pet cat and you have a food bowl that refills itself when it gets empty…it's got a weight sensor, and when the weight sensor shows nothing in there, it opens something up and then fills it. It has a goal which is: keep the cat happy. Is that a primitive kind of artificial intelligence?

It would be a very, very primitive kind of artificial intelligence. Yes.

Fair enough. And then going back centuries before that, I read that the first vending machines, the first coin-operated machines, were built to dispense holy water: you would drop a coin in a slot and the weight of the coin would weigh down a lever that would open a valve and dispense some water, and then, as the water was dispensed, the coin would fall out and the valve would close off again. Is that a really, really primitive artificial intelligence?

Yeah. I don't know. I mean you can drive many of these definitions to an extreme. Clearly this is some kind of mechanism. I guess there is a bit of sensing, because it's sensing the weight of a coin, and then it has a response to that—which is opening something. It's a completely automatic response, and humans actually have many of these reflexes. If you tap your knee with a little hammer like the doctor does, your knee jerks up, and that's actually being done through a nervous pathway that doesn't even reach your brain; I think it's handled somewhere in the back of your spine. So it's very, very, very primitive, but still you could argue it senses something and it acts. It does something, it computes something and it acts. So it's like the very, very most fundamental, simple form of intelligence. Yeah.


So the technique we're using to make a lot of advances in artificial intelligence now is machine learning, and I guess it's really a simple idea: let's study data about the past, let's look for patterns and make projections into the future. How powerful is that technique... what do you think are the inherent limits of that particular way of gaining knowledge and building intelligence?

Well, I think it's kind of interesting if you look at the history of AI. So in the old days, a lot of AI was hard-coding rules. You would think about all the eventualities you could encounter, and for each one of those you would sort of program an automatic response. And those systems did not necessarily look at data in large amounts, from which they would learn patterns and learn to respond.

In other words, it was all up to humans to figure out what the relevant things were to look at, to sense, and how to respond to them. And if you make enough of those, a system like that actually looks like it's behaving quite intelligently. Actually, still nowadays I think a large component of self-driving cars is made of lots and lots of these rules which are hardcoded in the system. And so if you have many, many of these really primitive pieces of intelligence together, they might look like they act quite intelligently.

Now there is a new paradigm. It's always been there, but it has basically become the dominant mainstream in AI. The new paradigm, I would say, is: "Well, why are we actually trying to hand code all of these things we should sense, when basically you can only do this to the level of what the human imagination is actually able to come up with, right?"

So if you think about detecting… let's say whether somebody is suffering from Alzheimer's, from a brain MRI, well you can look at, say, the size of the hippocampus, and it's known that that organ shrinks if you are starting to suffer from memory issues which are correlated with Alzheimer's. So a human can think about that and put this in as a rule, but it turns out that there are many, many more far more subtle patterns in that MRI scan. And if you sum all of those up, then actually you can get a much better prediction.

But humans wouldn't even be able to see those subtle patterns, because it's like: if this brain region and this brain region and this brain region, but not that brain region, have this particular pattern, then you know this is a little bit of evidence in favor of Alzheimer's, and then hundreds and hundreds of those things. So humans lack the imagination, or the capacity, to come up with all of these rules. And we basically discovered that we can just provide a large data set and let the machine itself figure out what these rules are, instead of trying to hand code them in. And this is the big change, for instance, with deep learning as well, [as] in computer vision and speech recognition.

Let's first do computer vision. People had many hand-coded features that they would try to identify in the image, right? And then from there they would make predictions, for instance whether there was a person in the image or something like that. But then we basically said, "Well, let's just throw all the pixels, all the raw pixels, at a neural net. This is a convolutional neural net, and let the neural net figure out what the right features are. Let this neural net learn what the right features are to attend to when it needs to do a certain task." And it works a lot better, again because there are many very subtle patterns that it now learns to look at which humans simply didn't think to look at.
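(To make the contrast concrete, here is a minimal, purely illustrative PyTorch sketch of the kind of convolutional network described above: raw pixels go in and the network learns its own features, with no hand-coded feature extractors. The architecture, layer sizes and 32x32 input are assumptions chosen for the example, not anything specified in the conversation.)

```python
# Illustrative sketch only: a tiny convolutional net that maps raw pixels
# directly to class scores, learning its own features end to end.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Stacked convolutions: early layers pick up edges, deeper layers
        # pick up more abstract patterns. No hand-coded features anywhere.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)                 # learned feature maps
        return self.classifier(h.flatten(1)) # class scores

# Usage: a batch of four 32x32 RGB images in, four score vectors out.
scores = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])
```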

Now another example is AlphaGo, maybe. In AlphaGo something similar happened. Humans have analyzed this game and come up with all sorts of rules of thumb for how to play it. But then AlphaGo figured out things that humans can't comprehend, it's too complex, but those things still made the algorithm win the game.

So I would say it's a new paradigm that goes well beyond trying to hand code human invented features into a system and therefore it's a lot more powerful. And in fact this is also the way of course humans work. And I don't see a real limit to this, right? So if you pump more data through it, in principle you can learn a lot of things—or well basically everything you need to learn in order to become intelligent.


Okay, there are two whole different threads I want to take [from] this. So one of them is what you just said, "this is the way that humans work." I'm going to come back to that statement, so keep that fresh in your head. But the way you explain this, the way you talk about the brain… if this section of the brain, and this one but not this one, and, in your words, hundreds of things all come together to suggest that this person may have dementia, your point is well taken that a human could never have figured that out.

But isn't it by definition therefore also the case that a human can't understand it, and therefore those sorts of systems are inherently unexplainable? And with Go specifically, when it made the move which was highly regarded as… well, anyway, some of the moves, they don't know why AlphaGo made them. It's like we can't quite… there isn't explainability built into that system.


Yeah, I think that's an excellent question. So one response from me would be that we often don't even understand why humans make certain decisions. So if you are out to buy a new home, you visit a whole lot of these homes and you look and you feel how that home feels to you, and maybe you have a certain list of things you want to check. But mostly you're making this decision very intuitively, and if you're then asked why you made this decision, you will come up with some reasons, but it's often [true] that they're not the actual reason why you made the decision. Researchers have compared people who make these decisions intuitively versus trying to approach it logically, and typically you make worse decisions if you really try to make these decisions logically.

So I think even humans don't really understand how and why they make these decisions at an intuitive level. So that's one part of the equation. So yes, when we build very, very complex algorithms that look at these complicated sets of patterns, then we may have to give up on trying to completely understand how a decision was reached. What we can do is try to come up with a proxy for it. So we could say, well, we try to explain in human language the most important reasons why that decision was made. If we can ask the algorithms to do that, I think that would be quite successful. That would be quite similar to asking a doctor, when the doctor makes a diagnosis, "Can you tell me why you made that diagnosis?" And then the doctor will come up with reasons, some explanations, but it might not be the whole picture.

The other option would be… and this actually might become quite necessary because of new legislation in Europe on privacy and explainability… that you say, "Okay, well, maybe I'm just trading in a little bit of performance, or quite a bit of performance, in favor of full explainability." So you then have a model that is quite simple. It doesn't look at all these complex patterns, but in fact it is completely explainable.

So there is this tension, I agree with you, but I think we also need to learn to accept that we don't need to understand everything that we use in our lives. We step into an airplane… I don't know how an airplane works but I still sit in an airplane and I trust it—not because I want to completely understand how the airplane works, but I've seen so many airplanes fly and they don't fall out of the air. So that gives me some trust that maybe this thing just works and I can just trust it without completely understanding it.


I'm in full agreement with everything you said, and I think personally it would be a bit of a shame if there were two teams trying to come up with an AI solution to something, and one of them said your answers must be explainable, and the other one said we don't care, it just needs to be right. Over time you would expect that latter team would pull ahead and kind of constantly do better.

Yeah.

So I want to come back to the comment you made that this is the way humans work, because I think that's a really interesting—and I would argue—open question, because I don't know that humans are data processors like that. I don't know that human creativity, for instance, is necessarily something like that. I don't know if you can feed enough novels into a neural network that it eventually produces Harry Potter.

I don't know if you can give it enough musicals and someday it makes Hamilton. And that isn't to appeal to anything unscientific—merely to question whether this really narrow way of "Let's take a lot of data and study it and come up with projections" is the "be all and end all" of how humans do what we do. So how far would you take that, that this is how humans work?


Okay. So the first thing I want to say about that is, the way you build an intelligence is actually by creating understanding about the world. In humans you can think of that as the process of abstracting. So there are the pixels of an image, or the sound, the audio of speech. But we don't hear audio, we hear words, and we see objects and things in the world. So those are all abstractions and concepts that we have maybe partially… I think most of this we have learned from looking at the world, and you could argue our brains are structured in a certain way so that they can easily learn these things. This is well established for language. We have a part of our brain which is structured in a way so that we can easily pick up a language.

However, depending on whether we are raised in one country versus another country, we can pick up very different languages. So there is something we are born with, which is some architecture in our brain and the learning algorithm—that being created over evolution, though even evolution is a slow learning process, by the way. Let's say we're talking about what happens within a single individual. We have some prior structure that helps us learn fast. But then, in fact, when we start to learn, it's not just input and some emergent output, some kind of decision or prediction.

It goes through the process of understanding the world, and only when we come up with this deep understanding of the world can we start to generalize away from the things that we've been encountering. So let me go back to the novel. You read like 20 novels in your life, or 100 or 200 or whatever. And from there a talented individual will have to create a completely new novel, or be creative. You can't do that by current techniques, by just throwing all the words in and… recombining them into new sentences or something like that.

You have to get to a level of understanding and abstraction and appreciation, which you can think of as existing very deep in these neural networks after many, many layers, or deep in your brain, where all these abstractions form. And from those abstractions you can create truly new things, and deviate from the things that you have seen. And we call that process generalization, which in a very simple form is similar to: you've seen a hundred chairs… but if there comes a completely new chair you've never seen before, you still know it's a chair, because it falls into this abstract concept of a chair, even though the details are something you have never seen before.

So I think this idea of abstraction, which is basically understanding how the world functions, the physics of the world, the psychology, the sociology of the world, which are very high level concepts… that's where really, I would say, once you have gained that very deep understanding, you can start to generalize and create new things.

So you don't think there's any real impediment to a computer being a general intelligence down the road and to be creative and all the rest?

No, I don't think there is any such impediment. I should also say that I think we are nowhere near close. So often people overestimate how close we are to that goal. I think we are very far away. But I don't see any fundamental, in-principle problem with creating such an intelligence in the long run, because after all, if you think about a human, in some sense it's also just an information processing system. We sense things, we compute things, we act in the world. It's just a very complex one, and there's no reason why you couldn't replicate that in a machine.

Going back to creativity, which might be interesting… Sometimes people say, "Look, being creative is something very magical, that's not something that we could ever create in a computer." But I don't agree, because in fact we could be very, very surprised by computers at some point. So creativity is really combining elements of things that you've learned in the past in very surprising new ways… recombining modules in new ways that are very surprising to others. Now, we are reasonably good at this as humans. But I would say there's no reason why computers couldn't do that, and couldn't even do it a lot better than humans.

And maybe the first sign of that was Move 36 or 37 in AlphaGo, in one of those games, which you could think of, and actually Go players do think of, as a very creative move. And so it was maybe a combination of things that hadn't been tried before. And again, here you have to define creativity. What it precisely means is also a bit of a fluid concept, but I see no reason why computers couldn't have that.

Well, it is true that when that move was played, that was the moment people talked about AlphaGo being creative. Yeah, I mean even Lee Sedol said it was a beautiful move. I find it interesting that we have brains that by all accounts we don't understand. We don't really understand how a thought is encoded, how it's retrieved, how our brains work.

And then we have these minds which I like to think of [as] kind of everything that the brain does that seems kind of mysterious, like a sense of humor. My liver doesn't have a sense of humor, my stomach doesn't have a sense of humor, yet somehow my brain does and so we've got these minds we don't understand.

And then we have consciousness which is, we experience the world, we don't measure it. All a computer can do is measure temperature. It doesn't know anything, it doesn't understand anything, it doesn't experience anything.

So we have brains, minds and consciousness, and we don't really know how any of those work. But it seems, and I assume you would agree with this based on what you said, that the sole reason we believe we can build it mechanically is that we believe that in the end humans are machines; that all of those processes I just described must be mechanistic. And if they're mechanistic, we'll be able to build them or a proxy for them. Would you agree with that?

So with the latter statement I agree. But the thing I don't agree with is that a computer doesn't understand, or that a computer cannot have a mind, or even that an artificial intelligence couldn't become conscious at some point. To me, consciousness is an emergent property of a highly complex system. And it may have just arrived in humans for very good reasons, you know, through evolution, or maybe because we have a body, or… there could be reasons why, in the particular type of intelligence that humans have, a consciousness is actually a very beneficial thing to have. It could even be a side product, but I would believe that less. I would think consciousness is a very evolutionarily beneficial thing to have. It would probably make you generalize better or make better decisions.

And I don't see any reason why such processes couldn't emerge in a sufficiently complex artificial intelligence. They don't have to, right? I think we can create all sorts of very, very complex intelligences without, possibly, a consciousness. Maybe there are other things there that we don't even fathom, something completely different from consciousness. But it doesn't exclude, I don't think, the fact that we could create something like a consciousness in an artificial intelligence. It's just very hard to measure.

It's just like asking what's happening inside a black hole. It's a question you can never get an answer to. In this case, you could ask an artificial intelligence, "Are you conscious?" And if it's an artificial intelligence that decided it wanted to at least show consciousness, it would say "Yes." And it would give the right answer to all your questions, but whether it's truly conscious or not we would never really discover. But I could ask the same question about you. I don't know whether you're conscious or not. Maybe you're acting like you're conscious, and maybe I'm the only conscious person in the world. I don't know. So that's a matter of trust.


But you're entirely right that thousands of years of philosophic thought have not answered that question. I mean, that was what Descartes said in the end: "All I know is that there's me. I don't know anything else." But then I wrote a book on whether computers can become conscious, and at issue is: by what mechanism are humans conscious? And there isn't agreement on that.

People say we don't know what consciousness is… we know what it is, we can define it. We just don't know how humans are [conscious]. I counted eight different theories of where consciousness comes from, and I think four of them would allow a computer to be conscious, two wouldn't, and two [say] it's unknown. But do you know the Chinese Room problem from Searle?

Yeah.

Okay. Let me just set this up for the reader real quickly, and I would love your thoughts on it. So this is a thought experiment, and it goes back to Max's point where he took issue with my remark that computers can't understand anything. Searle's setup is that there is this person—a librarian—who's in this giant room full of books, and the librarian doesn't speak any Chinese. That's the important part. Outside of the room are native Chinese speakers, and they write questions in Chinese and slide them under the door. And the librarian, who doesn't speak any Chinese, picks them up, looks at the first character in these questions, goes and finds the book with that on the spine, and pulls that book down. Looking up the second character in that book directs him or her to a third book, then a fourth book, and so on, all the way until they get to the last character and the book says "Write this down."

And so they copy down these symbols (again, they don't understand the symbols), then carefully slide them back under the door, and the Chinese speaker outside the room picks it up, reads it, and it's a brilliant answer in Chinese. And so the question Searle poses is, "Does the librarian understand Chinese?" Now to bring it home, the punchline of the story is obviously that's all a computer can do. It can just follow a program, which is all the librarian is doing. And to be clear, that room passes the Turing test, right? The Chinese speaker outside assumes that a Chinese speaker is inside. But to most people listening to this show, if you asked, "Does that librarian understand Chinese?" most would say "No."


Yeah, so my answer to that is that in order to play that game, you need an infinite amount of compute. You need a ridiculous amount of compute to run that particular process. And while that may become possible in the future, using quantum computers or whatever, that's a solution that doesn't require understanding. So this is the interesting part: the solution is perfectly valid from an input/output point of view. It's an intelligence that doesn't develop a deep understanding of the world, because it doesn't need to; it has an infinite amount of compute power to solve it the way it was solved in the Chinese Room.

However, we don't have an infinite amount of compute in our brain. In fact it's very, very restricted, and evolution has put enormous pressure on our brains to keep them as computationally efficient as possible. And we need to eat in order to feed that brain; our survival depends on whether we are able to get enough food into our body to make this brain function.

In these highly constrained environments, like having a very finite amount of compute, there is a shortcut to this problem. And the shortcut is not to do it the way it is done in the Chinese Room, but to understand the world at its core: to understand the processes, the abstractions, the concepts that make up our world. Because once we can simplify and categorize this world into its elementary particles, I would say from there we create understanding, and we can actually give answers. These may be approximate answers, but from there we can do exactly the same thing that happened in the Chinese Room, but with a lot less compute power involved.

So the thesis would then be that actually understanding the world and making these abstractions is the process by which you can do these jobs with a lot less compute power. And so in some sense you could argue understanding and consciousness could have come out of that constraint—that we need to solve these problems with very limited resources. And therefore it goes back to my thesis that not every intelligence has to be conscious; it might just be a byproduct of something, in this case just a byproduct of trying to solve these problems in the cheapest possible way.

Well then let me amend the question slightly. I want to open a new bank account, and I go to the bank's website and there's a chat bot, bank bot, and I say, "Bank bot, what interest rate are you offering?" And bank bot says, "Right now we're offering 3.3%." And, like, "What's the minimum deposit I have to put in?" And bank bot says "$10,000," and "How long?" And bank bot answers every one of my questions. Does the bank bot understand my questions?


Not necessarily, right? So it really depends on… first of all, it's not well-defined what understanding means. So that's the first part: with many of these questions, we first have to be very precise about what understanding means, and it's a very slippery concept. To me, there is no such thing as understanding in the sense of some experience that we have. A much more real, measurable quality is: which class of problems can I solve outside of the data?

How can I generalize with the minimum amount of data and a minimum amount of compute power? I would even define understanding, in some form or other, as how efficient you are at generalizing with minimal compute power and a minimal amount of data. Now, how does that understanding come to our consciousness? We can feel that we understand the world, because we see objects and we understand abstractions, or we can explain or answer in some form or other.

There could be another artificial intelligence which solves it in a similar way. It also forms these abstractions, but it's never conscious of doing that, yet it will give exactly the same answers. And so whether one is understanding and the other one is not, or experiences that it is understanding or not: all these things are very slippery in my mind to define. But what we really care about is, okay, there is a particular task that this robot needs to solve. It could be very narrow, or it could be very broad—which is: survive in a constantly changing environment. And if you can do this with a minimal amount of compute power, I think you can argue that that requires some level of understanding of your environment. Otherwise you couldn't actually generalize enough.

Fair enough, and I'll only ask you one more question along these lines, and then I would love to hear about what you do on a daily basis. But my question is this: if you say, "Well, computers can or will be able to understand, and there's no reason they can't be conscious, there's no reason they can't experience the world," then what you're saying at some level is, "there's no reason they won't be able to feel pain"?

Exactly.

Historically, if something's been able to feel pain, we say it has certain rights to begin with. It has the right not to be tortured. So how do we wrap our heads around that? How do we? Is it moral to build a robot to do the chores around your house if that robot may actually be experiencing, like, not wanting to wash your socks?


Yes, these are really interesting questions. I agree with you. So if you build a robot that can suffer, whether it's pain or other kinds of suffering, right? Then that robot may have rights, like we give rights to dogs. We don't want dogs to suffer, so we give them rights. You know, you're not allowed to abuse dogs. And if you think of such a robot as an alien of some kind… if we discovered that they actually suffer, then we may not want to turn them off or make them do our chores. But just to be clear, we're nowhere near such a situation.

So this would be very, very distant future science fiction philosophy that we're doing. But that's fine, that's interesting. And the solution to this could be: "Well, don't make robots that suffer." Right? If you need a robot to do the chores in your house, just design it in such a way that it doesn't experience suffering, and design it in such a way that it's happy doing the chores in your house.

So you are Vice President, Technologies at Qualcomm. Tell me what that looks like on a day-to-day basis, and then entice us with some of the cool stuff you get to work on that we would be fascinated by.

Yeah. So two days a week, or half of my time basically, I'm at Qualcomm fulfilling this role, sort of helping with the AI strategy, determining the AI strategy for the company. This came out of the acquisition of a startup that we had in the Netherlands, and so we now have an office in the Netherlands which is the R&D office for AI and machine learning for Qualcomm.

And what I find interesting actually links with what we just said. One of the things that Qualcomm is really concerned with is power efficiency. Our brains are extremely power efficient, and we evolved to become very power efficient. And I strongly believe that true understanding and the forming of abstract concepts come from these constraints. And that's actually what Qualcomm is really the leader in—which is thinking about AI solutions which are really, really power efficient, and which run on chipsets which are really, really power efficient.

So that's very fascinating to me. It brought me back to new problems that I hadn't thought about before which is: ‘What is actually the nature of computation? How do we compute things and do we necessarily compute things in high precision or could we compute things in much lower precision especially when we think about human intelligence, which is much more related to neural network processing?’

So it's well known that neural networks can tolerate an enormous amount of noise and perturbation and still perform well. But on our chipsets, the way we usually think about computation is actually at very high precision. And maybe that's not necessary for these types of tasks, for these types of AI workloads. So within Qualcomm, we're thinking about, "Okay, how can we make these computations a lot more power efficient so that they can run these workloads more efficiently?"

So there's this tight interaction between the nature of computation itself and algorithms. That to me is extremely fascinating, and it's also driving quite a bit of the research that's happening within Qualcomm. For instance, we develop tools to take a very large neural network for a particular task and then compress it down to a much leaner architecture that can basically perform the same way as the bigger one, but actually runs much more efficiently on your phone or another mobile device.
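(As a purely illustrative sketch of one common compression idea, magnitude pruning, rather than the specific tools described here, the snippet below zeroes out the smallest-magnitude weights of a trained layer. The tensor size and sparsity level are assumptions chosen for the example.)

```python
# Illustrative sketch only: shrink a weight matrix by zeroing its
# smallest-magnitude entries, a simple form of network compression.
import torch

def magnitude_prune(w: torch.Tensor, sparsity: float = 0.8) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
    k = int(w.numel() * sparsity)
    if k == 0:
        return w.clone()
    threshold = w.abs().flatten().kthvalue(k).values  # k-th smallest |w|
    return torch.where(w.abs() > threshold, w, torch.zeros_like(w))

w = torch.randn(512, 512)           # stand-in for a trained layer's weights
w_pruned = magnitude_prune(w, sparsity=0.8)
nonzero = (w_pruned != 0).float().mean().item()
print(f"nonzero fraction after pruning: {nonzero:.2f}")  # roughly 0.20
```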

We also develop algorithms to deal well with quantization. So if we compute with only 8-bit or 4-bit precision instead of 32-bit floating point precision, can we still run these neural networks at much higher power efficiency, but again just as accurately as the original architecture? So those are a couple of things that we do where AI, machine learning, deep learning directly interact with computation and chips and hardware and stuff like that. Did that answer the question?
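(To make the 8-bit versus 4-bit trade-off concrete, here is a minimal, illustrative sketch, not Qualcomm's actual method, that simulates symmetric uniform quantization of a weight tensor and measures the error each precision introduces. The per-tensor scale and the tensor size are assumptions for the example.)

```python
# Illustrative sketch only: simulate quantizing weights to low precision
# and check how much error the reduced bit width introduces.
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric uniform quantization: round floats to num_bits integers, then dequantize."""
    qmax = 2 ** (num_bits - 1) - 1        # e.g. 127 for 8 bits, 7 for 4 bits
    scale = w.abs().max() / qmax          # one scale for the whole tensor (an assumption)
    w_int = torch.clamp(torch.round(w / scale), -qmax, qmax)
    return w_int * scale                  # back to float for comparison

weights = torch.randn(256, 256)
for bits in (8, 4):
    err = (weights - fake_quantize(weights, bits)).abs().mean().item()
    print(f"{bits}-bit mean absolute error: {err:.4f}")  # 4-bit error is noticeably larger
```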

I think we have a connection problem.

I'm sorry, that was my fault. Let me make a quick note. So do you think we're going to put chips in everything, the way science fiction talks about, where my fork is going to measure every bite I eat and its caloric content, and my pan is going to detect botulism and report it back to my phone? I mean, are we really going to bring everything alive? Will chips be embedded in everything, do you think?

Well, that's certainly a trend. So there's IoT, the Internet of Things. Many, many smaller and smaller devices are being embedded in more and more things around us, starting with our cars and our homes. But many more smaller things, our utilities and our furniture, are going to be more [connected], and more things are clearly becoming intelligent. Whether there is a limit to that…

I don't know. I think we can go quite far still with that, but I would also say that privacy is a real concern in this respect. So we will have to find better solutions to guarantee our privacy because if everything around us is measuring us, then basically everything we do anywhere at any time is being recorded somewhere. And if that falls into the wrong hands, you know of course there's all sorts of reasons why that could be abused.

And I think therefore it's really important that we also—alongside of these developments—think about solutions for this privacy problem and also for security. Because if everything is smart and you can hack into all these things, and you can make all self-driving cars drive into a tree at one point in time, then of course that's a huge security risk. And so with all this new complexity and new intelligence in systems, other developments will have to go alongside it, namely security and privacy.

Yeah and upgradability too. A lot of the devices aren't patchable, right? So if somebody finds a vulnerability on my Internet enabled coffee maker, then there's no way actually to remediate that, is there right now?

Well, I would think if that thing is on the internet, then you could patch it, right? I mean it's not necessarily different from your phone in some sense.

Right. Although a lot of them just aren't [patchable]. I mean they're just… No, I hear you. It's like the very tools we build to do all kinds of good things, like look for cures for cancer, are the same tools that can be used to invade privacy. The people who believe a certain thing as well…

Yeah, that's right. So that's the case in general with the technology that we develop: a powerful tool can be used for good things and for bad things, depending on who is using the tool. I mean, an axe can be used to cut trees and build houses, or it could be used in war. And it's been like that forever. It's no different now; it's just that the tools get more powerful, and so we have to be really careful with them.

So it sounds like though on balance, and correct me if I'm wrong, you're optimistic about how we're going to use this technology on balance?

Well, I'm optimistic, but again, I think we have to be… we cannot be naive about it, in the sense that if it can be abused, it will be abused, right? And I think if you look around the world, certain governments are abusing this type of technology already. So you cannot really trust that it will all be all right. You actually have to build it into the system that it cannot be abused, before you deploy it. So I am optimistic about all the good things that these technological developments can bring us. At the same time, I want to be cautious and say, "Don't leave it to chance. Make sure when we roll out these things that it's safe, it's private, and it cannot be abused."

Well I think it's a fantastic place to leave our conversation. Powerful technology with a lot of ability to do good, but a cautionary warning that it can also equally easily be abused. So Max, how do people follow you if they want to keep up with your thoughts? You're obviously a very thoughtful guy who's thought about all of these things. Do you post to social media? Do you write? Do you have a blog? Or how can people keep up with you?

Well, that's a really good question. I don't tweet all that much; I may need to tweet more. I'm active on Facebook now and then; I do put things on Facebook. Qualcomm sometimes writes blogs about our latest research, and we have some really cool research happening right now. We have a new research entity called Qualcomm AI Research, where we publish our latest research in papers and at conferences.

You can find our latest work on the Qualcomm AI Research web page, where you can download the papers and there will be blogs. We will have animated videos which try to explain some of the things that we are doing. So there is that. For my university job I have a website. It's not super active, but there is a website.

Well, everything that you just mentioned will be turned into a hyperlink in the transcript. So if somebody is listening to this, just go to www.VoicesinAI.com, go to the end of the transcript, and you'll find the links to the animations and to the websites you referenced. Max, I want to thank you so much for your time. It's been a fascinating near hour.

Well thank you very much. It was great to talk to you.