Episode 42: A Conversation with Jem Davies

In this episode, Byron and Jem discuss machine learning, privacy, ethics, and Moore's law.

Guest

Jem is Vice President, General Manager and Fellow of Technology for ARM’s Media Processing Group. In addition to setting the future technology roadmaps for graphics, video, display and imaging, he is responsible for technological investigations of a number of acquisitions. Jem has previously been a member of ARM’s architecture review board and holds four patents in the fields of CPU and GPU design. He holds a degree from the University of Cambridge.

Transcript

Byron Reese: Hello, this is “Voices in AI,” brought to you by GigaOm. I am Byron Reese. Today my guest is Jem Davies. He is a VP, a Fellow, and the GM of the Machine Learning Group at ARM. ARM, as you know, makes processors. They have, in fact, 90–95% of the share in mobile devices. I think they’ve shipped something like 125 billion processors. They’re shipping 20 billion a year, which means you, listener, probably bought three or four or five of them this year alone. With that in mind, we’re very proud to have Jem here. Welcome to the show, Jem.

Jem Davies: Thank you very much indeed. Thanks for asking me on.

Tell me, if I did buy four or five of your processors, where are they all? Mobile devices I mentioned. Are they in my cell phone, my clock radio? Are they in my smart light bulb? Where in the world have you secreted them?

It’s simplest, honestly, to answer that question with where they are not. Because of our position in the business, we sell the design of our processor to a chip manufacturer who makes the silicon chips who then sell those on to a device manufacturer who makes the device. We are a long way away from the public. We do absolutely have a brand, but it’s not a customer brand that people are aware of. We’re a business-to-business style of business, so we’re in all sorts of things that people have no idea about, and that’s kind of okay by us. We don’t try and get too famous or too above ourselves. We like the other people taking quite a lot of the limelight. So, yeah, all of the devices you mentioned. We’ll actually probably even be inside your laptop, just not the big processor that you know and love. We might be in one of the little processors perhaps controlling, oh, I don’t know, the flash memory or the Bluetooth or the modem if it’s an LTE-connected device. But, yes, your smartwatch, your car, your disc drives, your home wireless router, I could go on until you got seriously bored.

Tell me this. I understand that some of the advances we’ve made in artificial intelligence recently are because we’ve gotten better at chip design, we do parallelism better—that’s why GPUs do so well, because they can do parallel processing and so forth—but most people, when they think of machine learning, are thinking about software that does all these things. They think about neural nets and backpropagation and clustering and classification problems and regression and all of that. Tell me why ARM has a Machine Learning Group, or is it wrong to think that machine learning is just primarily a software thing once you have some basic hardware in place?

Oh, there are about three questions there. See if I count to three. The first is the ways in which you can do machine learning are many and varied. The ways even that these things are implemented are quite disparate. Some people, for example, believe in neuromorphic hardware designs, spiking networks, that sort of thing. The predominant use of neural nets is as software, as you say. They are software emulations of a neural network which then runs on some sort of compute device.

I’m going to take issue with your first question, which was that it’s all about Moore’s Law. Actually, two things have happened recently which have changed the uptake. The first is, yeah, there is lots and lots of compute power about, particularly in devices; the second is the ready access to vast quantities of data in the environments in which people do the training. And perhaps here I should start by saying that we view training and inference as computationally completely separate problems. So, what we do at ARM is we do computing. What does computing get done on? It gets done on processors, so we design processors, and we try to understand, to analyze—performance analyze, measure bottlenecks, etc.—the way in which a particular compute workload runs on a processor.

For example, originally we didn’t make GPUs—graphics processors—but along comes a time in which everybody needs a certain amount of graphics performance. And whilst it is a digital world, it is all just ones and zeroes, you would never do graphics on a CPU. It doesn’t make sense because of the performance and the efficiency requirements. So we are all the time analyzing these workloads and saying, “Well, what can we do to make our general-purpose CPUs better at executing these workloads, or what is the point at which we feel that the benefits of producing a domain-specific processor outweigh the disadvantages?”

So with graphics it’s obvious. Along comes programmable graphics, and, so, right, you absolutely need a special-purpose processor to do this. Video was an interesting case in point, digital video. MPEG-2 with VGA resolution, not very high frame rate, actually you can do that on a CPU, particularly decode. Along comes the newer standards, much higher resolution, much higher frame rate, and suddenly you go, oh, there is no way we can do this on a CPU. It’s just too hard, it takes too much power, produces too much heat. So we produced a special-purpose video processor which does encode and decode the modern standards.

So, for us, in that regard, machine learning neural network processors are in a sense just the latest workload. Now, when I say “just” you could hear me wave my hands around and put inverted commas around it, because we believe that it is a genuinely once-in-a-generation inflection in computing. The reason for that is practically every time somebody takes a classical method and says, “Oh, I wonder what happens if I try doing this using some sort of machine learning algorithm instead,” they get better results. And, so, if you think of a sort of pie chart and say, well, the total number of compute cycles spent is 100%, what slice of that pie is spent executing machine learning, then we see the slice of the pie that gets spent executing machine learning workload, particularly inference, to be growing and growing and growing, and we think it will be a very significant fraction in a few years’ time.

And one of the things, as I said about those 125 billion chips, is that all of these devices are at the edge. Yes, there are people doing machine learning today in data centers, and typically training is done next to the vast quantities of training data, which tend to exist in hyper-scale data centers, but machine learning inference is most useful when done right next to the test data. And if, for example, you’re trying to recognize things in video, computer vision, something like that, the chances are that camera is out there in the wild. It’s not actually directly connected to your hyper-scale data center.

And so we see an absolute explosion of machine learning inference moving to the edge, and there are very sound reasons for that. Yes, it’s next to the data that you’re trying to test, but it’s the laws of economics, it’s the laws of physics and the laws of the land. Physics says there isn’t enough bandwidth in the world to transmit your video image up to Seattle and have it interpreted and then send the results back. You would physically break the internet. There just isn’t enough bandwidth. And there are cost implications with that as well, as well as the power costs. The cost implications are huge. Google themselves said if everybody used their Android Voice Assistant for three minutes per day then they would have to double the number of data centers they had. That’s huge. That is a lot of money. And we’re used to user experience latency issues, which obviously would come into play, but at the point at which you’re saying, well, actually, rather than identifying the picture of the great spotted woodpecker on my cell phone, I’m actually trying to identify a pedestrian in front of a fast-moving car, that latency issue suddenly becomes a critical reliability issue, and you really don’t want to be sending it remotely.

And then, finally, privacy and security, the laws of the land—people are becoming increasingly reluctant to have their personal data spread all over the internet and rightfully so. So if I can have my personal data interpreted on my device, and if I really care I just have to smash my device to smithereens with a hammer, and I know full well that that data is then safe, then I feel much more comfortable, I feel much more confident about committing my data to that service and getting the benefit of it, whatever that service is. I can’t now remember what your three questions were, but I think I’ve addressed them.

Absolutely. So machine learning, I guess, at its core is: let’s take a bunch of data about the past—which, as you said, our ability to collect has gone up arguably faster than Moore’s Law—let’s study it, and let’s project that into the future. What do you think, practically speaking, are the limits of that? At the far edge, eventually, in theory, could you point a generalized learner at the internet and it could write Harry Potter? Where does it break down? We all know kind of the use cases where it excels, but where do you think it’s unclear how you would apply that methodology to a problem set?

Whilst I said that almost every time anybody applies a machine learning algorithm to something they get better results, I think—I’ll use “creative” for want of a better phrase—where the creative arts are concerned, that is where the fit is hardest. Personally, I have great doubts about whether we have indeed created something intelligent or whether we are, in fact, creating very useful automatons. There have been occasions where they have created music and they have created books, but these tend to be rather pastiche creations, or very much along a genre. Personally, I have not yet seen any evidence to suggest that we are in danger of a truly sentient, intelligent creation producing something new.

It’s interesting that you would say we are in danger of that, not we are excited about that.

Oh, sorry. No, that is just my vocabulary.

Fair enough.

I’m not in general very afraid of these things.

Fair enough. So I would tend to agree with you about creativity. And I agree, you can study Bach and make something that sounds passably like it, you can auto-generate sports stories and all of that, and I don’t think any of it makes the grade as being “creative.” And that’s, of course, a challenge, because not only does intelligence not have a consensus definition, but creativity even less so.

If people had to hold out one example of a machine being creative right now, given today, 2018, they might say Game 2 of the Go match between AlphaGo and Lee Sedol, Move 37, where, in the middle of this game, the computer makes Move 37, and all the live commentators are like, “What?” And the DeepMind team is scrambling to figure out, like, what was this move? And they look, and AlphaGo said the chances a human player would make that move were about 1 in 10,000. So it was clearly not a move that a human would have made. And, then, as they’ve taken that system and trained it by playing against itself in games over and over, and it plays things like chess, its moves are described as alien chess, because they’re not trained on human moves. Without necessarily knowing a lot of the particulars, would you say that is nascent creativity, or is it something that simply looks like creativity, emulating creativity without really being creativity, or is there a difference between those two ideas?

Very personally, I don’t call that creativity. I just call that exploring a wider search space. We are creatures very much of habit, of cultural norms. There are just things we don’t do and don’t think about doing, and once you produce a machine to do something it’s not bound by any of those. It will learn certainly from your training data, and it will say, “Okay, these are things that I know to work,” but, also, it has that big search space to execute in, to try out. Effectively, most machine learning programs when used in the wild for real like that are the results of lots and lots and lots of simulation and experimentation having gone on before, and it will have observed, for example, that playing what we would call “alien” moves is actually a very good strategy when playing against humans.

Fair enough.

And they tend to lose.

Right. So, am I hearing you correctly that you are saying that the narrow AI we have now, which still has a long way to run and can do all kinds of amazing things, may be something fundamentally different from general intelligence, that it isn’t an evolutionary path to a general intelligence, but that general intelligence only shares that one word and is a completely different technology? Am I hearing that correctly or not?

Yes, I think you’re largely hearing it correctly. For someone who makes a living out of predicting technological strategy, I’m actually rather conservative as to how far out I make predictions, and people who talk knowledgeably about what will happen in 10–20 years’ time are, I think, on the whole either braver, or cleverer at making it up, than I am, because I think we can see a path from where we are today to really quite amazing things, but I wouldn’t classify them as true intelligence or truly creative.

So, one concern—as you’re building all these chips and they’re going in all these devices—we’ve had this kind of duel between the black hats and white hats in the computer world making viruses and attacking things, and then they find a vulnerability, and then it’s patched, and then they find another one, and then that’s countered and so forth. There’s a broad concern that the kind of IoT devices that we’re embedding, for instance, your chips in, aren’t upgradeable, and they’re manufactured in great numbers, and so when a vulnerability is found there is no counter to it. On your worry-o-meter how high does that rate, and is that an intractable problem, and how might it be solved in the future?

Security in end devices is something that ARM has taken very seriously, and we published a security manifesto last year, of which being able to upgrade things and download the latest security fixes and so on was a part. So we do care about this. It’s a problem that exists whether or not we put machine learning intelligence, machine learning capabilities, into those end devices. The biggest problem probably for most people’s homes at the moment is their broadband router, and that’s got no ML capability in it. It’s just routing packets. So it’s a problem we need to address, come what may.

The addition of machine learning capabilities in these and other devices actually, I think, gives us the possibility for considerably more safety and security, because a machine learning program can be trained to spot anomalous activity. So just as if I write a check for £50,000 my bank is very, very likely to ring me up—sorry, for the younger audiences who don’t know what a check is, we’ll explain that later—but it would be anomalous, and they would say, “Okay, that’s not on, that’s unusual.” Similarly, we can do that in real time using machine learning monitoring systems to analyze network data and say, “Well, actually, that looks wrong. I don’t believe he meant to do that.” So, in general, I’m an optimist that the machine learning revolution will help us more than hinder us here.
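To make the kind of anomaly detection Jem describes here a little more concrete, the following is a minimal, purely illustrative Python sketch, not anything ARM ships: the features (amount and hour of day), the synthetic “normal” history, and the choice of scikit-learn’s IsolationForest are all assumptions made for the sake of the example.

```python
# Illustrative only: train an unsupervised model on "normal" activity,
# then flag departures from it, in the spirit of the cheque example above.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend history: (amount, hour of day) for 1,000 ordinary transactions.
normal_history = np.column_stack([
    rng.normal(loc=80, scale=30, size=1000),   # typical amounts
    rng.normal(loc=14, scale=3, size=1000),    # typical times of day
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_history)

# Score new activity: an everyday purchase vs. a 50,000 cheque at 3 a.m.
candidates = np.array([[75.0, 13.0], [50000.0, 3.0]])
for row, flag in zip(candidates, model.predict(candidates)):
    print(row, "anomalous" if flag == -1 else "ok")   # -1 means anomalous
```

The same pattern, learning what “normal” looks like and flagging departures from it in real time, is what a network-monitoring system of the sort Jem mentions would do at a much larger scale.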

That raises another point. That same system that said that check was not good is probably looking at a bunch of variables: your history of all of the checks you’ve written in the past, who it was made payable to, where it was, what time of day. There are all these different data inputs, and it comes to some conclusion, yea or nay, flag this or don’t flag it. When that same methodology is applied to an auto loan or a home loan or so forth and it says, “Give them the loan, don’t give them the loan,” European law says that the person is entitled to an explanation of why it said that. Is that fair, and is that a hindrance to systems where you might look at it and say, well, we don’t know; it flagged it because it looks like other ones that were fraudulent, and beyond that we can’t offer a lot of insight? What are your thoughts on that?

I think this is an absolute minefield, and I’m not going to give you a very sensible answer on this. It is clear that a number of people implementing such systems will want to keep the decision-making process a secret, because that is actually their trade secret. That is their commercial secret sauce. And so actually opening these boxes up and saying, well, it decided to do this because of X, Y and Z, is something that they are not going to want to do.

Equally, with some machine learning systems that are based on learning rather than on if-then-else rules, it’s going to be genuinely hard to answer that question. If somebody rings up and says, “Why did you do that?” it is going to be genuinely hard for that service provider, even if they wanted to, to answer that question.

Now, that’s me as a technologist, just answering what is and is not physically possible or hard. Me as a consumer, yes, I want to know. If somebody says, “Well, I think you’re a bad risk,” or “Actually, in life insurance terms I think you’re going to die tomorrow,” I really want to know the answers to those questions, and I think I’ve got a right to be informed about that sort of thing. So, I’m sorry, I’m deeply conflicted on that one.

As I think everyone is. That’s kind of the challenge. It’s interesting to see how it’s going to play out.

On a different note entirely, a lot of the debate around AI and machine learning is around automation and its effect on employment, and, roughly speaking, there are kind of three positions. There is the idea that it’s going to eliminate a bunch of “low-skilled jobs” and you’re going to have some level of unemployment that persists long-term because there just are more people than there are low-skilled jobs. Then there is another camp which says no, no, no, they’re going to be able to do everything, they’ll write better poetry, and they’ll paint better paintings, and it sounds like you’re not part of that camp. And then there is this third camp that says no, no, no, like any technology it fundamentally increases productivity, it empowers people, and people use it to drive higher wages, and it creates more jobs in the future. We saw it with steam and then the assembly line and even with the internet just 25 years ago. What is your thought? How do you think artificial intelligence and machine learning and automation are going to impact employment?

On a global scale, I tend towards your latter view, which is that actually it tends to be productive rather than restrictive. I think that on a local scale, however, the effects can be severe, and I’m of the view that the people it’s likely to affect are not necessarily the ones that people expect. For example, I think that we are going to have to come to terms with understanding, in more detail, the difference between a highly-skilled occupation and a highly-knowledged occupation. So, if we look at what machine learning can do with a smartphone and a camera and an internet connection in terms of skin cancer diagnosis, it arguably puts skin cancer diagnosticians out of a job, which is a bit surprising to most people, because they would regard them as very highly skilled, very highly educated. Typically, somebody in that situation would probably have ten years of postgraduate experience let alone all their education that got them to that point. We see cab drivers and truck drivers being at risk. And yet actually the man who digs a hole in the road and fixes a broken sewer pipe might well have a job, because actually that’s extremely hard to automate.

So I think people’s expectations of who wins and who loses in this procedure are going to be probably somewhat misguided, but I think, yeah, some jobs are clearly at great risk, and the macro-economy might well benefit from some macro-economic trends here, but, as one of your presidents said, the unemployment rate is either 0 percent or 100 percent, depending on your point of view. You’ve either got a job or you haven’t. And so I do think this does bring considerable risks of societal change, but then actually society has always changed, and we’ve gone through many a change that has had such effects. On the whole, I’m an optimist.

So in the U.S. at least, our unemployment rate has stayed between 5% and 10% for 250 years, with the exception of the Depression. Britain’s is not the same exact range, obviously, but a similar, relatively tight band, in spite of enormous technologies that have come along like steam power, electricity, even the internet and so forth.

I think both of us have probably exploited such big changes as they’ve been coming along.

Right. And real wages have clearly risen over that 250-year period as well, and we’ve seen, like you just said, jobs eliminated. I think the half-life of the group of jobs that everybody collectively has right now is probably 50 years. I think in any 50-year period about half of them are lost. It was farming jobs at one point, manufacturing jobs at one point and so forth. Do you have a sense that machine learning is more of the same or is something profoundly different?

I’m reluctant to say it’s something different. I think it’s one of the bigger ones, definitely, but actually steam engines were pretty big, coal was pretty big, the invention of the steam train. These were all pretty significant events, and so I’m reluctant to say that it’s necessarily bigger than those. I think it is at least a once-in-a-generation inflection. It’s at least that big.

Let’s talk a little bit about human ability versus machines. So let me set you up with a problem, which is if you take a million photos of a cat and a million photos of a dog and you train the machine learning thing, it gets reliable at telling the difference between the two. And then the narrative goes: and yet, interestingly, a person can be trained on a sample size of one thing. You make some whimsical stuffed animal of some creature that doesn’t exist, you show it to a person and say, “Find it in all these photos,” and they can find it if it’s frozen in a block of ice or covered in chocolate syrup or half-torn or what have you. And the normal explanation for that is, well, that’s transfer learning, and humans have a lifetime of experience with other things that are torn or covered in substances and so forth, and they are able to, therefore, transfer that learning and so forth.

I used to be fine with that, but recently I got to thinking about children. You could show a child not a million cats but a dozen cats, or however many they’re likely to encounter in their life up until age five, and then you can be out for a walk with them, and you see one of those Manx cats, and they say, “Look, a cat with no tail,” even though there’s this class of things, cats, they all have tails, and that’s a cat with no tail. How do you think humans are doing that? Is that innate or instinctual or what? That should be a level we can get machines to, under your view, shouldn’t it?

On the one hand I’ll say that a profound area of research which is proving to produce huge results is the way in which we can now train neural networks using much smaller sets of data. There is a whole field of research going on there which is proving to be very productive. Against that, I’ll advance you that we have no idea how that child learns, and so I refuse to speculate about the difference between A and B when I have actually no understanding of A.

And I don’t wish to be difficult about this, but among neuroscientists and applied psychologists combined, there is some deep understanding of biochemistry at the synapse level, and we can extrapolate some broad observed behaviors which make it appear as though we know how people learn, but there are enough counter-examples to show that we simply don’t understand this properly. Neuroscience is being researched and developed just as quickly as machine learning, and it needs to make a lot of progress in understanding how the brain works in reality. Up until that point, I must admit, when my colleagues, particularly those in the marketing department, start talking about machine learning reflecting how the brain works, I get itchy and scratchy, and I try to stop them.
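As a brief aside on the small-data research Jem mentions at the start of this answer, one widely used approach is transfer learning: reuse a network pretrained on a large dataset and retrain only its final layer on a handful of new examples. The Python sketch below is a generic illustration of that idea, not ARM’s method; the class count, the two dozen examples, and the random stand-in tensors are placeholders, and it assumes PyTorch with a recent torchvision.

```python
# Illustrative only: fine-tune just the final layer of a pretrained network
# on a couple of dozen examples, rather than training from scratch on millions.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2          # e.g. "cat" vs. "not cat", assumed for the example
FEW_SHOT_EXAMPLES = 24   # a dozen or two images, not a million

# Start from a network that has already learned generic visual features.
net = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False                  # freeze the pretrained features

# Replace only the classification head; this is all we will train.
net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-in for a tiny labelled dataset (random tensors here; real images in practice).
images = torch.randn(FEW_SHOT_EXAMPLES, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (FEW_SHOT_EXAMPLES,))

net.train()
for _ in range(5):                           # a few quick passes over the tiny set
    optimizer.zero_grad()
    loss = loss_fn(net(images), labels)
    loss.backward()
    optimizer.step()
```

Because only the small final layer is trained, a few dozen labelled examples can be enough to get a usable classifier, which is the practical upshot of the research area he refers to.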

I would agree. Don’t you think that with neural nets, even the appeal to that metaphor is forced?

Yes, I dislike it. If I had my way I would refer to neural networks as something else, but it’s pointless, because everybody would be saying, “What? Oh, you mean a neural network.” That ship has sailed. I’m not picking that fight. I do try and keep us on the subject of machine learning when we speak publicly as opposed to artificial intelligence. I think I might be able to win that one.

That’s interesting. So is your problem with the word ‘artificial’, the word ‘intelligence’ or both?

My problem is the word ‘intelligence’ when combined with ‘artificial’ which implies I have artificially created something that is intelligent, and I know what intelligence is, and I’ve created this artificial thing which is intelligent. And I’m going, well, you kind of don’t know what intelligence is, you kind of don’t know what learning really is, and so making a claim that you’ve been able to duplicate this, physically create it in some manmade system, it’s a bit wide of the mark.

I would tend to agree, but there interestingly isn’t a consensus on that interpretation of what artificial means. There are plenty of people who believe that artificial turf is just something that looks like turf but it isn’t, artificial fruit made of wax is just something that looks like fruit but it really isn’t, and therefore artificial intelligence is something that isn’t really intelligent.

Okay. If I heard anyone advance that viewpoint I would be a lot happier with the words “artificial intelligence.”

Fair enough. So would you go so far as to say that people who look at how humans learn and try to figure out, well, how do we apply that in computers, may be similarly misguided? The oft-repeated analogy is we learned to fly not by emulating birds but by making the airfoil. Is that your view, that trying to map these things to the human brain may be more of a distraction than useful?

On the whole, yes, though I think it is a worthwhile pursuit for some section of the scientific community to see if there are genuine parallels and what we can learn from that. But, in general, I am a pragmatist. I observe that neural network algorithms, and particularly the newer kinds of networks, are just a generally useful tool, and we can create systems that perform better than classical if-then-else, rules-based systems. We can get better results at object recognition, for example, better false-positive rates. They are just generally better, and so I think that’s a worthwhile pursuit, and we can apply that to devices that we use every day to give us a better quality of life. Who hasn’t struggled with the user interface on some wretched so-called smart device and uttered the infamous phrase, “What’s it doing now?” because we are completely bewildered by it? We’ve not understood it. It hasn’t understood us. We can transform that, I would argue, by adding more human-like interaction between the real world and the digital world.

So humans have this intelligence, and we have these brains, which you point out we don’t really understand. And then we have something, a mind, which, however you want to think about it, is a set of abilities that don’t seem to be derivable from what we know about the brain, like creativity and so forth. And then we have this other feature which is consciousness, where we actually experience the world instead of simply measuring it. Is it possible that we therefore have capabilities that cannot be duplicated in a computer?

I think so, yes. Until somebody shows me some evidence to the contrary, that’s probably going to be my position. We are capable of holding ethical, moral beliefs that are at variance, often, with our learning of the way things work in the world. We might think it is simply wrong to do something, and we might behave in that way even having seen evidence that people who do that wrong thing gain advantage in this world. I think we’re more than just the sum of our learning experiences. Though what more we are, I can’t explain, sorry.

No, well, you and Plato.

Exactly.

In the same camp there. That’s really interesting, and I, of course, don’t mean it to diminish anything that we are going to be able to do with these technologies.

No, I genuinely think we can do amazing things with these technologies, even if it can’t write Shakespeare.

When the debate comes up about the application of this technology, let’s say it’s used in weapon systems to make automated kill decisions, which some people will do, no matter what—I guess a landmine is an artificial intelligence that makes a kill decision based on the weight of an object, so in a sense it’s not new—but do you worry, and you don’t even have to go that extreme, that somehow the ethical implications of the action can attempt to be transferred to the machine, and you say, well, the machine made that call, not a person? In reality, of course, a person coded it, but is it a way for humans to shirk moral responsibility for what they build the machines to do?

All of the above. So it can be a way for people to shirk responsibility for what they do, but, equally, we have the capability to create technologies, tools, devices that have bad consequences, and we always have done. Since the Bronze Age—arguably since the Stone Age—we’ve been able to create axes which were really good at bringing down saber-toothed tigers to eat, but they were also quite useful at breaking human skulls open. So we’ve had this all along, you know, the invention of gunpowder, the discovery of atomic energy, leading to both good and bad.

Technology and science will always create things that are morally neutral; it is people who will use them in ways that may have good or bad morality. That is my personal view. But, yes, I think it does introduce the possibility for less well-controlled things. And it can be much less scary. It may not be automated killing by drone. It may be car ADAS systems, the traditional, sort of, I’ve got to swerve one way or the other, I am unable to stop, and if I swerve that way I kill a pensioner, if I go that way I kill a mother and baby.

Right, the trolley problem.

Yeah, it is the trolley problem. Exactly, it is the trolley problem.

The trolley problem, if you push it to the logical extreme of things that might actually happen: should the AI prevent you from having a second helping of dessert, because that statistically increases your risk, you know? Should it prohibit you from having the celebratory cigar after something?

Let’s talk about hardware for a moment. Every year or so, I see a headline that says, “Is it the end of Moore’s Law?” And I have noticed in my life that any headline phrased as a question, the answer is always, no. Otherwise that would be the headline: “Moore’s Law is over.”

“Moore is dead.”

Exactly, so it’s always got to be no. So my question to you is are we nearing the end of Moore’s Law? And Part B of the same question is what are physical constraints—I’ve heard you talk about you start with the amount of heat it can dissipate, then you work backward to wattage and then all of that—what are the fundamental physical laws that you are running up against as we make better, smaller, faster, lower-power chips?

Moore’s Law is, of course, not what most people think it was. He didn’t actually say most of the things that most people have attributed to him. And in some sense it is dead already, but in a wider applicability sense, if you sort of defocus the question and step out to a further altitude, we are finding ways to get more and more capabilities out of the same area of silicon year on year, and the introduction of domain-specific processors, like machine learning processors, is very much a feature of that. So I can get done in my machine learning processor at 2 mm² what it might take 40 mm² of some other type of processor.

All of technology development has always been along those lines. Where we can find a more efficient way to do something, we generally do, and there are generally useful benefits either in terms of use cases that people want to pay for or in terms of economies where it’s actually a cheaper way of providing a particular piece of functionality. So in that regard I am optimistic. If you were talking to one of my colleagues who works very much on the future of silicon processors, he’d probably be much more bleak about it, saying, “Oh, this is getting really, really hard, and it’s indistinguishable from science fiction, and I can count the number of atoms on a transistor now, and that’s all going to end in tears.” And then you say, well, okay, maybe silicon gets replaced by something else, maybe it’s quantum computing, maybe it’s photonics. There are often technologies in the wings waiting to supplant a technology that’s run out of steam.

So, your point taken about the misunderstanding of Moore’s Law, but Kurzweil’s broader observation that there’s a power curve, an exponential curve, about the cost to do some number of calculations that he believes has been going on for 130 years across five technologies—it started with mechanical computers, then to relays, then to tubes, then to transistors, and then to the processors we have today—do you accept some variant of that? That somehow on a predictable basis the power of computers as an abstraction is doubling?

Maybe not doubling every whatever it used to be, 18 months or something like that, but through the use of things like special-purpose processors like ARM is producing to run machine learning, then, yeah, actually, we kind of do. Because when you move to something like a special-purpose processor that is, oh, I don’t know, 10X, 20X, 50X more efficient than the previous way of doing something, then you get back some more gradient in the curve. The curve might have been flattening off, and then suddenly you get a steepness increase in the curve.

And then you mentioned quantum computing. Is that something that ARM is thinking about and looking at, or is it so far away from the application to my smart hammer that it’s—?

Yeah, it’s something we look at, but, to be honest, we don’t look at it very hard, because it is still such a long way off. It’s probably not going to bother me much, but there are enough smart people throwing enough money at the problem that if it is fixable, somebody will, particularly with governments and cryptography behind it. There are such national security gains to be made from solving this problem that the money supply is effectively infinite. Quantum computing is not being held back by lack of investment, trust me.

So, final question, I’m curious where you come down on the net of everything. On the one hand you have this technology and all of its potential impact, all of its areas of abuse and privacy and security and war and automation, well, that’s not abuse, but you have all of these kind of concerns, and then you have all of these hopes—it increases productivity, and helps us solve all these intractable problems of humanity and so forth. Where are you net on everything? And I know you don’t predict 20 years out, but do you predict directionally, like I think it’s going to net out on the plus side or the minus side?

I think it nets out on the plus side but only once people start taking security and privacy issues seriously. At the moment it’s seen as something of an optional extra, and people producing really quite dumb devices at the moment like, oh, I don’t know, radiator valves, say, “Oh, it’s nothing to do with me. Who cares? I’m just a radiator valve manufacturer.” And you say, well, yeah, actually, but if I can determine from Vladivostok that your radiators are all programmed to come on at this time of day, and you switch the lights on, and you switch the lights off at this time of day, I’ve just inferred something really quite important about your lifestyle.

And so I think we need security and privacy to be taken seriously by everybody who produces smart devices, particularly as those devices start to become connected, forming sort of islands of privacy and security, such that you go, “Okay, well, I’m prepared to have this information shared amongst the radiator valves in my house, I’m prepared to share it with my central heating system, but I’m not prepared to send it to my electricity company,” or something like that: intersecting rings of security, where people only have the right to see the information they need to see, and where people care about this stuff and control it sensibly.

And you might have to delegate that trust. You might have to delegate it to your manufacturer of home electronics. You can say, okay, well, they’re a reputable name, I trust them, I’ll buy them, because clearly most people can’t be experts in this area, but, as I say, I think people have to care first, at which point they’ll pay for it, at which point the manufacturers will supply it and compete with each other to do it well.

All right. I want to thank you so much for a wide-ranging hour-long discussion about all of these topics, and thank you for your time.

Thank you very much. It was fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster.