Episode 3: A Conversation with Mark Rolston

In this episode, Byron and Mark talk about computer versus human creativity, connectivity with digital systems, AGI, and the workforce.



Mark Rolston, Cofounder and Chief Creative Officer of argodesign, is a renowned designer with a 25-year career of creating for the world’s largest and most innovative companies. An early pioneer of software user experience, Mark helped forge the disciplines around user interface design and mobile platforms. A veteran design leader, innovator and patent holder, he is one of Fast Company’s Most Creative People, and he was nominated as Fast Company’s World’s Greatest Designer of 2014.


Byron Reese: This is Voices in AI, brought to you by Gigaom. I'm Byron Reese. Today, our guest is Mark Rolston. He's the co-founder and Chief Creative Officer of argodesign. He's a renowned designer with a twenty-five year career of creating for the world's largest and most innovative companies.

An early pioneer of software user experience, Mark helped forge the disciplines around user interface design and mobile platforms. A veteran design leader, innovator, and patent holder, he is one of Fast Company's Most Creative People, and he was nominated for Fast Company's World's Greatest Designer in 2014.

Welcome to the show, Mark!

Mark Rolston: Yeah, welcome, thanks!

I want to start off with my question that I ask everybody. So far, no two answers have been the same. What is artificial intelligence?

Oh, god, what is AI? Big question, okay.

I think it's probably easy to start with what AI isn't, especially given all the attention that it gets right now. Certainly, every time the topic of AI comes up for me, especially with, let's say, my family around me, the expectation is that it's somehow on the level of another fully-living, breathing person—that level of cognition. I think that, every time we want to talk about AI, when we go immediately to that, the idea of a fully-competent mind, we really lose sight of what AI is and what it's valuable at.

I also think in terms of so much marketing that's going on, where everyone wants to place AI at the front of their product, say that it's powered by AI, I think that while the world's software has gotten a lot better in terms of applying rich data like historical behavior data—you know, you continue to rent movies of this type, then maybe you would like this next movie—or rich algorithms—understanding how to optimize a path home—those things have made software a lot more "intelligent," but those things are not AI.

For me, I think of it as a spectrum of capabilities that transcends that basic sort of rich data and algorithmic intelligence that software has. To where AI can take a cognitively-complex situation that involves context, that involves ongoing computational value—meaning it's not simply answering algorithmically or data-based queries immediately, but it can understand something over the course of time. Let's say, like a habit that somebody has of doing something, or a large set of medical records—to be able to resolve that against immediate context and come up with a conclusion.

I think one of the things, anecdotally, that I tend to do to help people get away from the idea of the sort of Terminator notion of AI, or the 2001 HAL notion of AI, is to ask them to liken it to a two-year-old in intelligence, or maybe even a one-year-old. Except that this one-year-old has, let's say, every medical record in a tristate area available to it, and can sift through it and find consistent cases and conditions and give you back an answer. Or it can understand every stock fluctuation for a particular stock or industry instantaneously, and give you some thoughtful ideas about that. It still, on other bases, may be a complete idiot. It can't tie its own shoes. It still wets the bed. It's still a very simple system.

And so, I think that helps me, helps others, sort of get away from the idea of talking about AI in general terms. Certainly, one day, we'll get to general AI. I expect we will, but right now to talk about that is incredibly distracting from some of the real practical things that are happening in AI.

Well, help me understand a distinction you're making. You explicitly said the program that guides my car to where I'm going—routing—isn't artificial intelligence, even though it knows the context of where I am, and it might have a real-time traffic feed and all of that. And yet, presumably, you think something like IBM Watson, which is able to go through a bunch of cancer journals and recommend treatments for different types of cancer, is a form of artificial intelligence.

Assuming that that is the case, what's the essential difference between what's going on in those two cases?

I think it's just the level of overall complexity, and the ability to apply those subsystems to other problems. You know, a mapping system is maybe algorithmically-rich, but it's really just applied to one problem.

Now, of course, if you used Watson to apply to a mapping problem, then we might call that AI. I think it gets academic, but I'd say the simple answer to your question is: It's a level of richness and sophistication, and the complexity of the data sets and the models we're bringing to the problem.

You used the phrase "it understands something over time" about the artificial intelligence. Is that useful? Do you actually think the computer "understands" anything?

Oh, I know. We use that—sorry—and we're going to use that language because it's readily at hand, and it's a frame of human understanding. But, no, of course, it doesn't understand it. It's just able to prepare a set of variables that it can apply further in its course of "thinking." But, of course, thinking is processing in this case.

So, no, it doesn't understand any more than a bug understands the greater world around it. It just can see in front of it.

We're going to get out of the definitional thing here, but it's really telling to me. Do you think the word "artificial" means it's like artificial turf? Artificial turf isn't really grass; it just looks like it. Likewise, AI isn't really intelligent; it just looks like it.

Yeah, this has been an interesting line of questioning, and I'm probably terrible at answering this... But I think it's fun to step outside of this technical boundary, start from a philosophical angle the other way, and break down the notion of intelligence, given the choice of the phrase "artificial intelligence." I do believe very much that human intelligence—while on a great many orders more complex—is not fundamentally different from the basic processing systems we're discussing.

In that sense, yes, the term is perfectly appropriate. Yet, on a conversational basis, it's very distracting to talk about it that way. Actually, in our studio and with my current active client in this space, we really talk about it as a cognitive system. And that, I know in a lot of ways, is just wordsmithing. But it helps break away from the burdensome history of the term "artificial intelligence" and the greater philosophical demands put on the term.

So for us, a cognitive system has some of the basic tenets of a thinking process, namely: that it's complex in its ability to process information, and it is able to resolve questions over time. Those two are the most interesting factors that make it transcend normal software.

But the idea that human intelligence and machine intelligence... What I think I just heard you say is that they are the same substance, as it were, and that machine intelligence is just one quintillionth as much as a human's right now. Is that what you're saying?

Exactly. In fact, we came across an idea that lends itself to this line of thinking—and, certainly, if you're religious, or if you're a philosopher, it's easy to find this a repulsive notion—called the "bicameral mind."

Of course, yes.

Yes, bicameralism. It's a really interesting idea, just the notion—

—Yes, that we didn't use to be conscious three thousand years ago. It's the notion that one half of the brain spoke to the other, which we perceived as the voice of God. And then, over time, the two merged and we became conscious. And then, we felt that we were lacking something, that the gods no longer spoke to us. Therefore, we created oracles and prayer and all of these ways to try and reclaim that. I guess the people who believe that point to Homer and say that he didn't have introspection and all of that. Just framing it for the listener, but go ahead.

Yes, there's this historical idea of bicameralism, where we heard voices in our head, and those voices we attributed to external forces. It shows how fragile the mind is, first of all, and that's why I find it applicable to this question. It shows how the mind isn't a sort of perfect, immutable structure. It hears itself and might mistake that for something else entirely—out of whole cloth.

And by the way, the reason the topic came up for us was not for this philosophical reason, but because we're seeing a sort of new bicameralism emerge. It's highly-connected to this question of AI, but it's somewhat a digression. But I'll share it anyway.

Today, we're experiencing digital systems that are, in increasingly-sophisticated ways, thinking for us. They're helping us get home. They're helping advise us on who to call right now and what to do right now, and where that thing is I forgot where it was. You know, I can ask Siri or Alexa just about anything, so why remember so many things that I used to have to remember? In any case, I have this sort of external mind now.

Just like, historically, we had this idea that that other voice was not really our own. These digital systems that are extensions of us—like Facebook, they have deep properties that we helped to imbue them with about us—we think of them as very external forces right now. They are Facebook. They are Siri. Yet, increasingly, we depend on them in the same way that we depend on our own internal conscience or our own internal voices.

Eventually, I think, much like we came to have a unified mind, the digital systems that we depend on—largely, we're talking about these intelligent systems, these artificial intelligence assistants that are emerging around us for everything—will become one with us. And I don't mean that in some sci-fi way. I mean, in the sense that when we think about our identity—"Who am I? How smart am I? What am I best at? What do I know the most of? Am I funny? Am I clever? Am I witty?"—anything like that will be inseparably connected to those digital systems, that we tie to us, that we use on a frequent basis. We'll be unable to judge ourselves in any sort of immutable way of flesh and blood; it will be as this newly-joined cyber-creature.

To me, that again spells out more and more that the idea of our own cognition—our own idea of what does it mean to be intelligent as a human, sort of natural intelligence—isn't that magically different. It is entwined with not only the digital systems we subscribe to, but these digital systems are drawing on the same underlying basis of decision-making and context-forming. Like you said, they're just one-quintillionth the level of sophistication.

If you have a computer, and you put a sensor on it that can detect temperature, and then you program it to play a WAV file of somebody screaming if that sensor ever hits five hundred degrees—then you hold a match to the sensor, it hits five hundred degrees, and the computer starts screaming. The computer can sense the temperature, in the sense that it can measure the temperature.

I guess that is the better way to say it. It can't actually sense it; it can measure the temperature. With a human, you put a match on a human's finger and they scream, there's something different going on. A human is experiencing the world, and human intelligence arguably comes from that, whereas a machine doesn't have any experience of the world. Therefore, it seems to be an entirely different thing.
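The setup Byron describes is, almost literally, a lookup from a number to a canned output. A minimal sketch of it—where the threshold constant and function names are illustrative assumptions, not anything from the show:

```python
SCREAM_THRESHOLD_F = 500  # degrees, per the thought experiment

def react(temp_f: float) -> str:
    """Map a temperature reading to an output.

    The machine "senses" only in the sense of measuring: it turns a
    number into a canned response, with no experience behind it.
    """
    if temp_f >= SCREAM_THRESHOLD_F:
        return "play scream.wav"
    return "silence"

# Holding a match to the sensor:
print(react(72.0))   # room temperature -> "silence"
print(react(500.0))  # match applied    -> "play scream.wav"
```

The point of spelling it out is how little is there: one comparison stands in for everything a human nervous system does before a scream, which is exactly the "dirty shortcut" Mark calls out next.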

But you just described a dirty shortcut to all of the underlying context and calculations that go on in the human mind before that scream. You just plugged the scream right into the sensor.

If we try and break down the human system, there was an initial sensor—the skin—and there was the initial output design—the scream. There were many, many more computations and while, yes, the two external net results were the same between the two systems, you just obfuscated or ignored all of the other things that might cause a difference.

For example, am I drunk, and I put my hand over the flame and I don't notice in time? Or, am I masochistic, and I'm doing it just to prove I can; so I'm running a deep calculation in my mind to hold onto that scream. And so, then I've got other sensors that start going off, or other external signals like sweat and grimacing and on and on and on.

In a lot of ways, you're still talking about a computational input-output scenario, but there are just so many more variables. We start to dismiss that it is yet still the same kind of computational structure, and we think of it as more magical or something else, and I don't think so. I think it is just a massively more complex computer. And, I think, when you look at some of the things DARPA is doing, they're starting to uncover that.

Here's an interesting example. I went to a DARPA workshop around, basically, analog-to-digital I/O—by which they really meant: how do we build computers that can plug into the human body? One of the things they showed off was some early lab work with sensors embedded in the speech center of the brain, where they asked people to say words out loud. They said, "Hello!" And they got a set of neurons firing off against the statement, "Hello."

But then, they asked the person to think of the word "hello" using their internal voice. Lo and behold, the signals were very similar. In other words, they could read your mind. You could, using your internal voice, think "hello" and give the computer the same input. We were able to decipher what the human mind was doing through some sensors.

Now, it's early, very rough, very sort of brute force, and there's a whole other subject about how much we'll ever really be able to wire up to the mind. Simply because, at the end of the day, it's a three-dimensional structure and, if you have to put leads on it, there's no way you can wire into it effectively. You end up destroying the thing you're trying to read. But, in these simple tests, it sort of proved how much the human brain itself was even usable as almost like a USB port. It's fascinating.

I'm still not drawing the same analogy. I'm emphatically not saying something, in your word, "magical" is going on. I think what I'm trying to capture is the sense that the person experiences something. The person has an experience. There is a "self" that experiences the world. Let's just say "consciousness," for that matter. It is an aware self that exists, that has these experiences with the world.

But isn't consciousness some form of playback?

Let me set this up for you, and maybe we can go from there. There's a classic thought experiment about this by Frank Jackson, often called "Mary's Room." The setup is that there's this person named Mary who knows everything about color—like, everything—God-like knowledge of color: everything about photons, everything about the cones in your eyes, every single thing there is to know about color.

The setup is that she, however, has lived her whole life in a room, and has only ever seen black and white. One day, she goes outside. She sees red for the first time. The question is, did she learn something new? In other words, is experiencing something different than knowing something? I think most people would say, "Yes, she'd never experienced color, and therefore she learned this new thing." A machine cannot experience the world, and therefore one would argue perhaps its intelligence is not anything like ours.

Yes, it's a fascinating example. But wouldn't seeing the red for the first time be essentially the first time you've encoded that? So, same thing with a computer, it hasn't seen red, so therefore it hasn't encoded it. It's not part of its data set. It encodes it for the first time, and it has to place that in context to the rest of its knowledge system. I don't know how we couldn't still codify it in the way that I'm describing.

Well, let's go on to consciousness.

This is great! It's a nice philosophical track you're running.

Well, the reason I do it is that it isn't supposed to be like senior year of college, up late at night with your friends. My thesis is that all of this stuff that people have talked about—you know, angels dancing on pinheads—for all these thousands of years: we're all about to know. It all matters. We're about to build systems, and if a system one day says, "I feel pain," does it or doesn't it?

I'll try and blow up even the whole presumption. I don't think it matters.

Well, I think it matters to it.

Well, I'm going to argue that we're unlikely to arrive at a machine where we would either ever hear it say, "I feel pain," or care if we did. Because, if it can arrive at that level of sophistication, it will likely have surpassed us in its utilization and its role, and therefore won't be offering those human-like analogies. It will be offering other kinds of fault information, other kinds of sensor alerts that won't be familiar to us as flesh objects.

Fear, pain—those are things that are very clear illustrations of, you might say, fault conditions a human encounters. Fear is a prediction of bodily harm. Pain is the actual reflection of bodily harm. But the body of an AI system either doesn't exist in a meaningful way, or it's just not going to be the way we're interfacing with it. In other words, there's someone else who's concerned about the uptime of the machine, and we, interfacing with that AI system, will never encounter those factors. So we won't encounter these human-like moments of reflection with them. Instead, we'll encounter its impression of us.

To me, it's much more interesting to think about how they will understand us, and what's dangerous or enlightening about that. Rather than these moments that are very human-like, the interesting idea is that it's superhuman... Where, let's say, it's talking to a doctor, and it knows the records of every single human being in the United States and, therefore, can come out with conclusions about someone's knee pain—your pain, in this example—where the doctor has no understanding of how it arrived at that conclusion.

Unless it is, of course, a very familiar conclusion. But that's going to be boring, and that's not going to be a moment we're going to reflect on; instead it's just going to reaffirm our own intelligence. But there are going to be these other moments where it comes out with something we never expected, or we thought was absolutely wrong.

Think politics. "Here's the best tax structure for the United States." You know, politics is all about all kinds of decisions that are abhorrent to people. But, if a computer comes out with something that's very non-intuitive, yet is influenced by a one-million-x level of background calculation—you know, something humans just could not do—we won't know how to deal with that. That, to me, is the disconnect, that human-to-AI reflection that's more interesting than what is their pain like versus our pain.

Does that make sense? I know that was kind of a huge digression.

No, I'm happy to engage it because you seem to be saying that we don't really have to worry about the issue of machine pain until we get an AGI, and we're not going to have an AGI for so long.

Even an AGI.

There are those who argue that the Internet may be conscious already. If consciousness is an emergent property—it comes out of complexity—then there could be latent consciousness already in machines.

There could be, but that's like, to me, the way I think about the question of God. It's silly to think of God as just a greater human-like thing. If there were a God, it wouldn't be thinking the way we think. And so, the question of, "Is he mad at me for doing this?" is a silly question. The same goes for the idea of, "Is the Internet conscious?"

It may be, in fact—in some definable way—conscious, but beyond the philosophical question, it's not that important.

Again, these questions of a general AI thinking along our lines, I don't think is as important as, "How will they understand us, and how will we interface with them?" Because that's the scary part: They will be a million times more intelligent than us on particular topics, yet maybe dangerously ignorant on adjacent topics, and that's what keeps me up.

I would love to discuss that next. I will just say that, up until the ’90s in the United States, a veterinarian's standard of care, I am told, was to operate on animals without anesthesia, because the thesis was they couldn't feel pain. Open-heart surgery was done on babies in the United States up until the ’90s with no anesthesia, because the thinking was their brains weren't developed enough to feel pain.

I think it behooves us to reflect on those things, to say perhaps we weren't thinking all that through at the time, that those were easy questions to avoid out of convenience. Look, I don't know if plants are conscious; I have no idea. I'm just asking the question, "How would you know?" In the end, all of these are really questions about us, right?

In the end, the question that all of this reveals is, "Are we machines?" And, if we're machines that experience the world, if that's all we are, then are the machines that we make experiencing the world? That's the question I'm trying to wrap my head around. I don't know that it's premature, as you were saying it is.

Because, if I'm hearing you correctly, you're saying that by the time it matters, the vocabulary will have changed so much, and the world would be so different—and it would be so different that these questions are going to seem childishly naïve and simple and provincial.

Maybe not childish. I don't really mean that. They just won't be, and I don't think they should be, our primary concerns. To me, the idea of how do we interface with a growing set of machines that are smarter than us? How does society—where you-to-me or me-to-my-neighbor interface today on fairly normalized terms, you know, something that took thousands of years to break through to a more democratic, fair society—how do we continue to interface when we may, in the future, have asymmetrical access to these super machines? Machines that not just help us get to work a little quicker than the next guy using Waze, but have a million times more intelligence, or a million times more financial wit as an investor than the next guy.

How do we deal with normalization of intelligence? When I make a decision and you make a decision, society benefits from the discord. Invention, and fashion, and greater advancements in society happen from those disagreements because, ideally, the better ideas break through. The bad ideas are tested and so forth. But, when we grow to be dependent on these systems and we all start to use the same system, you start to imbue society with the sort of same line of thinking, and there's a friction to breaking free from that which is super interesting.

Let's just take driving to work, since we're using that example: The friction to drive your own path, versus what the map is telling you to do in a new city, is pretty high. If you were to take that kind of quantification and move that to everything you eat, the jobs you take, the people you date, the friends you associate with, and just about every little thing—you know, Amazon is trying to help you get dressed in the morning, "Does this look good on me?"—it's fascinating.

It not only grows a dependence on a set of proprietors, you know, the people who are behind these systems, but a dependence on each other in decisions that might normalize in a way we don't want, and that isn't good for society. That's, to me, the truly exciting space because, again, these questions end up being about us when we're imbued with AI—as opposed to AI itself, and will it feel pain. I guess I'm much less concerned about that guy versus the humanity behind it.

Fair enough. Let's do this: Let's chat about jobs in the near future, because I think that'll set up the context for this conversation which you're talking about. Which is, when one segment of society can make better decisions than the other, and those better decisions compound, how do you deal with that?

Let's start with just the immediate future. There are three views about how automation and robots and AI are going to affect the job market, just to set this up. The first is that there's going to be a permanent group of people who are unprepared to add economic value in the world of tomorrow, and we're going to be in this permanent Great Depression, where some sizeable number of people can't get jobs or can't earn a living wage.

And then, there's a second one that says, "Oh, no, no! You don't understand, it's worse than that—every single person on the planet, everybody, every job in the world can be replaced. Don't think you're something special. Machines are just going to zoom right past you, too."

And then, there's a third one that says, "No, no, no, this is all ridiculous. Every major technological event since the Industrial Revolution—even ones arguably as big as AI, like electricity and mechanical power—all they have done is boost the productivity of humans. Which means it boosts everybody's income. We all get wealthier, and everybody is better off, and that's the entire story of why you live a better life than your great-great-grandparents did, but you don't work nearly as hard as they did."

Which of those three camps, or a fourth one, do you fall into? How do you see that all shaping out?

All of the above, and I don't mean to chicken out, but just very asymmetrically.

What I believe is that the net product will ideally be a better society, if we don't blow ourselves up in the process. So, with that caveat, I think we are headed toward a much more ideal future. However, I think, in the short term, we're in for a really ugly shakeup where AI will displace a great amount of the population. A great deal of the population is not prepared, and even some of that population is not capable of moving up past a manual labor world. The pessimist in me says, there aren't that many creative jobs, and the most suspect, immediately replaceable jobs will be manual labor.

Hold on. I want to challenge that point. I don't know that that's demonstrable at all. Even if they make a robot tomorrow that can mow a yard, and everybody who mows a yard is out of work, they didn't make a robot that can plant a grape arbor. Even when they make a grape arbor robot, they didn't make a robot that can plant historically-accurate gardens.

You know, my plumber and my electrician, they have to be so dynamic that they come out and they have to figure out, "Hmm... What do I do with this?" and all of that. I don't see a robot painting a curb. I'm looking out my window right now and there are like four hundred things that need doing out there, and I don't know if a robot can do any of them.

Yeah, okay, fair enough. I was going to get to this. I think the actual twist to the story is this: The presumption is, yes, robotics with AI could replace everything. But, as you started to suggest, the trick—and Uber, to me, is the leading example of this—is that the introduction of AI, or of intelligent software, because I don't think you need the full suite of AI to get there, usually means that we end up working for machines from the middle to the bottom of the job structure in society.

When I look at Uber, if you step back from it, humans are basically the last vestigial robot in the chain. They're being told by a piece of software where to drive. The money is being taken, and all the commercial exchange is handled, by the software. The human is just the cheapest technical means of driving the machine around. And I think, when we look at a lot of labor—all of your examples, the plumber—software will increasingly take out the creative factors in those businesses. But for the manual part of it, rather than devising a humanoid robot to send in to do your plumbing, the robot will be humanity.

A trip to Japan can show you what it looks like when you have this really large population that is, in essence, sort of overemployed. In India, I was visiting a client, and there was somebody opening the door to the building, there was somebody literally there opening the door to the bathroom, and there was somebody there to hand me a little towel in the bathroom. It felt really weird, and it was a symbol of what happens—and I'm sort of getting off-topic here—when the cost of labor goes down. And technology, in the case of Uber, is fantastic for pushing that cost of labor down.

I don't know if that would be my interpretation of it.

The manual labor picture, I absolutely believe that; but I think there's some sunlight in that process, which is a lot of the jobs today that have been sort of whittled down to "just get it done," things like a plumber, will become more artisanal jobs. We will hire people to do more interesting versions of it.

I think, humanity, in the greater sense of things, has a real knack for taking something that normalizes and almost always blowing it up, either for very bad reasons or good reasons. It just can't help itself to take anything that's stabilized and upset it. You know, you look at the way governments work.

I think the idea of the world of Etsy-based makers or creative technicians will emerge. I think that will help, but I think that still the greater forces are many, many more people performing very robotic jobs.

It would seem just the opposite, right? Like, once you can automate those jobs, you don't, actually.

I guess the analogy people always go to is farming, right? It used to be ninety percent of people farmed, now it's two percent in this country. If you look at that from one angle, you say, "Well, what are all those people going to do? They can't go into factories and learn how to add value."

They did. They went into factories.

Right, and then, they figured out, "Every time you automate something, you lower the cost of it." Who would have ever guessed?

They became marketers and middle management.

Right, 1995, somebody says, "We're going to use a common protocol, hypertext protocol, and we're going to use Ethernet and TCP/IP, and we're going to connect a whole bunch of computers." Who would have ever said, "Okay, that's going to create trillions and trillions and trillions and trillions of dollars of wealth. You're going to make Google. You're going to make eBay. You're going to make Etsy. You're going to make Amazon. You're going to make Facebook." Nobody, right?

They created that much wealth, but they haven't distributed it, nor have they employed the same amount of labor that their historical counterparts did. In each of these cases, it has required less and less labor.

I definitely believe in the idea of the overall value in the economy and the overall comfort level available to society, but society's ability to distribute that in a way that's fair doesn't have a great track record in the twentieth century.

So, you're arguing that, in the twentieth century, the average standard of living didn't go up?

It did, but the delta between the bottom and the top also got worse.

Well, nobody argues that technology hasn't made it easier to make a billion dollars, at least for some people, not for me. But, that aside, the question is, "Has the median income, the median standard of living of somebody in 1900, 1950, to 2000 gone up?" I mean, that's a straight line.

Of course.

And so, what is it in your mind that is different about 2000 to 2050?

I think, if you look at those lines, the baseline of what is the poorest person living like and what is the wealthiest person living like are no longer following each other.

There's a great photo array showing what the poorest people in Africa and the poorest in the United States live like. Like, where do they sleep? What does a median income look like? It's interesting in that, from the median income upwards, things have gone up in a lot of places in the world. But it's also interesting how poor the poor remain in the United States. That delta is what interests me, and the fact that that line for the lowest income has stopped moving up.

I think looking back gives us some hope, but I don't think it gives us automatic confidence. I don't think it should. I think we should take a warning from the level of income inequality that technology is driving. I don't think it's fair to just assume, "Well, it worked out in the past so it should work out again," because it doesn't seem to be right now. There seems to be some very accelerating forces for those who have access to technology versus those who don't.

With every technology—you said it right—when electricity came out, we thought, "This is going to be different. This is something to be concerned about." And, yes, I may be one of those voices, and I hope to be wrong, who's saying, "With AI, I think this is going to be different and we need to be very concerned about it."

Well, let's assume, and put a pin in everything you're saying, and say it's all absolutely right, and it's all going to unfold exactly that way. With that context, let's get to that conversation we started to have which is, "What do you do about it?"

Yeah, that's really tough.

The universal income seems like just a path to inflation. I don't know. I'm not an economist. For my role, as a designer in the world, we keep looking for ways to try and express AI in the most human moments in life. How to, for example, give us better control over the homes around us.

But I feel, in a lot of ways, sometimes, as a designer, at this moment in time, a little bit like what—I don't want to overstate this but—a little bit like the folks designing the nuclear bomb may have felt like. They were advancing technology in the interest of technology, and it was sort of a passionate expression at the time. But, at the same time, they could tell, "This is maybe not going to turn out right."

You know, that's sort of an overstated comparison, but the idea here is that we in software and design are helping advance the cause for a lot of products that ostensibly have great purposes for everybody in society but a lot of them—let's say, designing a better experience for Uber—don't seem to be netting out the way they should.

Let's take the work for CognitiveScale. To me, that's the most relevant example. We're working with this company that makes AI systems, and it helps people like doctors or financial analysts think. It helps them answer questions. It helps them look ahead or look at large, large data sets and deliver to them things that they might not have realized or been able to find themselves—essentially the needle in the haystack.

For each of those customers that employs those systems, they will be potentially thousands of times more powerful than the next guy. That's a huge tipping force. Could it be that all of them adopt it uniformly and the world of finance or the world of medical care all gets better at once? That'd be awesome.

But, at the same time, we're dealing with an extremely competitive and a very non-democratic business environment. And so, I don't see it necessarily happening that way. So that's the concern side of the argument. We're giving a select few these really immense superpowers, and what are the ramifications of that?

Of course, I don't think, practically, the financial analyst or the doctor is anything in particular to worry about. But, if we imagine extending this out to average consumers, these things aren't going to be free. It's not like the US government is going to distribute these tools. It's going to be something that people charge for; that people with better Internet access, better financial capabilities have access to, and it creates further imbalance. That, to me, is the downside to the sort of magic that we're creating day-to-day.

And so, what do you do with that? Do you just kind of stew on it and then just file forward?

I don't know yet.

Like, feel appropriately guilty about it?

Yeah, it's an awesome question, Byron. I don't have a great answer. I wish I could just declare the answer on this podcast, but this is early enough that any answer would be premature. Because I may be totally wrong, and you may be right, and I should just simply weigh in on a better future. I feel much like a lawyer who is faced with defending someone. I suspect this guy might be guilty, but my job as a lawyer is to make sure he goes free. My job as a designer is to create better human experiences, even if some of them might not drive a net society improvement.

If something looks like a gun and its purpose is ninety-nine percent to deliver harm, then that's pretty easy for a designer to avoid. But the topic of AI has brought us closer to this question of, "Is design really driving the shape of society?" than ever.

For years, we've designed things like toasters or music players, and they had a natural place in society. You weren't really reshaping—you could make the toaster more fun, you could make the music player easier to use, but they weren't really that tied to what it meant to be human. But to design a decision system really does start to get into the heart of what it means to be somebody. And I'm not sure, as designers, we've been introduced to the toolset to think through that—you know, the social ramifications of the problem.

Right. But is it really all that different? You had a time in history where some people could read and some people couldn't, and the people who could read, they were financially a lot better off, right?


And then, you had people who had education and people who didn't, and the people with education were better off.

Yes, true. You're weighing into exactly the case I'm describing. In both of those cases, society was crippled until they decided to offer education broadly, until books were cheaply printable.

Well, you could say "crippled," but it was on a path. And then, eventually, you got computers...

When I think about before the printing press, the people who could not read could be told just about anything by the people who had very few books.

But the thing is, technology in the past—again—always lowers the price. These things expand over time. More people have access to them. More people go to school. More people are literate. More people have computers now. More people have access to the Internet. All of these things just show that it eventually works.

Eventually, yes, and I go back to my statement: I believe this eventually nets out to a beautiful society. But we're a much more destructively-capable society today than we ever were. And all of those paths you talk about—the path towards unified education, path towards even introducing books—involved lots of wars as the sort of asymmetry of people moving into various stages of modernity occurred. But there was only so much damage that they could do to each other. Today, the level of damage we can do to each other may surmount the complexity of getting through those similar stages.

I'm saying that the near term is going to be painful, but the net opportunity in society is fantastic. I am on the optimistic side with you.

I don't know. You say that, but you're talking about building a nuclear weapon—that's what you feel like you're doing.


This is a debate that happened in this country a long time ago when people saw the Industrial Revolution coming. There used to be this idea that once you learned to read, you didn't need school anymore. And there was a very vocal debate in the United States about "post-literacy education," which I think is a really fun phrase. And the United States, because it deliberately decided everybody needs more education, became the first country in the world to guarantee high school education to everybody.

I want to switch gears a little bit. You wrote a really fascinating article called, "AI's Biggest Danger Is So Subtle, You Might Not Even Notice It." The thesis of it, in a sentence—and correct me if I'm wrong—is that, behind all artificial intelligence is a programmer, and behind every programmer is a set of biases—explicit or implicit, things they know or things they don't—and those get coded into these systems and you don't even notice it.

That's a good summary. There's been a lot written about this since. I may have been a little early on it, because I thought the reaction was interesting.

Working with CognitiveScale, one of the things we're doing that's most interesting... And they're facing this issue of, every time they build these systems, they're one-offs. Most AI systems tend to be one-offs, and it's very difficult to tune the intelligence.

In other words, it's a dark art. We don't know exactly how these machines are coming out with their conclusions. We're pouring so much complexity into these agents, and the models, and the processors that are transforming information, that it's hard to predict how one set of questions or one set of inputs might come out of it at the end of that process.

You know, we're just testing them against sample cases. But sample cases don't give you a total assurance of how the system is working. It's like your match example: Ninety-nine percent of the time, somebody screams; so you just assume that it's wired right. But one percent of the time, somebody is really happy about that fire, and the machine breaks. Sorry, I'm digressing...

To me, what's really interesting is that there are a lot of commercial interests who would like—let's say, my drive home, since we've used that example—who would like me to drive a certain way, because they want me to go by their restaurant or by their business. And it only takes the slightest bit of a twist to that data to slowly mold a population—whether it be a driving population or an eating population or a buying population—to behave a certain way.

Normally, these things are externalities so they are easier to legislate. Like, how much signage are you allowed to put up in a city? What kinds of things are you allowed to say in an advertisement? Those are things attempting to shape our minds, right?

But, when you have direct decision systems—if you go back to what I was describing where we're becoming more and more like the bicameral mind—inseparably associated with the digital systems that advise us, those things are now, you might say, black boxes in our minds—trying to get us to eat at McDonald's, or drive that certain way, or invest in a certain stock. Much more difficult to legislate, much more difficult to police, to even discover that it's happening to us.

That's the concern, and there's very little in the way of early rule sets or policy around how to protect against that. And we're building these systems at an incredible rate.

You're, of course, familiar with European efforts to basically say: "You have the right to know why that artificial intelligence made the suggestion about something to you, if it affects your life."


And so, a) is that possible; b) is that good; and c) what is going to be the ultimate outcome of that debate?

Oh, I don't know. I applaud Europe's attempt to do this. It's a bit ham-fisted because the delta between these technical systems and the politics of legislating it is too great. They just don't know what they're dealing with, so they tend to do it in kind of brute force ways. The companies are still young enough that they're not on the wrong side of the argument, they're just trying to get them out there in the quickest, most brute force way, and are less concerned about the lasting effects.

I think, as designers and developers creating these systems, there's not enough in the stack, you might say. It's actually one of the things we're trying to do, and one of the goals in the UI work with CognitiveScale—and sorry, I'm not really trying to pitch you hard on CognitiveScale, I'm just saying this is where our direct experience lies—is we have these problems that are right in line with this conversation.

For example, the system comes out with an insight: It tells a doctor, "Hey, there are these eight patients you should contact, because I believe they're going to have an asthma attack." Of course, the doctor looks and goes, "Why do you think that? It doesn't make any sense to me as a human. I can't see why." So, we have to unfold the argument quickly and in human terms, even though the computation that might have arrived at that conclusion is much denser than most humans can comprehend.

It involves a larger data set, or a larger computed set of circumstances than is easily told. And so, the design of those problems is really tough and it's just very, very early in that process. Again, part of it is because there's not enough in the stack, you might say.

You know what I mean by the stack? In early operating systems, the underlying firmware that talked to the hardware, the data management system, and the applications were entwined as a single entity. Of course, we've eventually built these as many, many independent elements stacked on top of each other, which allow programmers to edit one of those layers without destroying the function of the whole system.

Today, in AI, it's a similar problem. We have too much of a combined stack. And so auditing how the system is thinking—where did it find these conclusions, and what data is it drawing on—is really tough, especially if you're not a programmer. It's still the domain of a lot of specialists, a lot of scientists. If you're just a financial company or a hospital chain trying to use these systems, you're an expert in your field—like healthcare—but not an expert in AI, so it's really tough to employ these systems and trust them, and understand, "Why did you come out with that conclusion?"

You also wrote a piece called "Computers Will Prove to be More Creative Than Most Humans." What's your thesis there? And before you answer that, what is "creativity"?

Let's start with a simple theory—we'll call it the Mick Jagger theory. Mick Jagger is a rock musician, you know, The Rolling Stones. If you look at Mick, he's not that pretty. He doesn't, in any technical sense, sing that well, and he dances really kind of funny. But something about how he assembles all that has endeared us to him, and it's this force of will that makes him, in design terms, sort of a winning design. A lot of rock and roll represents this. Rock and roll turns out to be a great example of that kind of decision factor in humanity.

You learn this in product design. You could be very technical about what your audience wants and still utterly fail. It is quite often a very singular human expression, or some set of accidental factors, that turns out a magical design.

To that end, what I was trying to propose was that the computer—through its willingness to try things, non-obvious things, and its ability to sift through more ideas than we could—may one day lead to something we really define as creativity.

Creativity is sort of the nexus between what we think we need, and what we didn't expect to see. We tend to register that as, "Oh, that was creative!" It's both suitable—I can imagine using that or enjoying that—yet I didn't think that was going to be the answer.

Given that nexus, I think computing—especially this intelligent processing of possibilities—is going to be extremely powerful. The reason I wanted to introduce the topic is not really to threaten designers, even though that sort of ended up being the quotable part of it, but to suggest that the safe zone a lot of people were talking about at the time I wrote that—that creative fields of humanity are safe from AI, and that it's really just people doing manual labor who are at risk—is wrong. There are lots and lots of creative tasks that an AI system will perform very well. It may happen first through symbiosis, putting a designer and a computer together, which is happening to a degree now.

In fact, we worked with a London 3D software company that helps people like shoemakers go through a range of shoe possibilities, and one of the things the software is really good at is arraying out all of these possibilities of texture and materials and color. And then the human, of course, has a really easy job of just picking: "Oh, I think that looks cool." It helps them in the process. You might say it's making them more creative.

Well, eventually, the computer could have enough information to make some of those choices on its own. Maybe not in that exact circumstance, because fashion is sort of the last bastion of elusive human behavior. Fashion is often nonsensical, but it works explicitly because of that. But in so many other fields, the sort of near-styles of creativity, I think that's very possible.

You know, you started at the very beginning of this talk, to speak about, "We will probably get an AGI sometime in the distant future." I'm really intrigued by the fact that recently, we had Elon Musk saying, "We're going to get one really soon, or relatively soon, and it's going to be bad for us." Mark Cuban kind of threw in his lot, "Yeah, it's going to be kind of bad for us." And then, you get Zuckerberg who says, "No, no, no, it's going to be way far out and it's going to be good for us." You get Andrew Ng who says, "Worrying about that kind of stuff is like worrying about overpopulation on Mars."

When you really distill it all down, you get the range of people who have some arguable case that they have a relevant opinion, they predict sometime between five and five hundred years.

My questions to you: first, why do you think that is? In any other field of life... If you ask ten astronauts, "When are we going to get to Mars?" It's going to be twenty to fifty years, you know. It's not going to be five to five hundred. And so, why do you think that these people are so far apart in what they think? And second, where are you in that?

Where am I? Yeah, okay.

First of all, I think most of those answers are more telling of the person than the topic. You know, AI is very political right now. And all of those folks aren't scientists; as an industry, they're very vested in the idea.

For example, Mark's answer was, I think, driven by his desire to sound like and be an optimistic voice in tech. Facebook profits from technology optimism. They need people to feel safe around tech for Facebook to continue to grow. Whereas, Elon Musk is much more about infrastructure. And so, for him, anything he talks about in tech needs to sound big and amazing. Trips to Mars and talking about AI in that way makes sense.

I still would go back to... I think the idea of talking about general AI, the kind that the average person would recognize, is a silly conversation. It's not likely to happen for a hundred years in the way that you would maybe think, to sit down and have a beer with it. Maybe that will never happen. I think we'll get to the point where AI is much more life-changing—dominant in our lives—way, way before then.

So, the question will become moot, like asking about overpopulation on Mars. I don't know if this answer is going to be crisp enough to net out in a one-line statement, but I agree with the guy that it's not coming anytime soon. But I do say, very strongly—and I'm seeing it directly with the clients we're working with—that AI that is dramatically impactful to the shape of society—in how individuals think of themselves, how they interact with other individuals, how they compete in business and in social means—is going to dramatically reshape, and potentially upend society in not-too-long of the future.

I say that as in, it's happening now, so it's already started. And you might say, for the next twenty years, it will be pretty dramatic, and increasingly dramatic. But I don't know if there's a sort of gating moment. You know, all of these questions seem to sound like you're turning a timer on some toast. This isn't that. The toast doesn't just pop up out of the toaster and say, "We've got AI now."

We have it today. We'll have more of it tomorrow, and we'll have more of it the next day. You can already see society reshaping to competitive capabilities around intelligent systems. I think it's here. Maybe that's my sort of net answer.

I'll put you down for seventeen years. My final question is this. You seem to obviously be a guy that thinks about this stuff a lot, and you've made a few references to science fiction. I'm curious—if you read, watch, consume science fiction—is there some view of the world that you think, "Yeah, that could happen. That is something that they have right, I think." Is there anything that's influenced your thinking from that genre?

Honestly, when I read science fiction, it tends to be a little more... I thought some of the thinking going on in the '50s was really interesting, like Philip K. Dick's work. Some of it is much more magical, or deals with the apocalyptic side effects. But, connected to that, the vision in Blade Runner—well, you know, skip the android question and skip the perpetual rain; I'm not necessarily going to worry about those things—the part that is really interesting is that society is built upon itself.

To me, that's the most pertinent part of trying to envision the future. As a designer, you know, fashion piles up onto itself and refers to itself. Architecture does the same thing, not only in terms of style but, literally, we build onto our past, we upgrade buildings, we renovate them. In a sense, social behavior does the same.

To me, my influence or closest reference would be looking at those aspects of what we see in Blade Runner, and we'll see a lot of society grappling with very historical attitudes about who we are as individuals, how we should associate with each other, how the economy is supposed to work—yet trying to retrofit into that some very, very dynamic new realities.

And I don't think they're like better toasters, and that's the part where I think maybe you and I disagree strongly. I don't think, in these historical analogies, we came up with a better tractor. I think these new layers of technology reshape us in a way that is not that comparable to the past. Rather than better tractors, we have greater minds and greater reach with our minds. That part, I think, is the most interesting.

To close on that: Four hundred years ago, William Shakespeare lived and wrote, and he wrote these plays that we still watch today, and we still read today. They still make movies with Leonardo DiCaprio in them. Four hundred years later, in a world that has changed as dramatically as you could imagine, people still know an Iago and a Lady Macbeth, and still have love triangles and still have family rivalries, and still have all of that stuff.

You watch Shakespeare because you still recognize those people; not because it's an alien world, but it's like, "I get that." So is your thesis that, in fifty years, Shakespeare will no longer make sense to us?

Oh, no.

So, we really aren't going to change, are we?

That would be history building on itself. We will change in some layers. So we may tell a story about Romeo and Juliet where they get to know each other only in their minds, and have yet to ever meet. But the story fundamentals are the same. The human passions are the same. I love the latest Romeo and Juliet from, what's his name, the Australian guy... Anyway, he did that really super hip interpretation of Romeo and Juliet and he changed a whole lot about it. It was real street culture-focused, but it was still the same story underlying it.

So, yeah, I do believe that part: Human passions—finding love, finding each other, understanding yourself, understanding the world around you, being important to the world, having some sense of relevance—those things are persistent. They're, sort of, million-year-old truths about us. But how that happens I do think is critically fundamental. The struggle to identify yourself may today involve a lot of subsystems that aren't flesh. The means of understanding the world may come with an argument about access to a layer of technology.

Well, let's leave it there. It was a fascinating hour, Mark. I hope I can entice you to come back, and I feel like there's still a whole lot of ground we didn't cover.

Oh, yeah, there is. It's fun talking about the philosophical stuff, and I'm glad to disagree with you on some of those things. It makes it fun.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.