Episode 55: A Conversation with Rob High

In this episode, Byron and Rob talk about IBM Watson and the history and future of AI.


Guest

Rob High is an IBM fellow, VP and Chief Technical Officer at IBM Watson.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. August 12th, 1981. That was the day IBM released the IBM PC, and who could have imagined what that would lead to? Who would’ve ever thought, from that vantage point, of our world today? Who could’ve imagined that eventually you would have one on every desktop and then they would all be connected? Who would have guessed that through those connections, trillions of dollars of wealth would be created? All the companies, you know, that you see in the news every day, from eBay to Amazon to Google to Baidu to Alibaba, all of them have, in one way or another, as the seed of their genesis, that moment on August 12th, 1981.

Now the interesting thing about that date, August of ‘81, is that’s kind of getting ready to begin the school year, the end of the summer. And it so happens that our guest, Rob High, graduated from UC Santa Cruz in 1981, so he graduated about the same time, just a few months before this PC device was released. And he went and joined up with IBM. And for the last 36 or 37 years, he has been involved in that organization, affecting what they’re doing, watching it all happen, and if you think about it, what a journey that must be. If you ever pay your respects to Elvis Presley and see his tombstone, you’ll see it says, “He became a living legend in his own time.” Now, I’ll be the first to say that’s a little redundant, right? He was either a living legend or a legend in his own time. That being said, if there’s anybody who can be said to be a living legend in his own time, it’s our guest today. It’s Rob High. He is an IBM fellow, he is a VP at IBM, he is the Chief Technical Officer at IBM Watson, and he is with us today. Welcome to the show, Rob!

Rob High: Yeah, thank you very much. I appreciate the references but somehow I think my kids would consider those accolades to be a little, probably, you know, not accurate.

Well, but from a factual standpoint, you joined IBM in 1981 when the PC was brand new.

Yeah - I’ve really been honored with having the opportunity to work on some really interesting problems over the years. And with that honor has come the responsibility to bring value to those problems, to the solutions we have for those problems. And for that, I’ve always been well-recognized. So I do appreciate you bringing that up. In fact, it really takes more than just any one person in this world to make changes meaningful.

Well, so walk me back to that. Don’t worry, this isn’t going to be a stroll down memory lane, but I’m curious. In 1981, IBM was of course immense, as immense as it is now, and the PC had to be a kind of tiny part of that at that moment in time. It was new. When did your personal trajectory intersect with that, or did it ever? Had you always been on the bigger-system side of IBM?

No, actually. It was almost immediate. I don’t know the exact number, but I was probably pretty close to being among the first one hundred or two hundred people that ordered a PC when it was announced. In fact, the first thing I did at IBM was to take the PC into work and show my colleagues what the potential was. I was just doing simple, silly things at the time, but I wanted to make an impression that this really was going to change the way that we were thinking about our roles at work and what technology was going to do to help change our trajectory there. So, no, I actually had the privilege of being there at the very beginning. I won’t say that I had the foresight to recognize its utility, but I certainly appreciated it, and I think that to some extent my own career followed the trajectory of change that has occurred, similar to what PCs did to us back then, in other areas as well: web computing, and service orientation, now cloud computing, and of course cognitive computing.

And so, walk me through that and then let’s jump into Watson. So, walk me through the path you went through as this whole drama of the computer age unfolded around you. Where did you go from point to point to point through that and end up where you are now?

Not long after I began work for IBM, I had an opportunity to take a role in the development of our ATMs, so automated teller machines, in our banking industry development organization. I landed actually working on how to do service and maintenance on these ATMs, writing software for doing dumps on them. But it wasn’t long after that, that I had the chance to influence the thinking around the use of personal computers as the underlying engine for building these ATMs. Up until then, the underlying hardware was all proprietary to the problem of creating an automated machine, a machine that people could interact with that could provide teller functions. We saw a potential for other applications and so we sort of shifted our thinking from being dedicated to ATMs to being more generally applicable to a variety of different self-service terminals, and we built that on PCs. We had x86s at the time, actually 8086s and 80186s, later 80286s, and that progressed forward.

I was on a team that tried to create a multi-threaded version of DOS, so that we could manage the various I/O functions that were being performed in a machine concurrently, whether that was reading your mag stripe or handling display updates or dispensing cash. There’s a lot of parallel processing that was needed for that. Later, that led to work with the OS/2 team, the kernel team, to create a headless version of the multi-threaded operating system, which was later superseded by Windows. And then, from there, I moved into other forms of banking services, including what I consider to be one of the first examples of an underlying middleware that, at the time, we called Application Foundation. That later led to my involvement in distributed object programming, distributed systems, and object programming within those kinds of distributed systems. Which later led to what I think we called the Soft Object Server, which later graduated into the thing that we now call WebSphere.
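To make concrete what managing those I/O functions concurrently involves, here is a minimal modern sketch in Python. The device names and behaviors are hypothetical stand-ins; the actual DOS-era work would have been interrupt-driven and far lower-level.

```python
import threading
import queue

# Each device (hypothetical names) gets its own worker thread, and a
# central loop coordinates them through a queue. This only illustrates
# the concurrency pattern being described, not the original IBM code.

events = queue.Queue()

def card_reader():
    # Pretend to wait for a mag stripe swipe, then report it.
    events.put(("card", "stripe-data"))

def display():
    # Pretend to push an update to the screen.
    events.put(("display", "screen refreshed"))

def cash_dispenser(amount):
    # Pretend to count out bills.
    events.put(("cash", f"dispensed {amount}"))

threads = [
    threading.Thread(target=card_reader),
    threading.Thread(target=display),
    threading.Thread(target=cash_dispenser, args=(100,)),
]
for t in threads:
    t.start()

# The coordinating loop handles whichever device finishes first,
# the property a single-threaded DOS could not provide natively.
for _ in threads:
    device, result = events.get()
    print(device, result)

for t in threads:
    t.join()
```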

I was involved with the creation of the very first distributed component model, which later served as the foundation for J2EE, Java 2 Enterprise Edition, and EJB, Enterprise JavaBeans. The EJB specification was modeled after work that I did early on around a server component model. From there, I led the WebSphere team, and graduated from that to leading our SOA Foundation for IBM. I realized that one of the key things to anything that we do in the area of technology is creating an association to value. Business value, of course, is one of the key things we tend to measure, but if it’s not valuable to people, then doing the technology is really not worthwhile. From that, I developed a premise that rather than technologists attempting to figure out how to adapt the technology to the value of the business, how about if the business actually figured out how to adapt their needs into an expression of requirements on the technology. That’s not necessarily a novel idea, but I wanted to make it very practical… So we started a project that I called BPSO, the Business Partner Support Organization, which I was in the middle of when they came to me to ask me to take over as CTO for Watson.

Wow, what a walk. So, from PC DOS to OS/2 to Java to web servers to Watson. ATMs are a really interesting use case because it was predicted they would destroy jobs, and yet what we have today are more tellers than we had when the ATM came out. The ATM lowered the cost of opening a bank branch, which meant banks opened more branches and needed more tellers. Do you think that’s a dynamic we’re going to see over and over again as artificial intelligence and automation go into different areas? Is it actually going to create demand in new and surprising ways?

That’s actually the trend, I believe, that has occurred with virtually every new technology since we began creating technologies, creating tools. I make the point often that some of the earliest tools, hammers, axes, later shovels, and what became hydraulics, and so on and so forth, all of those have had the same fundamental characteristic, which is they tend to extend and amplify our human strengths. And that process of amplifying the things that we do, while it will change the nature of the work that we do, I mean we’re no longer scratching holes in the ground with our fingers, but we’re able to then do so much more with that, that it opens up all kinds of possibilities. Things that, at the time we created the tools, we probably weren’t thinking about. You mentioned earlier the advent of the PC and how that’s changed our industry. Think back, only just 10 years ago, to the advent of smartphones and what that has done to change the nature of how we operate on a daily basis, as individuals, as human beings. And I do think the same effect is starting to grow here with AI. And while we can already see some of those benefits in terms of offloading some of the mundane and tedious tasks that people perform, so they can open their time to doing more creative things, I don’t think that we know yet the extent to which we’re going to find new uses, new utility, new value for this technology as we go forward.

I’m in total agreement with you on that. The one side of the equation, “Oh my gosh, that job’s been eliminated,” is really obvious up front, and the other, “Oh my gosh, these 10 jobs were created down the line because of that,” is all kind of… not obvious. So you just see the obvious half of the equation. I’m completely on board with you there. That being said, we do have cases where technology has had a dramatic negative effect on overall employment. The number one case touted as “oh my gosh, automation’s awesome” is that for 10,000 years, it took 90% of us to grow our food. Now it’s 2%. There’s no question there are fewer people in the food industry than there were 100 years ago, 1,000 years ago, 10,000 years ago. So that happens. So, in your head, how do you paint the offset to that? How do you say, “Yeah, that happens, but we maintain full employment the whole time”? What do you think is going on there, the dynamic that offsets even areas where automation does unequivocally eliminate sectors?

Yeah, one thing we should keep in mind here when we talk about this is that while what you said is true, that the percentage of the total population working in the food industry is smaller, the population itself has also grown at the same time. So I don’t know where that lands in terms of the actual physical number of people, but that said, we need to be mindful of that. We also have to be open to it. Nothing is really ever as extreme as we tend to paint things. I’m not going to discount the fact that there are some people who won’t be doing what they were doing before. That does happen, and when it happens, it can be very disruptive for those individuals. More often than not, though, what I’m seeing right now in the case of AI is that this won’t replace entire individuals. It doesn’t eliminate what that person does for a company or anything else that they do. It does eliminate some of the tasks that they perform. An example of that would be in a call center. Call center agents today, up until now, have had to answer a wide range of questions that people ask when they get on the phone or come in through a contact channel. Everything from “What are your hours of operation?” all the way through to, “Hey, I’ve got this product I just bought and it’s not working. I need help fixing it.” And if you look at that range of problems that people present, there’s a fairly large percentage of those problems that are really pretty tedious and mundane and don’t bring a whole lot of satisfaction to anybody.

Knowing what your hours of operation are? I don’t know if I need another person there manning the telephone to tell me what time they’re open. I can look that up on the web. I can look that up through an IVR. I can get it through a conversational agent. Not only does that help me, as someone needing an answer to that question, get to that answer quicker, but for the person at the other end, that’s a kind of tedious and mundane question to have to deal with all day long, if that’s the only thing you’re dealing with. Having something else answer those questions for you instead frees you up to now go and work on really interesting problems. If I as a client have a problem, I begin with a fairly simple question that can be answered by a conversational agent; eventually I might get to the more important, salient, and challenging question. If at that time I turn it over and talk to a human, that call center agent not only is going to be able to address that issue in ways that today most AI systems aren’t even capable of, but for them, they get to go home at the end of the day feeling like they did something really interesting, really useful. They helped someone in a really meaningful way.

For the consumer, that means they got their problem solved that much faster. So there’s kind of a win-win all the way around in a lot of these situations that isn’t really about people losing their jobs, it’s about freeing them up and giving them the time and energy to go concentrate on more creative problems, creative issues, exercise their own creativity in ways they simply couldn’t before, because of the amount of time they spent doing the tedious stuff.
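To make the triage pattern Rob describes concrete, here is a minimal sketch in Python. The intents, canned answers, and keyword matching are hypothetical placeholders; a production contact-center agent would use a trained intent classifier rather than substring checks.

```python
# A toy triage agent: answer the mundane, well-covered questions itself,
# and hand everything else to a human, with explicit disclosure of the
# handoff (a point Rob returns to later in this conversation).

FAQ_ANSWERS = {
    "hours": "We're open 9am-5pm, Monday through Friday.",   # hypothetical
    "location": "Our nearest branch is at 123 Main St.",     # hypothetical
}

def handle(utterance: str) -> str:
    text = utterance.lower()
    for intent, answer in FAQ_ANSWERS.items():
        if intent in text:
            return f"[agent] {answer}"
    # Anything beyond the simple, tedious questions escalates to a person.
    return "[agent] Transferring you to a human representative now."

print(handle("What are your hours of operation?"))
print(handle("My new router won't connect and I've tried everything."))
```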

Fair enough. So to change the topic just a little bit, let’s talk about transfer learning. Humans have this great ability to be trained on a sample size of one. I can show you an object you’ve never seen before, some weirdly-shaped object, and I can say, “Find this object in all these photos,” and the object could be partially obscured, it could be underwater, it could be covered with molasses, it could be chewed on by a dog, it could be any number of these things, and we’re like, “There it is! There it is! There it is! There it is!” But machines seem much more rigid and brittle at that. So what are we doing there? What are we good at that we haven’t figured out how to teach machines to be good at yet?

Well there’s a degree of sophistication in our human mind, in the structure of a human mind, the pathways of analysis that we execute. There’s just a degree of complexity and sophistication, beautiful complexity, if you will, that we hardly understand for ourselves, let alone have figured out how to replicate in computing systems. We talk a lot about neural nets in computing systems today, as sort of the underlying technology for AI systems. But those neural networks that we’re emulating in software or sometimes now even in hardware, are really, by comparison, very simplistic, very primitive. And so, a lot of this comes down to not having nearly enough complexity in the computing system, let alone then trying to understand what kinds of complexities are important and valuable. And value is of course another big part of this. Where is the motivation that drives us to want to increase complexity when in doing so, we also have to deal with the consequences of that complexity? So I think that a lot of this is where we are in time; some of it is around our basic level of understanding. And then a great deal of that will be determined over time by the economics of our field.
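For a sense of how simple the software emulation really is, here is a single artificial neuron in Python: a weighted sum passed through a sigmoid. This is a generic textbook illustration, not any particular system's implementation.

```python
import numpy as np

# What "emulating a neural network in software" boils down to at its most
# basic: a neuron is a weighted sum pushed through a nonlinearity. Compare
# this handful of arithmetic operations to a biological neuron's
# electrochemical dynamics, and the gap Rob describes is plain.

def neuron(inputs, weights, bias):
    # Sigmoid activation of the weighted sum.
    return 1.0 / (1.0 + np.exp(-(np.dot(inputs, weights) + bias)))

x = np.array([0.5, -1.2, 3.0])   # incoming signals
w = np.array([0.4, 0.1, -0.6])   # learned connection strengths
print(neuron(x, w, bias=0.2))    # a single scalar "activation"
```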

If you hold general intelligence out there, an AGI is this thing that we kind of intuitively know what it would be like to interact with. We could have a conversation with it, and it could do a range of things, a lot like you can. So I often ask my guests: do you feel like our existing techniques, the ones that’ve given us narrow AI, that’ve given us Watson, that’ve given us these amazing things, are on the way to that? Or is that thing, that AGI, a completely different thing that only really shares the words “artificial intelligence,” and that’s kind of just a linguistic quirk? Does it have nothing to do, really, with the kind of stuff we’re building today? What would you say?

Well, first of all, there’s been a historical debate about what people mean by AI. I think in the early days, in our naivety and hubris, we tended to gravitate more towards this, as you said, intuitive notion that AI ought to be about replicating the richness and breadth of the human mind. The way that von Neumann put it, computers ought to be able to answer as many questions as there are people to ask them. So I think that there was this perception that computers, and AI, ought to lead us to the same kinds of intelligence that we see in human beings. I think since then we’ve come to understand two things. One is that it’s really no longer about replicating the human mind for the sake of replicating the human mind. It’s really about doing things in computers that benefit human beings, so I prefer the term augmented intelligence over artificial intelligence, because I think it comes closer to representing what’s useful and meaningful.

The other thing is, I think we have a challenge in defining what intelligence is. This is another debate that has been going on for decades: what is intelligence? Even if we assume that our own human intelligence could be clearly defined and quantified, what we have established as human intelligence is really the product of eons of evolutionary selection that required us to create a certain way of thinking as a matter of survival. I don’t know that that’s the kind of intelligence we would demand of a system or computer that is intended to help us with practically anything. What we want its intelligence to be is the kind of intelligence that we don’t possess; the kind of intelligence that is in that space beyond where humans tend to focus our attention and the things that we’re really good at. What’s really neat is the form of intelligence that augments that, that amplifies that, that does the things we’re not good at, like assimilating the relationships and the connections in information at massive scale. We’re just not good at that.

Even consider our ability to recognize; the dominant form of AI today is around recognition tasks. A machine can see things at a level of resolution that our own human eyes cannot, that our own human brain cannot fully resolve. So there are places where we need help and we can benefit from that help, where a machine can do certain things that not only are we not capable of doing, but we don’t really have much interest in doing. It’s not about a jobs problem. It’s about how we do our jobs better because we now have this tool available to us to help us do things we couldn’t do before.

Yeah, it’s true. We don’t have a consensus definition of intelligence. We don’t have one of what we mean by artificial. We don’t have a consensus definition of the word life or the word death. The words… seeing, recognizing, understanding. We don’t have a consensus definition of creativity. I mean there are a lot of these things that only exist as nebulous concepts. That being said, it’s interesting to me that science fiction doesn’t just kind of predict the future. In a way, it makes it. There was an X Prize to make a tricorder like Dr. McCoy used. And Uhura wore what was, any way you look at it, a Bluetooth device in her ear. And you see these things up on the screen… When I have people on the show, they often tell me, specifically with Star Trek, that they’re inspired by it. And more than once, people tell me, “I want to build the Enterprise computer,” the one you can just ask anything and it will answer. Majel Barrett voiced the Enterprise computer, but it didn’t really have any personality or anything. Then you go one step beyond that and they want to build Commander Data, which is the Enterprise computer but with “personality.” It’s a hard word, since it’s got the word person in it. So, let me ask it a different way. Can we build the Enterprise computer, a von Neumann machine that can answer any question? And can we build Commander Data today? Could we? Will we? We seem to want to, because we build all these devices that, if I say any of their names, they’re all sitting next to me and they’ll perk up. We have all these devices that we’re building that seem to want to do that. So, are we going to build that?

So, first of all, let me say that I think the task of being able to infer answers to questions based on conducting a survey of literature is very real. I mean, that’s certainly approximately what Watson was doing in the game of Jeopardy… reading hundreds of millions of pages of literature and finding in that literature recorded evidence of answers that could be evaluated and measured against their applicability to the question being asked. So the answer to that clearly is yes. And whether that’s done against literature or against more structured sources, that seems to have a lot of utility. If nothing else, it becomes the next generation of what we used to think of as search. Instead of putting in keywords and getting potential sources of answers, we’re now getting to a point where we basically ask a question and get it answered from facts in the literature. The next phase of progress, I think, is going to be about deductive reasoning, which is deriving answers, inferring answers, where the answer was not previously recorded, [but] rather has to be kind of made up, if you will. Constructed.

Now, I expect the vast majority of that will be centered around forms of deduction that rely on axioms that are relevant to a particular domain. So if I wanted to ask questions about the probability of somebody getting in an accident in their car and having to claim damage, that’s something that we’ve done typically from actuarial tables and other forms of predictive analytics using quantitative data. If we add to that some qualitative aspects of life, whether that’s knowledge about how people’s emotions react to different weather circumstances and stuff like that, then we might be able to do an even better job applying those axioms, building answers to questions like “Is this person going to get in an accident today?” I think where it becomes much more problematic to predict where this technology will go is when you start thinking about what we call abductive reasoning, which is the process of reasoning that generates those axioms. What happened eons ago that caused someone to realize that when adding two numbers together you get a sum that can be reliably produced over and over again, given two different pieces of information?
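A toy illustration of deduction over domain axioms, in Python. Every axiom, multiplier, and base rate below is a made-up placeholder rather than an actual actuarial model; it only shows the shape of combining quantitative base rates with qualitative signals.

```python
# Start from an actuarial base rate, then adjust it with qualitative
# signals of the kind Rob mentions (weather, mood). The axioms and
# weights are hypothetical, chosen only to illustrate the pattern.

def accident_risk(base_rate: float, weather: str, driver_mood: str) -> float:
    risk = base_rate
    # Axiom: severe weather raises accident likelihood.
    if weather in ("ice", "heavy_rain"):
        risk *= 1.5
    # Axiom: agitated drivers take more chances.
    if driver_mood == "agitated":
        risk *= 1.2
    return min(risk, 1.0)

# Actuarial tables might give a 2% daily base rate for some profile
# (a hypothetical number); the axioms then refine it.
print(accident_risk(0.02, weather="ice", driver_mood="agitated"))
```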

How did that person come up with that axiom? What was the process of generating the axioms of mathematics? And of course, I can give you a simple example, but think about all the more complex cases. How does a doctor come to understand that when they see a patient with a certain combination of symptoms, they ought to be considering this test or that test, this diagnosis or another one? There was a creative process involved there that I think still eludes us when we think about this from an AI standpoint. Now, you also asked, “Should we expect that? Will we do that?” I think then we’ve got to start thinking about economics and what is valuable for people to have AI systems participate in. Because at the end of the day, people are not going to invest in creating these technologies beyond an academic consideration unless there is some way of creating value from the creation of the technology.

And if people aren’t going to pay for it, they probably won’t do it, or they won’t do it for very long. If people do pay for it, then they will, or if it has some sort of other economic value, and I say economic here very loosely, because it doesn’t necessarily have to be for financial gain. It can be for the gains or benefits that we get when we’re able to form better and tighter relationships with each other, or when we’re able to increase our leisure or enhance our entertainment. Whatever. I mean, there needs to be some sort of incentive that motivates the creation and sustaining of these technologies. We’ve seen history littered with lots of failed technology advances that didn’t make sense. Go back to Star Trek. Certainly the flip phone was a good approximation of the Star Trek communicator device. You flipped it up, and I’ve seen stories of the early inventors of the flip phone attempting to emulate that science fiction.

But it’s funny, you don’t see flip phones much anymore. We’ve moved beyond that technology that was inspired by our imagination [and] turned out to not have as much utility as being able to touch a screen and manipulate icons on that screen.

Some of the use cases, though, for an artificial intelligence that can interact with people, and you gave the simplest one, include a call center handling the simplest, most straightforward questions that would drive a human crazy. But go to the other extreme of that and… caregivers for the elderly are often cited, daycare workers, greeters… all sorts of places where you would expect in a human to find some amount of emotional engagement. Do you have any worries or qualms or concerns about giving these systems human voices, human names, and getting them to simulate human emotion? Think about how Weizenbaum ended up feeling about ELIZA… that there’s something inherently dishonest and misleading about doing that. Or is that not even something you think about?

No, I actually do think about this, and I am worried about that. I do believe that it is important that we be honest and transparent, both in the delivery of these technologies as well as in our demands of how we use them. We formed, with a number of other companies, the Partnership on AI, in which we debate some of these ethical issues. And certainly as a group, we’ve come to the conclusion that AIs should not misrepresent themselves. They should not attempt to pretend or masquerade to the end user as being another human being. And that’s not saying they won’t take on some anthropomorphic properties, because there is a benefit, when trying to communicate with human beings, to being able to do so in a way that people recognize as having meaning.

For example, when you and I are talking as human beings, it’s not just our words that we’re exchanging. We’re amplifying and punctuating those words with inflection and intonation and cadence in our vocalization of those words. If we were in person, we might also be feeding off each other’s body language or eye contact or facial expressions, hand movements, all those things that help human beings understand the meaning and the intent behind the words. So there may be some anthropomorphisms that get engaged, but the net result should not be to suggest that any of these digital assistants, these conversational agents, are in fact human, and no one should ever be misled into believing that what they’re interacting with is a human being when in fact what they’re dealing with is a digital agent.

The thing about Weizenbaum, though, is that everyone knew ELIZA was a bot. He wasn’t concerned about full disclosure. He said that even fully knowing it, you project all this onto it, like it has a name… You might treat an intelligent agent one way if it’s personified as a human, and you might treat it a completely different way if it were personified as a giant walking banana. As a human, you give it all of this cred; you assume that it feels and it cares, and you just project all this stuff onto it.

That’s right.

And so you can, 12 ways to Sunday, tell everybody, “I’m a robot.” It could be wearing a shirt that says “I’m a robot,” but if it looks human enough and sounds human enough, people will still… I guess what I’m saying is, I notice that when I interact with even the very rudimentary devices I have now, and one of them is going off and telling me way more than I want to know, I say, “Stop!” in a way I would never talk to a person. I wonder if that has a numbing effect on human empathy and the way we interact with humans.

Yeah, I’ll give you another example of why this is important. There is a group of people in our society today that we often associate with being millennials, but I don’t think it’s as limited as that. There are certain people who, to some extent, almost prefer to deal with a digital endpoint, as opposed to a human, because of the anonymity it represents. So not only do they want the assurance that what they’re dealing with is not a human being, because it gives them some comfort in the anonymity, but given that reassurance, they wouldn’t want to be fooled into thinking otherwise. In other words, we certainly wouldn’t want a human being to jump on the line and continue the conversation without that end user knowing that they’re now switching from a non-human endpoint to a human.

So it goes both ways. It’s something that we’ve got to be conscious of, and something that, again, applies to us as providers of technology, IBM I’m speaking of here, to the people who integrate these technologies into their solutions, who oftentimes are other people, and of course to the end users. We as end users of these technologies, as consumers of these technologies, all of us play a role. We ought to sustain our responsibility in ensuring that this technology is being used appropriately, that it’s transparent about what it is, and that we’re demanding it be used and provided to us in a way that reinforces that. If 12 ways to Sunday is not enough to make that clear, add another - 13, 14, I don’t know how many different ways you have to say it, but keep on reinforcing it: “What you’re dealing with here is a conversational agent,” and at the point in the conversation where you switch over to a human, [announce] “What you’re dealing with now is a human.” Just make it very clear to people, both so they don’t get unnerved, but more importantly, so they don’t get fooled into believing something they shouldn’t.

What was your involvement, if any, in Kasparov and Deep Blue?

I didn’t have any involvement. I had the opportunity to meet Garry the other day.

I saw that on your LinkedIn and that’s why I was asking.

Yeah, I was at a speaker session where Francesca Rossi, Dr. Rossi from our team, who actually leads all our ethical discussions [spoke]. She and Garry Kasparov had a chance to speak together.

You’re undoubtedly quite familiar with this, so I just want to ask you two brief questions about it. One: Kasparov said at the time that at least Deep Blue didn’t enjoy beating him. Do you think that someday a computer will enjoy beating a human?

I don’t. No, I don’t really want to predict that because frankly, I don’t even know what that means. And again, I would fall back on my earlier comment, I mean, why would we? Why would we ever create a system that was even simulating that sense of awareness? What kind of economic value does that really deliver? Frankly, I can’t find one.

Well, I guess there are two answers to that, and we don’t necessarily have to exhaust them all. One is that it could be an emergent property that just happens on its own; that’s what emotions are. And second, somebody’s going to do it for no economic value. Somebody’s just going to do it for the intellectual challenge of it all.

Yeah, that very well may be. That’s right. I mean, there’s lots of academic activity exploring areas in this space, but first of all, I’ll reinforce that they’re a long way away from getting there, and none of what I’ve seen so far at all suggests that it’s going to be achieved (that is, emergent capability). But even if it does get done academically, I don’t think that’s going to have any utility in our world.

And my second Kasparov question is this: you said you prefer augmented intelligence over artificial, and Kasparov concluded that, in the end, the best chess player is a computer that makes a suggested move and a human who evaluates it and chooses to take the move or not; it’s this kind of human in partnership with an AI. Do you think that’s really true, or is that just a fictional détente that we tell ourselves to get through this transition period without having to deal with the fact that a machine is now just better at playing chess than any human who’s ever lived?

No, I believe it is true. It really is the practical way of thinking about how this world evolves, how this world of technology and AI evolves… Nobody built a shovel for the sake of pleasing other shovels. And so I don’t see how or why we would expect that to be any different with this technology. These things will take shape, they will be influenced, and in some sense their technological evolution will be driven by where we get value from them. If we as humans are not getting value from it, why put any sustaining investment in it? And if we do get value from it, then we will invest in creating these things and they will be sustained. But that ultimately will be shaped by a balance between the utility we get and the implication that that utility has on other aspects of life… even now, with some of the discourse occurring around the use of smartphones. We’re recognizing that all of the advantages and benefits and utility that we get from smartphones come at a cost: neck strain, a lot of broken relationships, lack of good conversation at dinner tables, and stuff like that. So I think there will be a bit of a reconciliation of that, and we’ll see it be tempered into some sort of equilibrium.

So presumably though, you were involved in the Jeopardy match?

No, not directly. I actually joined the team after that, after the Jeopardy match had been performed, but in doing so, I had the opportunity to spend a lot of time with that team and learned a lot from them and…

Well, let me ask you two questions about that, if you don’t mind. One of them is about Ken Jennings. I don’t know if you’ve seen his TED Talk on the topic, but he basically says that there was this graph they used to email him every week, and it showed a dot where he was, and it showed Watson, dot dot dot, coming towards him week by week by week. He said, “That’s what the future looks like. It isn’t the Terminator hunting you down; it’s a machine that’s getting a little better, a little better, a little better at that one thing you do best.” Do you think, on that team, there was ever any doubt that eventually the machine would win, or was it an open question at one point?

There certainly were open questions along the way. I know that historically, if nothing else, you can read that in some of the books [that] have been written about [it]. For example, Stephen Baker wrote a book that chronicled some of the development work, and you can see where the team had self-doubt along the way. I’ll tell you, even on the day of the match, there was no certainty that Watson was going to win that game. If you recall the first day, Watson did OK, kind of held its own. The second day, it actually did look like it was going to lose. It got behind significantly. Of course, on the third day it came back, and some of that is just simply due to the varying ability of the human experience. How good were the players, the contestants, on that day? What topics came up, and how well were they in line with what they knew? Even though Watson had ingested 200 million pages of literature, that didn’t mean it knew everything. Even Watson has some limitations. I think even then there was some doubt. It achieved about 83-85% accuracy, which put it right there in what we call the Winner’s Cloud, which gave it a really good fighting chance, and I think if we were to build the technology again using modern techniques, it might’ve even been a little bit better.

I remember what it was really good at was trying to find information in the literature and associate that with the question being asked and that’s a fairly narrow task. While we like to think it was better than Ken and Brad at that task, that doesn’t mean that it was better than Ken and Brad. Ken and Brad as humans are better in many other ways. It was just around that one specific slice of their life.

This is probably not an answerable question, but you know that answer that’s always quoted in the articles about it: “What is a meringue harangue?” Watson’s answer. Do you think that’s a creative answer? Would you project creativity onto Watson, or is that no different from asking what 2+2 is and Watson saying 4? Is there nothing about that answer that’s particularly different from 2+2?

Well, I mean, even a blind monkey gets a peanut every once in a while. Some of this is purely random things that occur, and by the way, I should point out, I think a lot of human ideas tend to be kind of random as well. So there is some utility in being able to come up with novel representations of information. But no, I think in that case it was a little bit closer to executing, exercising the axioms, because it was simply deriving that from what was available in the literature.

So, one more question about this, and then I would love to get an update on Watson and where you see it heading. I don’t know how long ago it was, but Danny Hillis made a computer that never lost a game of Tic-Tac-Toe, and he made it entirely out of yarn and Tinkertoys. Then computers mastered checkers. Then they mastered chess, with Kasparov. Then, you know, Jeopardy. Then Go, with AlphaGo. And then poker; they’re beating top-ranked poker players. Games are an interesting spot because they have confined rules and winners and losers and points and all these other things. Is there another game for computers to beat humans at next? Or is Go the ultimate, or merely the penultimate, difficult human game? Or have we kind of passed the point where, “Yeah, they can beat us at anything”?

I think there are strong indications that when the problem being solved is really tightly defined, where the rules of the problem have no ambiguity, or the outcomes have no ambiguity, where it’s clear whether somebody did or did not win or lose, anything like that, technology has gotten to the point where sooner or later we’ll be able to contest and win at any of those kinds of situations. And that’s useful from, again, an academic standpoint. It helps us test out new technologies. I will also point out that rarely in life are our everyday experiences so well defined. To say it even more strongly: almost nothing that I do on a daily basis, nothing that most of us do, [is] that well-defined. Even when we think the rules are well-defined, we find that the rules themselves have interpretations. There are judgments being applied to them, and even then, those judgments can change over time depending on who you’re dealing with or the circumstances. That’s why we have a court system, a Supreme Court; their responsibility basically is to interpret the meaning behind the rules, the laws, the Constitution. So it just goes to show that those kinds of games, while there’s every indication that computing systems will be able to excel at them, we should recognize them for what they are. They’re just games.

So, give us an update as it were on Watson. How is it knocking it out of the park? What’s being done? What are the new challenges? What are the new milestones? Just tell us that whole Watson story please.

Yeah, well, first of all, we continue to make tremendous advances in the accuracy of our recognition services, whether that’s visual recognition, or speech recognition, or term recognition in language. So that has been a really nice thing to look back on. Where I think there’s going to be more progress, and some advances that will be interesting and meaningful, is in the depth of the kinds of conversations that people can create with these conversational agents, these conversational services. Specifically, where there’s a lot of interest and a lot of focus right now is how, in a conversational agent, we can move beyond simply answering the question asked or performing the task that was expressed, to beginning to get behind the problem. To get to the thing that led to that question.

An example of this is if I come to a conversational agent and I ask, “What is my account balance?” That may be something I need to know, but that’s really not my problem. My problem is getting ready to buy something, or trying to figure out how to save for my kids’ education. There’s something behind that, and I think a conversational agent that is able to recognize that there’s something more, and has the ability to engage in a deeper conversation that gets behind it, will bring more utility to the people who use conversational agents. It’ll enable conversational agents to help people in ways that most chatbots today are constrained from doing. So I think we’re seeing a whole new range of utility starting to open up around these conversational agents. And then, of course, the other big area of advancement is around knowledge: knowledge representation, knowledge exploration and discovery, and how that is opening up what I think of as the long tail of the types of questions people want to ask and get answered.
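Here is a minimal sketch, in Python, of the idea of getting behind the literal question. The mapping from questions to latent goals is invented for illustration; a real system would infer it from dialogue context and user history rather than a lookup table.

```python
# After answering the literal request, the agent guesses at plausible
# underlying goals and probes with a follow-up question. The goal list
# below is a hypothetical placeholder.

LATENT_GOALS = {
    "account balance": [
        "making a large purchase",
        "saving for education",
    ],
}

def respond(question: str, balance: float) -> str:
    answer = f"Your balance is ${balance:,.2f}."
    for trigger, goals in LATENT_GOALS.items():
        if trigger in question.lower():
            probe = " or ".join(goals)
            return f"{answer} Are you planning something, like {probe}? I may be able to help with that."
    return answer

print(respond("What is my account balance?", 4821.07))
```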

And so, put some meat on those bones. We would hear stories about, “Oh, Watson has matched the prognosis, the treatment plan, for cancer patients, and in 30% of cases it had something additional to add...” Give us some real-world examples of how the technology is being applied and is working.

I think the use cases that will benefit the most from the improving degree of knowledge evaluation will be any role where your responsibility is essentially to do research, right? That could be in the form of medical research, financial product research, product evaluation, anywhere where what we’re trying to do is get below the surface of the knowledge as stated in whatever literature we’re depending upon. Where the associations between knowledge that are either explicitly stated or implied now become relevant to our understanding. So, let’s take this into financial advisory services, for example.

Today, most financial advisors rely on research that makes judgments about the usefulness of one financial product or another based predominantly, in many cases exclusively, on quantitative analysis, quantitative data. Looking at past performance, looking at the number of shareholders in that investment, looking at its basic risk profile in different economic conditions. All these things are quantitative and are evaluated today using quantitative analytics. What is often missing is any evaluation of the qualitative space. How will that investment be affected by events that are only just now being discussed in the news, for example? Or in some county or city council meeting? Those are the kinds of knowledge sources which we as humans rely on heavily. We won’t hesitate to go look in the newspaper and use an article that we found there to quickly judge whether somebody is going to be affected. If there’s a discussion at a city council about some new water treatment plant being created, then we know that it’s going to have some kind of economic impact on the businesses and the residents in the area immediately surrounding that water treatment plant. Now, using these cognitive systems, using natural language understanding and being able to evaluate the relationships between different entities, we’re beginning to factor that in to some of these investment decisions.
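A minimal sketch of blending quantitative analytics with a qualitative, language-derived signal, in Python. The scoring formula and weights are hypothetical; in practice the sentiment input would come from a natural language understanding pipeline over news, filings, or meeting minutes.

```python
# Combine a traditional quantitative measure with a qualitative signal
# extracted from unstructured text. All numbers here are made up for
# illustration, not an actual investment model.

def investment_score(past_return: float, volatility: float,
                     news_sentiment: float) -> float:
    """past_return, volatility: from traditional quantitative data.
    news_sentiment: -1..1, derived from text such as news coverage."""
    quantitative = past_return / max(volatility, 1e-6)  # crude return-per-risk ratio
    qualitative = news_sentiment                        # e.g. city-council or press coverage
    return 0.7 * quantitative + 0.3 * qualitative       # hypothetical blend weights

# Same quantitative profile, but negative local news drags the score down.
print(investment_score(0.08, 0.20, news_sentiment=0.5))
print(investment_score(0.08, 0.20, news_sentiment=-0.8))
```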

And so, if there’s a CEO of an enterprise out there listening right now, and he or she has a company that’s got 1,000 people, and they make stuff, they make a product and ship it, and they have customers, they have all these things -- would it be a fair bet, in your mind, that they have business problems like that -- like you just described -- that Watson can inform on, buried somewhere in the data in their organization, or not?

There is a vast quantity of information that organizations today collect or have access to that is going untapped. Most organizations for that kind of information rely on the humans in the organization to go out and read all that material and keep up with it and make sense of it all. And in some organizations, they either employ a lot of people to do that or, in many cases, most cases, they simply don’t have the staff to keep up with it, so as a result, all that information is just flowing by them unused. Unvalued. And that’s a place where Watson and AIs can begin to bring some advantage.

And you feel it’s time for that? If there’s someone listening and they aren’t necessarily a “high-tech company,” are we still now at the point in the life cycle of Watson and AI that they should already be thinking about these things? Or…

Yeah. First of all, the tools have gotten to the point where you don’t need to be a data scientist to make use of them. These things are a journey. You’re not going to suddenly turn on AI and have it immediately affect the outcomes of your business. But if you don’t begin that journey, then you don’t get to the point where you get an inflection in that kind of return, that kind of result. So, getting the journey started, leveraging what’s there, even if that’s in a minor way, for the minor value that it brings, gets you on a path where, over time, as you grow your understanding and grow your use of it, [it] will eventually start to show up as a very measurable difference in how your business performs. So, now’s the time to get started. It’s feasible, it’s viable, it works, the tools are good, and the skills are out there now to tap into. There’s lots of help if you need it. The sooner you get started, the sooner you get down that journey. And by the way, chances are, if you’re not doing it, your competition is. If they get started, not only are they getting the advantage of it, but they’re also going to get ahead of the curve on you, because they’ve got that journey started.

Well, that is a great place to leave it: a call to action with great potential and a little ominous overtone as well. So, I want to thank you so much, Rob, for a fascinating hour. I have my list of questions I wanted to talk to you about, and I didn’t even make it through half of them. It’s fascinating, you’re a fascinating guy, and anytime you want to come back, we’d love to have you on the show.

Thank you, Byron. I appreciate that.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.