Episode 35: A Conversation with Lorien Pratt

In this episode, Byron and Lorien talk about intelligence, AGI, jobs, and the Human Genome Project.


Guest

Lorien Pratt, PhD, is an expert in AI, including inductive transfer, convolutional neural networks for vision recognition, and Fourier transforms for voice recognition. Lorien invented a subfield of machine learning called inductive transfer and helped to pioneer decision intelligence. She leads small and large machine learning and decision intelligence projects worldwide. Lorien is also Chief Scientist and co-founder of Quantellia.

Transcript

Byron Reese: This is Voices in AI, brought to you by Gigaom. I’m Byron Reese. Today our guest is Lorien Pratt, the Chief Scientist and Co-founder over at Quantellia. They’re a software consulting company in the AI field. She’s the author of “The Decision Intelligence Primer.” She holds an AB in Computer Science from Dartmouth, and an MS and PhD in Computer Science from Rutgers. Welcome to the show, Lorien!

Lorien Pratt: Thank you, Byron, delighted to be here, very honored, thank you.

So, Lorien, let’s start with my favorite question, which is, what is artificial intelligence?

Artificial intelligence has had an awful lot of definitions over the years. These days when most people say AI, ninety percent of the time they mean machine learning, and ninety percent of the time that machine learning is a neural network underneath.

You say that most people say that, but is that what you mean by it?

I try to follow how people tend to communicate and try to track this morphing definition. Certainly back in the day we all had the general AI dream and people were thinking about HAL and the robot apocalypse, but I tend to live in the applied world. I work with enterprises and small businesses and usually when they say AI it’s, “How can I make better use of my data and drive some sort of business value?” and they’ve heard of this AI thing and they don’t quite know what it is underneath.

Well, let me ask a different question then, what is intelligence?

What is intelligence, that’s a really nebulous thing isn’t it?

Well it does not have a consensus definition, so, in one sense you cannot possibly answer it incorrectly.

Right, I guess my world, again, is just really practical, what I care about is what drives value for people. Around the world sometimes intelligence is defined very broadly as the thing that humans do, and sometimes people say a bird is much more intelligent than a human at flying and a fish is much more intelligent than a human at swimming. So, to me the best way to talk about intelligence is relative to some task that has some value, and I think it’s kind of dangerous waters when we try to get too far into defining such a nebulous and fluctuating thing.

Let me ask one more definition and then I will move on. In what sense do you interpret the word “artificial”? Do you interpret it as, “artificial intelligence isn’t real intelligence, it’s just faking it”—like artificial turf isn’t real grass—or, “No, it’s really intelligence, but we built it, and that’s why we call it artificial”?

I think I have to give you another frustrating answer to that, Byron. The human brain does a lot of things, it perceives sound, it interprets vision, it thinks through, “Well if I go to this college, what will be the outcome?” Those are all, arguably, aspects of intelligence—we jump on a trampoline, we do an Olympic dive. There are so many behaviors that we can call intelligence, and the artificial systems are starting to be able to do some of those in useful ways. So that perception task, the ability to look at an image and say, “that’s a cat, that’s a dog, that’s a tree etcetera,” yeah, I mean, that’s intelligence for that task, just like a human would be able to do that. Certain aspects of what we like to call intelligence in humans, computers can do, other aspects, absolutely not. So, we’ve got a long path to go, it’s not just a yes or a no, but it’s actually quite a complex space.

What is the state of the art? This has been something we’ve explored since 1955, so where are we in that sixty-two-year journey?

Sure, I think we had a lot of false starts, people kept trying to, sort of, jump start and kick start general intelligence—this idea that we can build HAL from 2001 and that he’d be like a human child or a human assistant. And unfortunately, between the fifth generation effort of the 1980s and stuff that happened earlier, we’ve never really made a lot of progress. It’s been kind of like climbing a tree to get to the moon. Over the years there’s been this second thread, not the AGI, artificial general intelligence, but a much more practical thread where people have been trying to figure out how do we build an algorithm that does certain tasks that we usually call intelligent.

The state of the art is that we’ve gotten really good at, what I call, one-step machine learning tasks—where you look at something and you classify it. So, here’s a piece of text, is it a happy tweet or a sad tweet? Here’s a job description, and information about somebody’s resume, do they match, do they not? Here’s an image, is there a car in this image or not? So these one-step links we’re getting very, very good at, thanks to the deep learning breakthroughs that Yann LeCun and Geoffrey Hinton and Yoshua Bengio and all of those guys have done over the last few years.

So, that’s the state of the art, and there’s really two answers to that, one is, what is the state of the art in terms of things that are bringing value to companies where they’re doing breakthrough things, and the other is the state of the art from a technology point of view, where’s the bleeding edge of the coolest new algorithms, independent of whether they’re actually being useful anywhere. So, we sort of have to ask that question in two different ways.
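
To make that “one-step” idea concrete, here is a minimal sketch in Python, using scikit-learn, of the happy-tweet-or-sad-tweet example; the tweets, labels, and choice of model are invented for illustration and are not from the episode.

```python
# A minimal "one-step" machine learning task: text in, label out.
# Data and labels are made up; any text classifier follows the same shape.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "what a wonderful day, loving this weather",
    "so happy with the new release, great work",
    "this is terrible, everything is broken again",
    "worst commute ever, completely miserable",
]
labels = ["happy", "happy", "sad", "sad"]

# One step: look at the text, classify it.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(tweets, labels)

print(model.predict(["really enjoying this podcast"]))  # e.g. ['happy']
```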

You know AI makes headlines anytime it beats a human at a new game, right? What do you think will be the next milestone that will make the popular media, “AI did _______.”

AI made a better decision about how to address climate change and sea level rise in this city than the humans could have done alone, or AI helped people with precision medicine to figure out the right medicine for them based on their genetics and their history that wasn’t just one size fits all.

But I guess both of those are things that you could say are already being done. I mean, they’re already being done, there’s not a watershed moment, where “Aha! Lee Sedol just got beaten by AlphaGo.” We already do some genetic customization, we can certainly test certain medications against certain genomic markers.

We can, but I think what hasn’t happened is the widespread democratization of AI. Bill Gates said, “we’re going to have a computer on every desk.” I also think that Granny, who now uses a computer, will be building little machine learners within a few years from now. And so when I talk about personalized medicine or I talk about a city addressing climate change, those are all, kind of, under that general umbrella—it’s not going to be just limited to the technologists. It’s a technology that’s going through this democratization cycle, where it becomes available and accessible in a much more widespread way to solve really difficult problems.

I guess that AIs are good at games because they’re a confined set of rules, and there’s an idea of a winner. Is that a useful way to walk around your enterprise and look for things you can apply AI to?

In part, I would say necessary, but not sufficient, right? So, a game, what is that? It’s a situation in which somebody’s taking an action and then based on that some competitor—maybe literally your competitor in a market—is taking some counter action, and then you take an action, and vice versa, right? So, thinking in terms of games, is actually a direction I see coming down the pike in the future, where these single-link AI systems are going to be integrated more and more with game theory. In fact, I’ve been talking to some large telecoms about this recently, where we are trying to, sort of, game out the future, right? Right now in AI, primarily, we’re looking at historical data from the past and trying to induce patterns that might be applicable to the future, but that’s a different view of the future than actually simulating something—I’ll take this action and you’ll take this other action. So, yes, the use of games has been very important in the history of AI, but again it’s not the whole picture. It does, as you say, tend to over-simplify things when we think in terms of games. When I map complex problems, it does kind of look like game moves that my customers take, but it is way more complex than a simple game of chess or checkers, or Go.

Do you find that the people who come to you say, “I have this awesome data, what can AI teach me about it?” Or do they say, “I have this problem, how do I solve it?” I mean, are they looking for a problem or looking to match the data that they have?

Both. By and large, by the time they make it to me, they have a big massive set of data, somebody on the team has heard about this AI thing, and they’ll come with a set of hypotheses—we think this data might be able to solve problem X or Y or Z. And that’s a great question, Byron, because that is how folks like me get introduced into projects, it’s because people have a vague notion as to how to use it, and it’s our job to crisp that up and to do that matching of the technology to the problem, so that they can get the best value out of this new technology.

And do you find that people are realistic in their expectations of where the technology is, or is it overhyped in the sense that you kind of have to reset some of their expectations?

Usually by the time they get to me, because I’m so practical, I don’t get the folks who have these giant general artificial intelligence goals. I get the folks who are like, “I want to build a business and provide a lot of value, and how can I do that?” And from their point of view, often I can exceed their expectations actually because they think, “Ah, I got to spend a year cleansing my data because the AI is only as good as the data”—well it turns out that’s not true and I can tell you why if you want to hear about it—they’ll say, you know, “I need to have ten million rows of data because AI only works on large data sets,” it turns out that’s not necessarily true. So, actually, the technology, by and large, tends to exceed people’s expectations. Oh, and they think, “I’ve been googling AI, and I need to learn all these algorithms, and we can’t have an AI project until I learn everything,” that’s also not true. With this technology, the inside of the box is like a Ferrari engine, right? But the outside of the box is like a steering wheel and two pedals, it’s not hard to use if you don’t get caught up in the details of the algorithms.

And are you referring to the various frameworks that are out there specifically?

Yeah, Theano, Torch, Google stuff like TensorFlow, all of those yes.

And how do you advise people in terms of evaluating those solutions?

It really depends on the problem. If I was to say there’s one piece of advice I almost always give, it’s to recognize that most of those frameworks have been built over the last few years by academics, and so they require a lot of work to get them going. I was getting one going about a year ago, and, you know, I’m a smart computer scientist and it took me six days to try to get it working. And, even then, just to have one deep learning run, it was this giant file and it was really hard to change, and it was hard to find the answers. Whereas, in contrast, I use this H2O package and an R frontend to it, and I can run deep learning in one line of code there. So, I guess, my advice is to be discerning about the package, is it built for the PhD audience, or is it built, kind of, more for a business user audience, because there are a lot of differences. They’re very, very powerful, I mean, don’t get me wrong, TensorFlow and those systems are hugely powerful, but often it’s power that you don’t need, and flexibility that you don’t need, and there’s just a tremendous amount of value you can get out of the low-hanging fruit of simple-to-use frameworks.
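
As a rough illustration of the short workflow Lorien describes with H2O (she uses the R frontend; this sketch assumes the equivalent h2o Python package, and the file name and column names are hypothetical):

```python
# Sketch of the short H2O workflow: import a table, point a deep learning
# estimator at a target column, and let the defaults do the rest.
import h2o
from h2o.estimators import H2ODeepLearningEstimator

h2o.init()                                    # start or attach to a local H2O cluster
data = h2o.import_file("customers.csv")       # hypothetical labeled dataset
data["churned"] = data["churned"].asfactor()  # treat the target as categorical

train, test = data.split_frame(ratios=[0.8])

# The one call that trains a deep neural network with sensible defaults.
model = H2ODeepLearningEstimator()
model.train(y="churned", training_frame=train, validation_frame=test)

print(model.model_performance(test))
```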

What are some guiding principles? There’s that one piece of advice, but what are some others? I have an enterprise, as you say, I’ve heard of this AI thing, I’m looking around, what should I be looking for?

Well, what you’re looking for is some pattern in your data that would predict something valuable. So, I’ll give you an example, I’m working with some educational institutions, they want to know what topics they offer in their courses will help students ultimately be successful in terms of landing a job. In the medical domain, what aspects of someone’s medical history would determine which of these five or six different drug regimens would be the most effective? In stock prices, what data about the securities we might invest in will tell us whether they’re going to go up or down? So, you see that pattern—you’ve always got some set of factors on one side, and then something you’re trying to predict, which if you could predict it well, would be valuable on the other side. That one pattern, if your listeners only listen to one thing, that’s the outside of the box. It’s really simple, it’s not that complicated. You’re just trying to get one set of data that predicts another set of data, and trying to figure out if there would be some value there; if so, then we would want to look into implementing an AI system. So that’s, kind of, thing number one I’d recommend, is to just have a look for that pattern in your business, see if you can find a use case or scenario in which that holds.
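
Here is a minimal sketch of that pattern in Python: a table of factors on one side, something valuable to predict on the other, and a quick held-out check of whether the prediction is good enough to be worth building on. The file, column names, and model choice are invented for illustration.

```python
# The one pattern to look for: a set of factors that predicts something
# valuable. Here the (invented) factors are course topics a student took,
# and the target is whether they landed a job.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("students.csv")   # hypothetical historical records
X = df[["took_statistics", "took_databases", "took_ml", "internships"]]
y = df["landed_job"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

model = RandomForestClassifier().fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

# If held-out accuracy beats simply guessing the most common outcome,
# there may be enough value here to implement a real system.
print(f"held-out accuracy: {accuracy:.2f}")
```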

Switching gears a bit, you say that we had these early dreams of building a general intelligence, do you still think we’re going to build one sometime?

Maybe. I don’t like to get into those conversations because I think they’re really distracting. I think we’ve got so many hard problems, poverty, conflict—

An AGI would sure be helpful with those, wouldn’t it?

No. See that’s the problem, an AGI, it’s not aiming in the right direction, it’s ultimately going to be really distracting. We need to do the work, right? We need to go up the ladder, and the ladder starts with this single-link machine learning that we just talked about, you’ve got a pattern, you predict something. And then the next step is you try linking those up, you say, well if I’m going to have this feature in my new phone, then let me predict how many people in a particular demographic will buy it, and then the next link is, given how many people will buy it, what price can I charge? And the next link is, given what price I can charge, how much money can I make? So it’s a chain of events that starts with some action that you take, and ultimately leads to some outcome.

I’m solidly convinced, from a lot of things I’ve done over the thirty years I’ve been in AI, that we have to go through this phase, where we’re building these multi-linked systems that get from actions to outcomes, and that’ll maybe ultimately get us to what you might call, generalized AI, but we’re not there yet. We’re not even very good at the single-link systems, let alone multi-link and understanding feedback loops and complex dynamics, and unintended consequences and all of the things that start to emerge when you start trying to simulate the future with multi-link systems.
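
A toy sketch of the kind of multi-link chain described above, going from an action (ship the feature or not, and at what price) through a learned demand link to a revenue outcome; the demand model and every number here are invented placeholders.

```python
# Toy multi-link chain: an action (which feature to ship, at what price)
# flows through a learned demand model, then a revenue calculation.
# The historical data and all numbers are invented for illustration.
from sklearn.linear_model import LinearRegression

# Link 1: historical data -> a model predicting units sold.
X_hist = [[1, 199], [1, 249], [0, 199], [0, 249]]   # [has_feature, price]
units_hist = [12000, 9000, 7000, 5000]
demand_model = LinearRegression().fit(X_hist, units_hist)

def predicted_revenue(has_feature: int, price: float) -> float:
    # Link 2: predicted demand for this action.
    units = demand_model.predict([[has_feature, price]])[0]
    # Link 3: demand and price -> revenue, the outcome we actually care about.
    return units * price

# Compare candidate actions by simulating them through the whole chain.
for action in [(1, 199), (1, 249), (0, 199)]:
    print(action, round(predicted_revenue(*action)))
```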

Well, let me ask the question a different way. Do you think that an AGI is an evolutionary result of a path we’re already on? Like, we’re at one percent and then we’ll be at two and then four, and eventually we’ll get there, or is that just a whole different beast, and you don’t just get there gradually, that’s an “Aha!” kind of technology.

Yeah, I don’t know, that’s kind of a philosophical question, because even if I got to a full robot, we’d still have this question as to whether it was really conscious or intelligent. What I really think is important is to turn AI on its head: intelligence augmentation. What’s definitely going to happen is that humans are going to be working alongside intelligent systems. What was once a pencil, then a calculator, and now a computer is next going to be an AI. And just like computers have really super-powered our ability to write a document or have this podcast, right? They’re going to start also supercharging our ability to think through complex situations, and it’s going to be a side-by-side partnership for the foreseeable future, and perhaps indefinitely.

There’s a fair amount of fear in terms of what AI and automation in general will do to jobs. And, just to set up the question, there are often three different narratives. One is that, we’re about to enter this period where we’re going to have some portion of the population that is not able to add economic value and there’ll be, kind of, a permanent Great Depression. Then another view is that it will be far different than that, that every single thing a person can do, we’re going to build technology to do. And then there’s a third view that this is no different than any other transformative technology, people take it and use it to grow their own productivity, and everybody goes up a notch. What do you think, or is there a fourth choice? How do you see AI’s impact?

Well, I think multiple things are going to happen, we’re definitely seeing disruption in certain fields where AI is now capable, but is it a different disruption than the introduction of the cotton gin or the automobile or any other technology disruption? Nah, it’s just got this kind of overlay of the robot apocalypse that makes it a little sexier to talk about. But, to me, it’s the same evolution we’ve always been going through as we build better and better tools to assist us with things. I’m not saying that’s not painful and I’m not saying that we won’t have displacement, but it’s not going to be a qualitatively different sort of shift in employment than we’ve seen before. I mean people have been predicting the end of employment because of automation for decades and decades. Future Shock, right? Alvin Toffler said that in the ’60s, and AI is no different.

I think the other thing to say is we get into this hype-cycle because the vendors want you, as a journalist, to think it’s all really cool, then the journalists write about it and then there are more and more vendors, and we get really hyped about this, and I think it’s important to realize that we really are just in one-link AI right now—in terms of what’s widespread and what’s implemented and what’s useful, and where the hard implementation problems have been solved—so I would, sort of, tone down that side of things. From a jobs point of view, that means we’re not going to suddenly see this giant shift in jobs and automation, in fact I think AI is going to create many jobs. I wouldn’t say as many as we’ll lose, but I think there is a big opportunity for those fields. I hear about coal miners these days being retrained in IT, turns out that a lot of them seem to be really good, I’d love to train those other populations in how to be data scientists and machine learning people, I think there’s a great opportunity there.

Is there a shortage of talent in the field?

Absolutely, but, it’s not too hard to solve. The shortage of talent only comes when you think everybody has to understand these really complex PhD level frameworks. As the technology gets democratized, the ability to address the shortage of talent will become much easier. So we’re seeing one-click machine learning systems coming out, we’re seeing things like the AI labs that are coming out of places like Microsoft and Amazon. The technology is becoming something that lots of people can learn, as opposed to requiring this very esoteric, like, three computer science degrees like I have. And so, I think we’re going to start to see a decrease in that shortage in the near future.

The AI winters that happened in the past were all preceded by hype followed by unmet expectations. Do you think we’re going to have another AI winter?

I think we’ll have an AI fall, but it won’t be a winter and here’s why—we’re seeing substantive use cases for AI being deployed, especially in the enterprise, you know, widespread large businesses, at a level that never happened before. I was just talking to a guy earlier about the last AI hype cycle in the ’80s, where VLSI computer design by AI was this giant thing and “the fifth generation,” and the Japanese and people were putting tens, hundreds of millions of dollars into these companies, and there was never any substance. There was no “there” there, right? Nobody ever had deployed systems. AI and law, same thing, there’s been this AI and law effort for years and years and years, and it really never produced any commercial systems, for like a decade, and now we’re starting to see some commercial solidity there.

So, in terms of that Gartner hype-cycle, we’re entering the mass majority, but we are still seeing some hype, so there’ll be a correction. And we’ll probably get to where we can’t say AI anymore, and we’ll have to come up with some new name that we’re allowed to say, because for years you couldn’t say AI, you had to say data mining, right? And then I had to call myself an analytics consultant, and now it’s kind of cool I can call myself an AI person again. So the language will change, but it’s not going to be the frozen winter we saw before.

I wonder what term we’ll replace it with? I mean I hear people who avoid it are using, “cognitive systems” and all of that, but it sounds just, kind of, like synonym substitution.

It is, and that’s how it always goes. I’m evangelizing multi-link machine learning right now, I’m also testing decision intelligence. It’s kind of fun to be at the vanguard, where, as you’re inventing the new things, you get to name them, right? And you get to try to make everybody use that terminology. It’s in flux right now, there was a time when we didn’t call e-mail “e-mail,” right? It was “computer mail.” So, I don’t know, it hasn’t started to crystallize yet, it’s still at the stage of twenty different new terminologies.

Eventually it will become just “mail,” and the other will be, you know, “snail mail.” It happens a lot, like, corn on the cob used to just be corn, and then canned corn came along so now we say corn on the cob, or cloth diapers… Well, anyway, it happens.

Walk me through some of the misconceptions that you come across in your day-to-day?

Sure. I think that the biggest mistake that I see is people get lost in algorithms or lost in data. So lost in algorithms, let’s say you’re listening to this and you say, “Oh I’d like to be interested in AI,” and you go out and you google AI. The analogy, I think, is, imagine we’re the auto industry, and for the last thirty years, the only people in the auto industry had been inventing new kinds of engines, right? So you’re going to see the Wankel engine, and the four cylinder, you’re going to read about the carburetors, and it’s all been about the technology, right? And guess what, we don’t need five hundred different kinds of engines, right? So, if you go out and google it you’re going to be totally lost in hundreds of frameworks and engines and stuff. So the big misconception is that you somehow have to master engine building in order to drive the car, right? You don’t have to, but yet all the noise out there, I mean it’s not noise, it’s really great research, but from your point of view, someone who actually wants to use it for something valuable, it is kind of noise. So, I think one of the biggest mistakes people get into is they create a much higher barrier, they think they have to learn all this stuff in order to drive a car, which is not the case, it’s actually fairly simple technology to use. So, you need to talk to people like me who are, kind of, practitioners. Or, as you google, have a really discerning eye for the projects that worked and what the business value was, you know? And that applied side of things as opposed to the algorithm design.

Without naming company names or anything, tell me some projects that you worked on and how you looked at it and how you approached it and what was the outcome like, just walk me through a few use cases.

So I’ll rattle through a few of them and you can tell me which one to talk about, which one you think is the coolest—morphological hair comparison for the Colorado Bureau of Investigation, hazardous buried waste detection for the Department of Energy, DNA pattern recognition for the Human Genome Project, stock price prediction, precision medicine prediction… It’s the coolest field, you get to do so much interesting work.

Well let’s start with the hair one.

Sure, so this was actually a few years back, it was during the OJ trials. The question was, you go out to a crime scene and there’s hairs and fibers that you pick up, the CSI guys, right? And then you also have hairs from your suspect. So you’ve got these two hairs, one from the crime scene, one from your suspect, and if they match, that’s going to be some evidence that your guy was at the scene, right? So how do you go about doing that, well, you take a microphotograph of the two of them. The human eye is pretty good at, sort of, looking at the two hairs and seeing if they match, we actually use a microscope that shows us both at the same time. But, AI can take it a step further. So, just like AI is, kind of, the go-to technology for breast cancer prediction and pap smear analysis and all of this micro-photography stuff, this project that I was on used AI to recognize if these two hairs came from the same guy or not. It’s a pretty neat project.

And so that was in the 90’s?

Yeah it was a while back.

And that would have been using techniques we still have today, or using older techniques?

Both, actually, that was a back-propagation neural network, and I’m not allowed to say back propagation, nor am I really allowed to say neural network, but the hidden secret is that all the great AI stuff still uses back-propagation-like neural networks. So, it was the foundations of what we do today. Today we still use neural nets, they’re the main machine learning algorithm, but they’re deeper, they have more and more layers of artificial neurons. We still learn, we still change the weights of the simulated synapses on the networks, but we have a more sophisticated algorithm that does that. So, foundationally, it’s really the same thing, it hasn’t changed that much in so many years, we’re still artificial neural network centric in most of AI today.
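
For readers who have never seen the algorithm she is describing, here is a bare-bones sketch of back-propagation in NumPy: one hidden layer of sigmoid units, with the weights (the simulated synapses) nudged on each pass in the direction that reduces the output error. The toy XOR data, layer sizes, and learning rate are illustrative choices, not anything from the interview.

```python
# Bare-bones back-propagation: a tiny two-layer network learning XOR.
# The "simulated synapses" are the weights W1, W2 (plus biases b1, b2).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
lr = 1.0

for _ in range(10000):
    # Forward pass.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: push the output error back through each layer.
    output_delta = (output - y) * output * (1 - output)
    hidden_delta = (output_delta @ W2.T) * hidden * (1 - hidden)

    # Update the weights and biases (the learning step).
    W2 -= lr * hidden.T @ output_delta
    b2 -= lr * output_delta.sum(axis=0)
    W1 -= lr * X.T @ hidden_delta
    b1 -= lr * hidden_delta.sum(axis=0)

print(output.round(2).ravel())   # typically close to [0, 1, 1, 0]
```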

Now let’s go to hazardous waste.

Sure, so this was for the Department of Energy. Again it was an imaging project, but here, the question was, you’ve got these buried drums of leaking chemical nerve gas, that’ve been dumped into these superfund sites, and it was really carelessly done. I mean, literally, trenches were dug and radioactive stuff was just dumped in them. And after a few years folks realized that wasn’t so smart, and so, then they took those sites and they passed these pretty cool sensors over them, like gravitometers, that detected micro-fluctuations in gravity, and ground-penetrating radar and other techniques that could sense what was underground—this was originally developed for the oil industry, actually, to find buried energy deposits—and you try to characterize where those things are. Where the neural net was good was in combining all those sensors from multiple modalities into a picture that was better than any one of the sensors.

And what technologies did that use?

Neural nets, same thing, back propagation.

At the beginning you made some references to some recent breakthroughs, but would you say that most of our techniques are things we’ve known about since the ’60s, we just didn’t have the computer horsepower to do it? Would that be fair to say or not?

It’s both, it’s the rocket engines plus the rocket fuel, right? I remember as a graduate student, I used to take over all the faculty’s computers at night when there was no security, I’d run my neural net training on forty different machines and then have them all RPC the data back to my machine. So, I had enough horsepower back then, but what we were missing was the modern deep-learning algorithms that allow us to get better-performing systems out of that data, and out of those high-performance computing environments.

And now what about the Human Genome Project? Tell me about that project.

That was looking at DNA patterns, and trying to identify something called a ribosomal-binding site. If you saw that Star Trek episode where everybody turns into a lizard, there are these parts of our DNA, between the parts that express themselves, that we don’t really know what they do. This was a project, nicely funded by a couple of funding agencies, to detect these locations on a DNA strand.

Was that the one where everybody essentially accelerated their evolution and Picard was some kind of nervous chimp, and somebody else was a salamander?

Yes that’s right, remember it was Deanna Troi who turned into a salamander, I think. And she was expressing the introns, the stuff that was between the currently expressed genome. This was a project that tried to find the boundaries between the expressed and the unexpressed parts. Pretty neat science project, right?

Exactly. Tell me about the precision medicine one, was that a recent one?

Yeah, so the first three were kind of older. I’m Chief Scientist, also, at ehealthanalytics.net and they’ve taken on this medical trials project. It turns out that if you do a traditional medical trial, it’s very backward facing and you often have very homogenous data. In contrast, we’ve got a lot of medical devices that are spitting out data, like, I’m wearing my Fitbit right now and it’s got data about me, and, you know, we have more DNA information, and with all of that we can actually do better than traditional medical trials. So, that was a project I did for those guys. More recently we’re predicting failure in medical devices. That’s not as much precision medicine as precision analysis of medical devices, so that we can catch them in the field before they fail, and that’s obviously a really important thing to be able to do.

And so you’ve been at this for, you say, three decades.

Three decades, yeah. It was about 1984, when I built my first neural net.

Would you say that your job has changed over that time, or has it, in a way, not—you still look at the data, look at the approach, figure out what question you’re asking, figure out how to get an answer?

From that point of view, it’s really been the same. I think what has changed is that before, once I built the neural net, the accuracies and the false positives and the false negatives were kind of, eh, they weren’t really exciting results. Now, we see Microsoft, a couple of years ago, using neural network transfer, which was my big algorithm invention, to beat humans at visual pattern recognition. So, the error rates, just with the new deep learning algorithms, have plummeted, as I’m sure your other interviewees have told you, but the process has been really the same.

And I’ll tell you what’s surprising, you’d think that things would have changed a lot, but there just haven’t been a lot of people who drive the cars, right? Up until very recently, this field has really been dominated by people who build the engines. So, we’re just on the cusp. I look at SAP as a great example of this. SAP’s coming out with this big new Leonardo launch of its machine learning platform, and, they’re not trying to build new algorithms, right? SAP is partnering with Google and NVIDIA, and what they recognize is that the next big innovation is in the ability of connecting the algorithms to the applied problems, and just churning out one use case after another, that drives value for their customers. I would’ve liked to have seen us progress further along those lines over the last few years, but I guess just the performance wasn’t there and the interest wasn’t there. That’s what I’m excited about with this current period of excitement in AI, that we’ll finally start to have a bunch of people who drive the cars, right? Who use this technology in valuable ways to get from here to there, to predict stock prices, to match people to the perfect job—that’s another project that I’m doing, for HR, human resources—all these very practical things that have so much value. But yeah, it hasn’t really changed that much, but I hope it does, I hope we get better at software engineering for AI, because that’s really what’s just starting right now.

So, you, maybe, will become more of a car-driver—to use your analogy—in the future. Even somebody as steeped in it as you, it sounds like you would prefer to use higher-level tools that are just that much easier to use.

Yeah, and the reason is, we have plenty of algorithms, we’re totally saturated with new algorithms. The big desperate need that everybody has is, again, to democratize this and to make it useful, and to drive business value. You know, a friend of mine who just finished an AI project said, “on a ten-million-dollar project, we just upped our revenue by eighteen percent from this AI thing.” That’s typical, and that’s huge, right? But yet everybody was doing it for the very first time, and he’s at a fairly large company, so, that’s where the big excitement is. I mean, I know it’s not as sexy as artificial general intelligence, but it’s really important to the human race, and that’s why I keep coming back to it.

You made a passing reference to image recognition and the leap forward we have there, how do you think it is that people do such a good job, I mean is it just all transferred learning after a while, do we just sort of get used to it, or do you think people do it in a different way than we got machines to do it?

In computer vision, there was a paper that came out last year that Yann LeCun was sending around, which said that somebody was looking at the structure of the deep-learning vision networks and had found this really strong analogue to the multiple layers—what is it, the lateral geniculate nucleus? I’m not a human vision person, but there are these structures in the human vision system that are very analogous. So, it’s like this convergent evolution, where computers converged to the same way of recognizing images that, it turns out, the human brain uses.

Were we totally inspired by the human brain? Yes, to some extent. Back in the day when we’d go to the NIPS conference, half the people there were in neurophysiology, and half of us were computer modelers, more applied people, and so there was a tremendous amount of interplay between those two sides. But more recently, folks have just tried to get computers to see things, for self-driving cars and stuff, and we keep heading back to things that sort of look like the human vision system, I think that’s pretty interesting.

You know, I think the early optimism in AI—like the Dartmouth project where they thought they could do a bunch of stuff if they worked really hard on it for one summer—stemmed from a hope that, just like in Physics you had a few laws that explain everything, in electronics, in magnetism, it’s just a few laws. And the hope was that intelligence would just be three or four simple laws, we’ll figure them out and that’s all it’s going to be. I guess we’ve given up on that, or have we, we’re essentially brute forcing our way to everything, right?

Yeah, it’s sort of the emergent property, right? Like Conway’s “Game of Life,” which has these very complex emergent epiphenomena from just a few simple rules. I, actually, haven’t given up on that, I just think we don’t quite have the substrate right yet. And again I keep going back to single-link learning versus multi-link. I think when we start to build multi-link systems that have complex dynamics, that end up doing forward-in-time simulation using piecewise backward machine learning based on historical data, I think we are going to see a bit of an explosion and start to see, kind of, this emergence happen. That’s the optimistic, non-practical side of me. I just think we’ve been focusing so much on certain low-hanging fruit problems, right? We had image recognition—because we had these great successes in medicine, even with the old algorithms, they were just so great at cancer recognition and images—and then Google was so smart with advertising, and then Netflix with the movies. But if you look at those successful use cases, there’s only like a dozen of them that have been super successful, and we’ve been really focused on these use cases that fit our hammer, we’ve been looking at nails, right? Because that’s the technology that we had. But I think multi-link systems will make a big difference going forward, and when we do that I think we might start to see this kind of explosion in what the systems can do, I’m still an optimist there.

There are those who think we really will have an explosion, literally, from it all.

Yeah, like the singularitists, yep.

It’s interesting that there are people, high-profile individuals of unquestionable intelligence, who believe we are on the cusp of building something transformative. Where do you think they err?

Well, I can really only speak to my own experience, I think there’s this hype thing, right? All the car companies want to show that they’re still relevant, so they hype the self-driving cars, and of course we’re not taking security, and other things into account, and we all kind of wanted to get jumping on that bandwagon. But, my experience is just very plebeian, you just got to do the work, you got to roll up your sleeves, you got to condition your data, you got to go around the data science loop and then you need to go forward. I think people are really caught up in this prediction task, like, “What can we predict, what will the AI tell us, what can we learn from the AI?” and I think we’re all caught up in the wrong question, that’s not the question. The question is, what can we do? What actions will we take that lead to which outcomes we care about, right? So, what should we do in this country, that’s struggling in conflict, to avoid the unintended consequences? What should we teach these students so that they have a good career? What actions can we take to mitigate against sea-level rise in our city?

Nobody is thinking in terms of actions that lead to outcomes, they’re thinking of data that leads to predictions. And again I think this comes from the very academic history of AI, where it was all about the idea factory and what can we conclude from this. And yeah, it’s great, that’s part of it, being able to say, here’s this image, here’s what we’re looking at, but to really be valuable for something, it can’t be just recognizing an image, it has to take some action that leads to some outcome. I think that’s what’s been missing and that’s what’s coming next.

Well that sounds like a great place to end our conversation.

Excellent.

I want to thank you so much, you’ve just given us such a good overview of what we can do today, and how to go about doing it, and I thank you for taking the time.

Thank you Byron, I appreciate the time.

Byron explores issues around artificial intelligence and conscious computers in his upcoming book The Fourth Age, to be published in April by Atria, an imprint of Simon & Schuster. Pre-order a copy here.