Episode 108: A Conversation with Kirk Borne

Byron and Kirk Borne discuss the intersection between human nature and artificial intelligence.


Guest

Dr. Kirk Borne has been the Principal Data Scientist, an Executive Advisor, and the first Data Science Fellow at the global technology and consulting firm Booz Allen Hamilton since 2015. He provides thought leadership, mentoring, and consulting in data science, machine learning, and AI across a wide variety of disciplines. Previously, he was Professor of Astrophysics and Computational Science at George Mason University for 12 years, teaching in the graduate and undergraduate data science programs. Prior to that, he spent nearly 20 years supporting data systems activities for NASA space science programs, including a role as NASA's Data Archive Project Scientist for the Hubble Space Telescope. Dr. Borne has degrees in physics (B.S., LSU) and astronomy (Ph.D., Caltech).

In 2016 he was elected Fellow of the International Astrostatistics Association for his lifelong contributions to big data research in astronomy. As a global speaker, he has given hundreds of invited talks worldwide, including keynote presentations at many data science, AI and analytics conferences. He is an active contributor on social media, where he has been named consistently among the top worldwide influencers in big data, data science, and AI since 2013. He was recently identified as the #1 digital influencer worldwide and the #2 most popular AI influencer in North America.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. Today my guest is Kirk Borne. He is Principal Data Scientist and executive advisor at Booz Allen Hamilton. He holds a BS in Physics from Louisiana State and a PhD in Astronomy from Caltech. His background covers all kinds of things relating to data and data science and artificial intelligence so it should be a great conversation. Welcome to the show, Kirk.

Kirk Borne: Thank you Byron. It's great to be here.

So for the folks who aren't familiar with you and your work, can you give us a little bit of history about how you got here, what path you took?

Well, as you mentioned, my background is astrophysics and astronomy. Starting in grad school about 40 years ago, I was always working with data for scientific discovery, either through modeling and simulation or data analysis. So that's sort of what I was doing as my avocation, which is research in astronomy, but my vocation became supporting data systems for NASA research scientists -- the data systems from various satellites NASA had for studying the space/astronomy domain. I worked on those systems and provided access to those data for scientists worldwide. I did that for about 20 years, so I was always working with data. I would say data was my day job, and data was my night job as an astronomer.

And it was about 20 years ago that we started to notice the data volumes of the experiments we were working with were becoming larger than we had ever imagined. I still remember one single dataset in 1997 -- we were trying to work with this dataset that, just by itself, was more than double the size of the other 15,000 experiments we were working with combined. That was unheard of. So at that point I started looking around at what one can do with data of this volume, and I discovered machine learning and data mining. I had never actually looked at data that way before. I had only thought about analysis, not so much discovery from data from a machine learning perspective. That was 20 years ago, and I sort of fell in love with that whole mathematical process and the applications that come from it, which include AI. That's what I've been doing for the last two decades.

And so as a practitioner, what's the sort of work you're doing now?

Well, for me personally it's really about, as my company likes to say, thought leadership. I feel kind of nervous when I say that about myself, but I do a lot of public speaking, and I write a lot of blogs. My title includes ‘executive advisor,’ so I'm advising our business managers internally around AI, machine learning, and data science, but also our clients. At the same time I'm also doing some tutoring and mentoring of our younger data scientists, because after my 20 years at NASA, I spent 12 years at George Mason University as a professor. I was Professor of Astrophysics, but I really was teaching data science, so it's sort of in my blood, I guess, to be an educator, to teach, to train, and that's pretty much what I'm doing. I'm promoting the field, having conversations with people to develop new ideas and concepts; not so much coding anymore like I used to do back when I was younger at NASA. I let the smart young coders today do all that work, but we have lots of interesting conversations about which algorithms to use or develop. So it's really exploratory innovation at the frontier of all this stuff.

So before we launch into AI questions (I have a pile of them for you) I can't imagine there's an Astronomy PhD on the planet that doesn't have their own opinion about the Fermi Paradox. What is yours?

Oh well, I think that's a good question. But I think the right sort of response to that is that the distance between stars is so enormous that, even if every star had planets teeming with life, even nearby stars, it would probably still be next to impossible to imagine any kind of encounter. Why would they travel to some speck of dust when it would take them literally hundreds of years? You might say their life spans would be different. Different planets -- maybe, maybe not.

I mean, all these things are tempered by, you know, the conditions of stellar evolution and all kinds of things. So I can't imagine the chemical or biological processes being all that different. In fact, they should not be different on other planets. And so I just think that the time travel and space travel challenges are so enormous that I can't see it really happening. So I'm not sure whether I can believe there is life teeming on every other planet in the universe, or at least on a planet around each star in the universe. But I know that it's completely possible.

So I'll only ask one follow-up and then we can launch into AI. But you know, we would be eager to go visit other stars. In the ‘70s we sent out the Voyager probes, and those were like “Hey everybody, we're here.” Of course that too is, you know, a bottle in a large intergalactic ocean, and so maybe there are alien Voyager probes floating around all over the place, but they're too sparsely separated to ever come out our way.

There's also the size of the thing to consider. I mean, we're detecting asteroids in our solar system better than ever before, ones that are a few hundred meters in size. But our probes are not much bigger than a suitcase, so we're not paying any attention to those. In fact, they really are just specks of dust, specks of noise in our data... and there are literally hundreds of billions or trillions of such specks of dust in our own solar system. We're more concerned with the big ones that might do damage to us. So we're just ignoring all of those things, even if some of them, who knows, for all we know could be alien probes...

Right, we had that cigar-shaped... So OK, the show is Voices in AI, so let's voice a little bit about AI. Let's start with the basics: how do you, in your mind, define intelligence, and in what sense is artificial intelligence... is it artificial because we made it, or is it artificial because it's faux -- it's not really intelligent, it's just faking it?

Probably all of those. For me, AI is really just the actionable output of what we learn from incoming sensor data. Sensors measure things about the world, algorithms find patterns and trends in those readings, and then there's a response, an action, a decision that comes from that. That's what humans do, that's what all animals do, right? We have sensors -- our eyes, our ears, our mouths, our fingers, our hands, whatever we have -- and we're sensing our universe. And from what we sense, we recognize and detect patterns and anomalies; that's what we're really good at.

Then we infer what would happen if I ignore this, or don't ignore this, or do something with this thing that I'm seeing. And then based upon that sort of inference, we make a decision to do something or not. So that's our algorithm: a human, or any animal, is a biological neural network, and we're emulating that with an artificial [one].

So yes, it is artificial intelligence, but I'd like to say the things we're building are really... the purpose of them is not for the purpose of just building an artificial intelligence but it's to augment our intelligence. So I say the seven A's of AI are: augmented intelligence, assisted, amplified, accelerated, adaptable, actionable intelligence -- that's six probably. But anyway so I have seven A's of AI that basically say what we are really trying to do is augment and amplify and accelerate human intelligence by automating parts of this process -- especially the process of dealing with all the information flood that's coming into our sensors these days.
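To make the sense-detect-decide loop Kirk describes a bit more concrete, here is a minimal, purely illustrative sketch: sensors produce readings, an algorithm looks for an anomaly against what it has seen before, and an action follows. The readings, threshold, and function names are hypothetical stand-ins, not any particular system's API.

```python
import random

def sense():
    """Stand-in for a sensor reading (a hypothetical noisy measurement)."""
    return 20 + random.gauss(0, 5)

def detect_anomaly(reading, history, threshold=3.0):
    """Flag readings far from the running mean -- crude pattern/anomaly detection."""
    if not history:
        return False
    mean = sum(history) / len(history)
    return abs(reading - mean) > threshold

def act(anomalous):
    """Turn the inference into an action (here, just a message)."""
    return "raise alert" if anomalous else "do nothing"

history = []
for _ in range(10):
    r = sense()
    decision = act(detect_anomaly(r, history))
    history.append(r)
    print(f"reading={r:.1f} -> {decision}")
```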

But on a couple of touch points there, you likened machine intelligence to human intelligence -- you mentioned neural nets that are trying to do something vaguely analogous to what the brain does, and all that. But isn't machine intelligence something radically different, not just in form? Like, if you gave an AI all the data of planetary motion of the last 500 years, all the bodies in our solar system, it could figure out when the next eclipse was going to be, because it would just study it and make the assumption that the future is like the past.

And it would do it, but if you then said, “What would happen if the moon vanished? How would it change everything?” it would be like... (silence). So it doesn't really understand anything. Like you said, it just finds patterns and makes predictions based on them, but it doesn't understand why anything happens the way it does. So it could be a perfect planetary model, but it would never even intuit that something called gravity exists, right?

Well, that's true. But if you think about ancient civilizations, they had no deeper intuition than that machine you just described. If the moon vanished, it would invoke all kinds of bizarre interpretations and even bizarre outcomes -- literally, in ancient times when there was an eclipse, you know, people panicked. And if there was a Royal Astronomer, like in the courts of some of the ancient kingdoms, and that astronomer had not predicted the eclipse, they usually lost their head.

You know maybe we should bring that back quite frankly.

Anyway. So I think the intuition that we have as humans today we've gained over millennia of human existence, and through what we learn in schools -- and I like to tell people, you know, hopefully a successful person spends a minimum of 12 years in school, doesn't drop out, and then hopefully beyond that there's either college or continuing education or certainly lifelong learning.

So we get to the point where we're actually employable and useful as an intelligent person in the workplace after literally decades of consuming information and knowledge. Our algorithms, which we're feeding ten thousand or ten million pictures of cats, haven't even begun to scratch the surface of all the thousands and millions of different kinds of knowledge that humans have gathered through second-by-second, minute-by-minute, hour-by-hour interaction with their world over decades.

Right. But I guess I'm trying, poorly, to articulate something a little different. So you talked about AI being a system that has sensors to give it information, and it turns that information into action. So we could imagine a cat food bowl that, whenever it's empty, whenever the weight of the bowl falls, opens a chute and refills itself. That would be it: there's a sensor, there's some logic, and then there's some action that comes out of it. So you would call that a rudimentary AI?

I would call that a robotic process.

That's interesting. But aren't like all computers by definition robotic processors?

Well, I think you could imagine something a little bit more “intelligent.” For example, say the cat food bowl wasn't emptying at the same rate. That would be a pattern that would be noticed, and maybe the reason is that the cat food has spoiled, so you do some kind of sniff test or color test.

Right now we have computer vision. I'm sure it's not too far in the future that we're going to have sensors that can sense smell in the same way a computer vision algorithm can look at patterns and images. We can sense it, because what is a smell? Nothing but the molecular content of the air. So you can actually apply a sensor to sort of infer why you think this bowl is not emptying at the same rate it used to: the color looks different, it smells different, maybe it spoiled. And so you're starting to incorporate what a human would do, which is a lot of contextual information.
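As a rough sketch of the contrast being drawn here -- a bare robotic refill rule versus the same rule with some contextual checks bolted on -- consider the following. The weights, hours, and color/smell flags are hypothetical placeholders, not real sensor APIs.

```python
REFILL_THRESHOLD_GRAMS = 50

def robotic_refill(bowl_weight_grams):
    """The rudimentary version: one sensor, one rule, one action."""
    return "refill" if bowl_weight_grams < REFILL_THRESHOLD_GRAMS else "wait"

def contextual_refill(bowl_weight_grams, hours_since_last_refill,
                      color_ok=True, smell_ok=True):
    """Same rule, plus context: does the food look/smell right, is it going untouched?"""
    if not (color_ok and smell_ok):
        return "discard and replace"          # food may have spoiled
    if bowl_weight_grams >= REFILL_THRESHOLD_GRAMS and hours_since_last_refill > 24:
        return "flag for a human to check"    # bowl isn't emptying at the usual rate
    return robotic_refill(bowl_weight_grams)

print(robotic_refill(30))                         # -> refill
print(contextual_refill(80, 36, smell_ok=False))  # -> discard and replace
```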

So I would say something shouldn't even get the label of AI until it starts including these sort of cognitive functions, where you're collecting contextual data, where you're seeing other things about the thing you're looking at. And that's sort of like an autonomous car. You can imagine an autonomous car being just a robotic process, where it has a map of the road and it knows where the stop signs are and the speed limits, and it doesn't even have cameras on it. It just sets off and drives. Well, obviously that's pretty dangerous, because someone might walk in front of the car, or there might be children playing on the side of the street -- as humans, we know to slow down.

So the cameras, if you will, give the car that contextual information, more cognitive function if you want to call it that. And at that point it's not just an autonomous robotic process; it becomes in some sense cognitive. Now I don't want to say cognitive like a human being, like robots in the movies. I'm not going that far with it. I'm just saying it knows how to take context into its decision making.

Right. And I guess what I'm trying to get at is the way that machine intelligence is different from human intelligence. Because when you were describing all of that, you used a lot of words like “it knows” something, “it can smell” something, “it can see” something, and a computer doesn't know or smell or see anything, nor can it sense. It can measure and do these other things, and that's perfectly all right.

I think we use these words because, up until a few years ago, the idea of anything other than an animal seeing didn't make any sense. So we've got these words we're trying to retrofit into our modern technological world. But I guess what I'm trying to get at is: all of what a machine does is, to my mind, very different from what humans do. And yet it seems like you don't think it is quite so different, because you're drawing these analogs between people and machines. So how would you describe it -- how alien is machine intelligence to you?

I think we're not so far apart in what we're saying here. The comment I made earlier is that I like to think of AI as having nothing artificial about it. It's really augmented, it's accelerated, it's amplified intelligence. What's important there is that it's augmenting and accelerating the human process. Sometimes it might take it over and become a robotic process, but for the most part it's accelerating or amplifying the intelligence and outputs of a human. So it's really sort of an assistant, or assisted intelligence, I guess -- one of my other A's of AI. It's assisting us. It's not necessarily taking on sentience, even though some people like to think of it that way.

I stick with the analogs to human intelligence because that's exactly how these algorithms are developed -- they're bio-inspired algorithms. There are entire books written on bio-inspired algorithms, like neural networks and genetic algorithms, where we understand sort of the way a biological organism collects information about its world, again through our eyes, ears, nose, and other sensors. And it goes through a neural network, which we call the brain, with synapses firing and all of this good stuff, which is nothing more than an algorithm that says “when I see this pattern, I need to do this.” I see it like the first caveman who came out of the cave and sees an animal: they're doing pattern detection, and then they have to do pattern recognition -- will this animal eat me for lunch, or can I eat that animal? If you make that decision incorrectly, you may not survive another day.

So anyway, what humans do, what animals do, we're understanding through neuroscience, and we're trying to implement algorithms that imitate those types of behavior, because they are in a sense the analog of what we call intelligence.

Right. I mean, I hear you. I've often cynically thought most of the biological analogies are just marketing. If you look at it through a different lens, we don't know how brains work. We don't know how thoughts are encoded. We don't know how we do transfer learning -- how we learn something and apply it to other areas. We don't know how the mind emerges, which gives us things like creativity and emotions. We don't even have a good theory on how matter can experience the world.

And so I've often thought that all of these attempts to draw those kinds of parallels are stretches: you take something we really don't understand, human intelligence, and then you take something that's, in a sense, crazy simple. I mean, all AI is, is: you take data about the past, you study it, you make predictions about the future -- and you're right, people do that. But that isn't, I don't know, the essence of intelligence, in the sense that it only works when the future is very much like the past, like what a cat looks like, or whether that animal is going to eat me.

A question like “what am I going to say next?” may not be pattern matching in the same way, and so I've often found those analogies to be strained. But that could just be me. Do you want to say anything else to that before I move on to my next question for you?

Well, again, I think we're in strong agreement here. I wouldn't go so far as to say machines are trying to do those things -- you know, feelings, emotion, creativity -- even though some people claim there's some kind of creativity in an algorithm. I guess what I meant to say is: those are symptoms of a living organism, as opposed to the intelligent function of an organism. The intelligence function, again, is that pattern detection, recognition, and decision making. Feeling and emotion come from the mind and the brain -- I understand that -- but they're a different function.

So I'm with you. I don't think we should be saying AI is going to do all of those things. And we see movies with robots that are, you know, maybe developing feelings or something. It's like, okay, that makes for nice entertainment on the big screen, but there is nothing in any of the work that I am doing with anybody that has anything close to resembling that kind of goal or objective in the research.

So let me throw a different one at you. I'm an optimist about the future. Anybody who listens to the show or reads my books knows I play my cards face up. I'm optimistic about the future, and yet I can think of applications of this technology that are troublesome, and one of them in particular is the notion of privacy. In the past we all had privacy because there are so many of us -- you can't listen to every phone conversation, you can't follow every person.

And I guess with AI you can listen to every phone conversation, model it, turn it into text, and then build sophisticated models that look for patterns -- all the same techniques we use in other stuff. And with cameras everywhere and facial recognition, you can follow everybody, and you can read every email that's written, and you can make sense of it all. And the temptation for a state, or any other aggregator of significant size, to use that is, I suspect, overwhelming. Do you share that concern, and if so, do you have any suggestions for solutions to it?

Well, I'm very similar to you again. It seems like we have a lot of agreement here. I often call myself a ‘skeptical optimist’ or an ‘optimistic skeptic’ -- I'm not sure which way to say it, but I'm primarily an optimist myself. So I don't necessarily subscribe to the apocalyptic visions of AI and all these things, but I do keep a good skeptical attitude toward the fact that other people are trying to develop things that might go out of control or violate our privacy or violate some ethical principles. I mean, those things have already happened.

And so I realize that there are these danger zones, and some groups have already stepped into those danger zones. Certainly we see things like fake news, and the thing that sort of worries me, certainly in the United States, is that some of our strongest adversaries across the world may not have the same ethical principles we have, and so they have no problem doing things that we would have a problem doing. So there are technologies being developed that we ourselves wouldn't develop and deploy, but someone else might be developing and deploying against us.

So we have to be eyes open with this, and part of eyes open means we need to understand those technologies, and part of the way to understand them is by building them, I guess, and then seeing the failure points and where they can go wrong. You can't really defend against something if you don't know how it works, so I do think we need to do some of these things, but we always need to have principles guiding it, and principles are not strong enough unless there's some kind of, what I would say, external review board.

So, for example, one of the points of view I like to take on AI -- I use the expression “it's a grand experiment on humanity.” At universities, any kind of human subjects research had to go through an institutional review board to validate that there was no harm or potential harm to the participants -- that there were benefits that would outweigh the dangers. Yes, research sometimes contains risk, but the benefits have to outweigh the risks, and there has to be an equitable distribution of both benefits and risks.

You can't do, like, experimental drug trials on one population to benefit another -- that kind of thing, which has happened in the distant past, and not so distant even. So think about AI. There needs to be something equivalent to an independent review board, which will look at this and say, based upon some principle, this or that can't be done or should not be done. So I think we can't put our heads in the sand and say “well, all these things will go away if we don't pay attention to them.” But at the same time, we need to evaluate them as we go.

Well, let me throw another challenge with the technology at you. There are a lot of people who have a knee-jerk reaction to the use of artificial intelligence in war and warfare, and specifically the use of AI to make independent kill decisions. And then other people come forth and say, “Look, right now we drop bombs that just blow everything up, or we have drones that just blow everything up -- good people, bad people, all the rest, because they're…” And yet now we have a technology you can add to that which says, OK, the drone is only going to fly down and blow up [a target] if it does facial recognition and finds that this is the person, this single person, it's looking for. Isn't that "better," kind of by any view? Or how do you sort through all that?

I think that more precision and all of that is critical. Compared to ancient warfare, which was basically just blowing everything up and counting the pieces later, I think we are better than that. We're not talking about the ethics of war here, which is a different conversation -- whether you should or should not go after someone. But if that someone is, say, the person who masterminded the 9/11 attacks, I think people were not too disturbed by that, and it was a very targeted, specific outcome. We didn't just blow up the compound and kill everybody there. There was one specific person they were after.

So I think precision in warfare is a parallel to precision medicine, where you target a particular gene, or a sequence in the gene, for treating a particular patient with a particular disease -- very targeted, very specific, very precise. Same thing with precision agriculture. One of the things I heard recently was about a tractor company that has smart sensors on their tractors for distributing fertilizer and weed killer. They can literally detect, in real time with their camera...

What kind of plant it is.

What kind of leaf it is, yeah. What plant, what the leaf is. Is it a weed? Yes? It sprays the weed killer. If it's something that you want to grow, it spreads the fertilizer. So there are great cost savings, not only in terms of the distribution itself but also in terms of the productivity of the field; and at the same time it's good for the environment, because you're not spreading stuff around that you don't need to. So this kind of thing has amazing potential to make our decision making better.

And I still think... I don't have any inside knowledge, but I'm pretty sure that even when there are, for example, drones that are targeting a specific person, there is a commander, a human being, who makes the final ‘Go’ decision. The AI, the computer vision algorithm, is doing the facial recognition to identify the person, but someone else is giving the ‘OK’ command. Once the command is given, then the thing goes and tracks down that person and takes the action. But it's again a human commander who makes the decision.

Let me throw another one at you, which is ‘explainability.’ Some people believe that if a guy makes a decision about you that affects your life, like whether you get a car loan or not, you have the right to know why that decision was made. Then other people say, “Well, that sounds good, but with AI there often is no why. It's a model that just fits data to outcomes, and there's no why; there's just ‘this person looks more like people that didn't pay their loan than people who did pay their loan,’ and it's no more complicated than that.” And so in some areas, like in Europe, you have explainability enshrined in law, and in other places, like this country right now, you don't. Do you think that having a burden of explainability will impede the development of the technology in those places?

I think it could. I'm a big proponent of explainability. I think as a person myself who's worked on algorithms, even developed some maybe not as powerful as the ones that exist in the world today, but I've developed algorithms, and if you can't explain sort of why it's doing what it's doing...

But hold on a second. I mean if you’re Google and you've spent the last 20 years... and there are thousands of factors that go into something and there's 50 billion pages you've ranked, and then somebody calls you and says “Look, I'm in Akron and I have a pool supply company, and when you type ‘pool supply Akron’ I am number four and my competitor's number one, and that really affects my life. Why are they one and I'm four?” And I think Google as honestly as they could say it, would say “We have no idea, like that is an unknowable question: why they're one and you're four, out of 50 billion pages across a thousand different variables.” So how can you have explainability?

Well, I think in that case you can't explain a specific outcome, but you can explain what Google's doing through PageRank -- a matrix inversion of a linear programming equation. The person may not understand those words, but it's mathematically understood. I think the thing that's worrisome is these deep neural networks, these deep learning algorithms, where they have, you know, 20, 30, 40, 50, hundreds of layers, and those layers are doing all these convolutions and combinations of the inputs, and it's some extraordinarily peculiar combination that leads to a decision...

For example, let's just make something up here. Let's say your facial expression determines whether you should get a loan. I'm just making this up: let's say somebody has figured out what someone's facial expression is when they're telling a lie. That's probably not too far-fetched to imagine. And so you go in to apply for a loan, and as you're answering the questions -- on the form, in person, or orally -- the machine detects that you're lying. Well, how does it know that? Was it the fact that you had your left eye up and your lip was curled and you had a warm brow and there was sweat on your palms? It's like, no... it was some other thing that we can't explain. Well, yeah, that gets a little worrisome now, because it's something deeper in that algorithm.

But hold on just a second. Look, I want you to keep going. But a loan officer might say, “I think he's lying.” How do you know? “I don't know. I just feel like he's lying. I'm not going to give him the loan.” And we accept that, don't we -- or do we? I wouldn't think we would hold computers to higher standards than we hold people to.

Yeah. But I think in this current society we've reached a point where I don't think I would trust someone who would say that, because how do you know that loan officer isn't carrying some cultural baggage? You don't know. But the algorithm is looking at facial features and sweaty palms and biometrics like that, objectively, you know, without any kind of cultural bias, hopefully. Of course, this depends on how it's trained. I understand that the training set can affect that outcome, but if you run the same algorithm on a different person who's lying, it should come up with the same answer. And if you run it against a person who is not lying, it should come up with the answer that they're not lying. Whereas the human is probably more fallible, if not intentionally then unintentionally. “I just think this person is lying because they're nervous.” Well, maybe they're nervous because they just got a phone call from a family member who has a very serious illness, and it has nothing to do with the loan. So things that we pick up on as humans may not even really be relevant.

But I mean that cuts both ways. A computer would see that their hands are sweating because...

That's true. I agree. It's complicated, but explainability, I think, from a scientific perspective is essential; in terms of application, it doesn't always have to be essential. For example, if I go to an e-commerce store and it's recommending a product to me, I can just ignore it. I don't like what I'm being recommended, but it's not such a big deal that there's some kind of complicated algorithm that's figured out that I live in a certain zip code and I like to go to sporting games and I'm a scientist and I'm from Louisiana and I have a brother in Delaware. All this information somehow made it decide to offer me this product, for whatever reason I can't figure out, but I decided I'm not interested. I'm not too worried about how that algorithm figured out to offer me this. I choose to buy or not to buy.

But you just say from a scientific standpoint, explainability is essential. Explain that sentence.

Well, I think that, again, the algorithm itself has to be... in a sense, you have to be able to trace input to output, let's just put it that way. If I put something in and something comes out the back end, I have a function y = f(x), where this function is some complicated thing, and if I can't explain what that is, then that's not really science. Some people would call that black magic. I would just call it a fishing expedition: I keep fishing for an algorithm, a function, that produces an output that I like, that classifies the cats and dogs correctly. So I'm fine with it without even knowing that maybe it's going to do something else awkward.

That seems though, to be capping machine intelligence at human intelligence.

In some sense it is, in the sense that most of what we call machine learning nowadays -- at least supervised learning, and much of AI is an example of this -- is where we train on instances that humans have labeled. Those labeled data are only as accurate as the humans who labeled them.

My company ran an open competition called the Data Science Bowl, which tens of thousands of people worldwide participated in. We were detecting heart disease from scans of people's hearts, and it basically had an error rate of about 1% in terms of detecting actual heart disease versus no heart disease. And it could not possibly do better than that, because that was the error rate of the expert cardiologists who provided the data set. Now, we know there was an error, because once people actually went back to those patients and did further lab work, they discovered that some of those that were labeled, or diagnosed, one way actually had the opposite diagnosis. But the expert cardiologists, who had looked at the scans in the same way the algorithms looked at the scans, had a 1% error rate, and we got algorithms that matched that. It made no sense to get an algorithm better than that, because the training set had built-in variance.
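A rough numerical illustration of the ceiling Kirk is describing: if the "ground truth" labels themselves carry roughly a 1% error rate, then even a perfect model, scored against those labels, measures out at about 99% accuracy. The simulation below uses made-up data, not the Data Science Bowl set.

```python
import random

random.seed(0)
n = 100_000
label_error_rate = 0.01  # assumed expert labeling error, per the 1% figure above

true_condition = [random.random() < 0.5 for _ in range(n)]
# Expert labels: the true condition, flipped about 1% of the time.
expert_labels = [t if random.random() > label_error_rate else not t for t in true_condition]

# Even a "perfect" model that always outputs the true condition
# is scored against the noisy expert labels.
perfect_predictions = true_condition
measured_accuracy = sum(p == l for p, l in zip(perfect_predictions, expert_labels)) / n
print(f"measured accuracy of a perfect model: {measured_accuracy:.3f}")  # ~0.990
```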

When you start fitting the variance -- overfitting -- that's one of the prime concerns of data science: fitting the natural variance of the data. So in some sense, yeah, our algorithms shouldn't exceed humans yet. But when it starts doing things like reinforcement learning... if you think about the AlphaGo that beat the world's best human Go player, it taught itself how to play the best possible Go game by playing itself millions and millions of times. Reinforcement learning, therefore, can exceed human intelligence in the context of reinforcement learning, which is: here are the rules to play by, and here's the goal or the outcome.

But the thing is, that goes back to explainability, because when AlphaGo beat Lee Sedol, it made (I'm going to get the number wrong) Move 37, which just blew everybody away; even by their own models, the AlphaGo people said that was a move no human would have made. If you asked, “Why did you make that move, AlphaGo?” there isn't really a why -- I mean, there is, but it isn't necessarily explainable to a human. So your best-performing AIs aren't explainable.

I'm not disagreeing with that...

But you said that they're voodoo or magic or something and they're not science, if you can't explain why it happened that way.

I'm going to rephrase. What I intended to say is that scientifically we need to do that exploration and try to find the explanation for our model. I'm not saying that if it really is a complicated model it can't be explained... For example, financial markets. I always like to use this example with my students. I teach them forecasting techniques, and they ask, “Hey, can I use this to forecast the stock market?” I say, literally, “You can't,” because there are millions and millions of factors worldwide -- economic, political, social, financial, you name it -- that lead to the outcome, which is the current price of the stock. It's just not humanly possible to know what combination of all those millions of worldwide events and factors and statements that people make in the news, etc., led to that outcome.

But that's equivalent to the Akron [example]: why is he number one and I'm number four?

Yeah, so at some point you say, well, I understand it's a complex system with complex interactions, and that's okay scientifically, because there's an entire science of complexity and chaos theory, and that's understood as a scientific process. It doesn't mean I can say what the outcome will be -- it's not explainable in that sense -- but I can explain that this is a complex process with millions of interactions that lead to these outcomes.

It's almost like, in astronomy, there are chaotic orbits among the planets that never could be understood until someone actually applied chaos theory to solar system dynamics. Now we realize that we cannot predict the actual positions of the planets in the solar system out beyond maybe a billion years or so, because it's just not possible. And so, yeah, we can't explain why the planet Jupiter, Mars, Earth, whatever, is in a particular place if you come back four billion years from now and look at where the planets all are. We can't explain that, but we can explain it in the lowercase sense, if not the uppercase sense: it's a chaotic, complex system where these outcomes happen, even though they are not predictable and explainable in the uppercase-explainable sense. So again, it's a scientific process of trying to understand why it did this, and if the why is because it's a complex, chaotic system, then that still is a scientific explanation.
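As a toy illustration of the sensitive dependence Kirk is describing -- using the logistic map, a standard textbook chaotic system, rather than actual solar-system dynamics -- two trajectories that start almost identically diverge completely within a few dozen steps, so the behavior is explainable as chaos even though the long-term state isn't predictable.

```python
def logistic_map(x, r=4.0):
    """One step of the logistic map, a standard textbook chaotic system."""
    return r * x * (1 - x)

x_a, x_b = 0.200000, 0.200001   # nearly identical starting conditions
for step in range(1, 51):
    x_a, x_b = logistic_map(x_a), logistic_map(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: a={x_a:.6f}  b={x_b:.6f}  |diff|={abs(x_a - x_b):.6f}")
```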

So by that, somebody can say, “Explain why the AI made that choice,” and you say, well, there are these machines that have ones and zeros...

Well, go back to the Akron case. I still think it's the matrix inversion of the linear programming equation called PageRank, and Google keeps the details of PageRank proprietary, secret.

But by their admission, it's words on the page and who's linking to you and what terms were in the story and what's your social ranking, and how long you've had your domain registered and a thousand other variables.

It's like financial forecasting, with all these variables that go into it, but I think theirs is actually mathematically explainable, I believe.
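For readers who want the textbook version of what both speakers are gesturing at: PageRank is usually presented as computing a stationary rank vector over the link graph by power iteration. The sketch below uses a made-up four-page web and the standard damping factor; it is not Google's actual, proprietary ranking.

```python
# Minimal PageRank by power iteration on a hypothetical four-page link graph.
links = {           # page -> pages it links to (made-up example)
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
pages = list(links)
damping = 0.85
rank = {p: 1 / len(pages) for p in pages}

for _ in range(50):  # iterate until the ranks settle
    new_rank = {}
    for p in pages:
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new_rank[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new_rank

for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(p, round(r, 3))
```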

Well Kirk, I've taken more of your time than I originally was going to. I thank you for your patience, and we could have gone on for longer, but I'll wrap up here and just ask you a final question which is: how can people keep up with you and what you're doing and whether you write or anything like that?

Well, I'm very active on social media, primarily Twitter @KirkDBorne, but also on LinkedIn. I write a lot of blogs, and I always post links to them on my Twitter page and try to do the same on my LinkedIn page, so you can see where I'm going, what I'm talking about, what I'm learning. I sort of live my data science life out in the open, and I see myself as educating the world on this topic, which I love doing. I love finding new and interesting things and sharing them.

Well thank you for sharing with us, and you have a good day.

Thank you so much Byron. It was fun.