Episode 63: A Conversation with Hillery Hunter

In this episode Byron and Hillery discuss AI, deep learning, power efficiency, and understanding the complexity of what AI does with the data it is fed.


Guest

Hillery Hunter is an IBM Fellow and holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm, I'm Byron Reese. Today, our guest is Hillery Hunter. She is an IBM Fellow, and she holds an MS and a PhD in electrical engineering from the University of Illinois Urbana-Champaign. Welcome to the show, Hillery.

Thank you, it's such a pleasure to be here today. Looking forward to this discussion, Byron.

So, I always like to start off with my Rorschach test question, which is: what is artificial intelligence, and why is it artificial?

You know that's a great question. My background is in hardware and in systems and in the actual compute substrate for AI. So one of the things I like to do is sort of demystify what AI is. There are certainly a lot of definitions out there, but I like to take people to the math that's actually happening in the background. So when we talk about AI today, especially in the popular press, and people talk about the things that AI is doing, be it understanding medical scans, or labelling people's pictures on a social media platform, or understanding speech or translating language, all those things that are considered core functions of AI today are actually deep learning, which means using many-layered neural networks to solve a problem.

There are also other parts of AI, though, that are much less discussed in the popular press, which include knowledge and reasoning and creativity and all these other aspects. And you know the reality is, where we are today with AI, we're seeing a lot of productivity from the deep learning space, and ultimately those are big math equations that are solved with lots of matrix math; we're basically creating a big equation and fitting its parameters to the set of data it was fed.
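
[Editor's note: to make the "big equation" Hillery describes concrete, here is a minimal illustrative sketch in Python by the editor, not IBM code. A tiny two-layer neural network is just matrix multiplications plus simple nonlinearities, and "training" means adjusting the parameters W1, b1, W2, b2 to fit the data the model is fed.]

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(4)                           # one input example with 4 features
    W1, b1 = rng.standard_normal((8, 4)), np.zeros(8)    # layer-1 parameters
    W2, b2 = rng.standard_normal((3, 8)), np.zeros(3)    # layer-2 parameters

    h = np.maximum(0, W1 @ x + b1)                       # hidden layer: matrix math plus a ReLU
    logits = W2 @ h + b2                                 # output layer: more matrix math
    probs = np.exp(logits) / np.exp(logits).sum()        # scores for 3 possible classes
    print(probs)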

So, would you say though that it is actually intelligent, or that it is emulating intelligence, or would you say there's no difference between those two things?

Yeah, so I'm really quite pragmatic, as you just heard from me saying, “Okay, let's go talk about what the math is that's happening,” and right now where we're at with AI is relatively narrow capabilities. AI is good at doing things like classification, or answering yes-and-no kinds of questions on data that it was fed, and so in some sense it's mimicking intelligence, in that it is taking in the sort of sensory data a human takes in. What I mean by that is it can take in visual data or auditory data, and people are even working on other sensory data and things like that. So basically a computer can now take in things that we would consider sort of human-processed data, so visual things and auditory things, and make determinations as to what it thinks it is, but it's certainly far from something that's actually thinking and reasoning and showing intelligence.

Well, staying squarely in the practical realm, that approach, which is basically, let's look at the past and make guesses about the future, what is the limit of what that can do? For instance, is that approach going to master natural language? Can you just feed a machine enough printed material and have it be able to converse? What are some things that model may not actually be able to do?

Yeah, you know it's interesting, because there's a lot of debate. What are we doing today that's different from analytics? We had the big data era, and we talked about doing analytics on the data. What's new, what's different, and why are we calling it AI now? To come at your question from that direction, one of the things that AI models do, be it anything from a deep learning model to something that's more in the knowledge-reasoning area, is that they're much better interpolators; they're much better able to predict on things that they've never seen before.

Classical rigid models that people programmed into computers could answer, “Oh, I've seen that thing before.” With deep learning and with more modern AI techniques, we are pushing forward into computers and models being able to guess on things that they haven't exactly seen before. And so in that sense there's a good amount of interpolation in flux. Whether, and how, AI pushes into forecasting on things well outside the bounds of what it's seen before, and moving AI models to be effective on types of data that are very different from what they've seen before, is the type of advancement that people are really pushing for at this point.

Why do you think it is that we can train people, even children, even little children, on just a few examples, even one example of something, and they can generalize well? What do you think we're doing that we haven't mastered in machines, so to speak? Is that just transfer learning, or are we pattern matching in ways that machines don't, or do you have any guesses on that?

Well you know, certainly learning from small amounts of data is one of the great challenges in that space right now, and yes, you hit one of the terms on the head: transfer learning. Being able to take knowledge from one domain and apply that ability and that reasoning process onto knowledge in a new domain is one of the key things that people are working to tackle right now.

I think frankly, while we're still in the early days of AI, the deep learning field really took off in 2015 with some of the publications that happened around that time, [with] ImageNet. And dropping down into much more sophisticated, smaller types of models and learning from less data and things like that has really increased in activity in the last one or two years. But despite AI having been in the popular press and movies for a couple of decades, the recent high-accuracy capabilities have only come about in the last few years, so we're really kind of at the cusp of what will be developed out.
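
[Editor's note: a minimal sketch of the transfer learning idea Hillery describes above, assuming PyTorch and torchvision (0.13 or later) are available; the class count is hypothetical and this is an illustration, not a specific IBM workflow. Knowledge learned on one domain (ImageNet) is reused, and only a small new output layer is trained on the new task's smaller dataset.]

    import torch.nn as nn
    from torchvision import models

    # Reuse a network pretrained on the source domain (ImageNet).
    backbone = models.resnet18(weights="IMAGENET1K_V1")

    # Freeze what was already learned so it transfers unchanged.
    for param in backbone.parameters():
        param.requires_grad = False

    # Replace the final layer with a new head for the (hypothetical) target task.
    num_new_classes = 5
    backbone.fc = nn.Linear(backbone.fc.in_features, num_new_classes)

    # Training then proceeds as usual, but only backbone.fc is updated,
    # which is why far less labeled data is needed in the new domain.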

Yeah, so on this transfer learning thing, if I said, “Hillery, I want you to imagine a trout swimming in a river, and I want you to imagine the same trout in formaldehyde in a laboratory. In what ways are they the same and in what ways are they different? Do they weigh the same?” you'd say “yes.” “Are they the same temperature?” “No.” “Are they the same color?” “Well, maybe slightly different,” and so forth.

And we do that so effortlessly. It's not like we say 'oh I know what a cat looks like, therefore I can figure out what a dog looks like.' But in a sense, is it possible that all of our intelligence is in one form or another just transferred from something else? I don't know. I just am baffled by it and intrigued by it that even a little child can do that whole thing with the trout, even with such a little bit of experience. Do you think we're going to crack how humans do that and instantiate that in a machine, or are machines fundamentally just going about it completely differently?

Well, I'm going to take it in a slightly different vector there. One of the interesting things is, if you look at the way humans do it versus the way a computer does it, in addition to how much data it took, how much processing that data needed, and how many different forms that trout picture needed to appear in for the model to recognize it in a bunch of different circumstances—in addition to all of that, you're alluding to the question of the power efficiency, for example, of that computation.

What happens in our brains is incredibly low power compared to, you know, even if we had a computer model today that could recognize the trout in formaldehyde and the trout in the stream effectively, how much power consumption would it take? That's another area where we've shown some amazing results: using, for example, spiking neural networks with the TrueNorth and SyNAPSE programs, we are trying to get much closer to the human brain's level of not only accuracy but also power consumption. But that's another area where I think there's a lot of focus right now, in getting to mimicking human brain behavior, not just in accuracy or in situations, but also in what it costs to do that as well.

Right, so the most powerful computers in the world use 20 million watts and your brain uses 20 watts, and the project you're referring to built something that was incredibly low, like 10 watts or something... can you just talk a little bit about that?

Yeah, so the SyNAPSE program was aiming to build a brain-inspired computing substrate that had very, very low power consumption; specifically, it was capable of 46 billion synaptic operations per second per watt, and the total chip itself was 70 milliwatts. So it was a tiny amount of power for what it was able to do with spiking neural networks, sort of brain-inspired processing. These kinds of things, where instead of saying, “Today we take a big server with a bunch of GPUs in order to do calculations,” we also compute at the scaled-down end, are I think an equally important part of the overall equation, because otherwise we're going to be attempting to do AI, and influencing businesses and functions within the world, but doing it at too high of a power consumption.
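
[Editor's note: putting the two figures quoted here side by side, as a rough back-of-the-envelope calculation rather than a published specification:

    46 × 10^9 synaptic ops/s per watt × 0.070 watts ≈ 3.2 × 10^9 synaptic ops/s

That is, billions of synaptic operations per second on a small fraction of the roughly 20 watts Byron cites above for the human brain.]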

Yeah, the best guesses that I've read posit that 10% of all power consumption in the world right now is computers and I guess we would only expect that to go up if we didn’t make gains in efficiency right?

Yeah absolutely, as we try to tackle the world's large data. I mean, I like to say that what we're doing with AI really is trying to fulfill the promises of the big data era. As an industry, within IT and computer science, we said, 'Hey, there's big data, there's all this data out there in the world, and it's growing at an exponential rate. It is coming in from sensors on the edge, it is being increasingly produced from handheld devices, mobile is driving data generation, and companies are getting better at collecting data, structured and unstructured.'

So amid this massive growth in data volume, we all know that story… but the big data promise really was to do analytics on that data, to get insight from that data, to monetize that data in a way that would result in better business processes and higher-accuracy predictions of everything from credit card fraud to risk in business scenarios and other things like that, as well as things like patient outcomes in medicine, where you can analyze patient history, right?

So we have these promises, and what we are doing largely with AI is getting to a place where we can mine structured and unstructured data at the same time, where we can get higher-accuracy predictions out of unstructured data. Ultimately we can realize those promises of the big data era, not just say [that] there's a lot of data, but actually do something about it. That's a tremendous opportunity for the industry, and it's a tremendous opportunity for individuals in terms of, like I said, risk and fraud and patient outcomes, things like that. But it does, right now, drive a new type of IT infrastructure. This GPU-driven infrastructure is a place that a lot of folks are investing in to drive these insights. And that's a shift in IT, in terms of investment, in terms of power consumption and other things like that.

You know, in a two-year period, let's say we project forward two years, or back two years, the power of computers, thanks to Moore's Law, or the speed and price performance, doubles, and to your point, the amount of data we can collect and use in a good way, that we can manage, goes up hugely, say 10-fold or 15-fold or something. The third part of the puzzle, which is our techniques—the alchemy that we use on those two things—is that improving at a similar rate? Or is it pretty static, and the gains we're getting are, well, there's more data and faster computers?

Yeah, it's interesting. IDC research has projected digital data growth at a CAGR of something like 42%; that was their prediction a couple of years ago. I don't know what the latest data is, but certainly staggering amounts of data growth. I think that there are a couple of areas that are being cracked into in terms of commensurate gains in accuracy and insight capabilities. One is that a lot of that growth was around unstructured data: voice and speech and images and things like that. And we have seen within the field of deep learning and AI tremendous improvements in accuracy, to the point that gains on ImageNet are now within single digits of a percentage point for new types of models, just because there was such a huge gain that was accomplished with neural networks versus classical techniques.

I think in speech processing, we're at a place where those fields are arguing about beating human accuracy and sort of what that threshold is. I think everyone is very happy with the tremendous improvements that have been had on unstructured data processing, and we certainly see, even in structured data processing, that neural networks can beat out classic techniques like linear regression and logistic regression. And so I think that there have been pretty significant gains, and if you ask, does the gain in accuracy have to match the growth rate of the data, it usually turns out no, right? So from an enterprise perspective, especially as long as you can have a model that is in some cases even a tenth of a percentage point better than what was there before, that can translate to much better decision-making for a business, be it in process or manufacturing quality or other things like that.
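
[Editor's note: a toy illustration of the structured-data point above, using scikit-learn on a synthetic nonlinear dataset; it is the editor's sketch, not the kind of enterprise model Hillery has in mind, but it shows a small neural network edging out logistic regression when the decision boundary is not linear.]

    from sklearn.datasets import make_moons
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic tabular data with a nonlinear boundary between the two classes.
    X, y = make_moons(n_samples=2000, noise=0.25, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    logreg = LogisticRegression().fit(X_tr, y_tr)                # classical technique
    mlp = MLPClassifier(hidden_layer_sizes=(32, 32),             # small neural network
                        max_iter=2000, random_state=0).fit(X_tr, y_tr)

    print("logistic regression accuracy:", logreg.score(X_te, y_te))
    print("small neural network accuracy:", mlp.score(X_te, y_te))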

When you think about a hurricane [that] is coming toward land, you have to figure out what direction it's going to go, and we all know that at some level we don't know. You can't predict that far out because of the complexity of the system; in theory it's a deterministic system and it should be knowable, but it isn't, so you just end up making a guess. When you think about AI, and the right to know why the AI made the decision that it made, do you think we're in that sort of a case, where at some point these systems are going to become so complex that the notion of understanding ‘why’ is just impossible?

You know it's so interesting, because this is one of the things that we're investing in, and also publishing a lot of public research on right now, which is this topic of explainability. You are absolutely right that there's been, frankly, a lot of concern about the black-box nature of some of these types of models, especially in the deep learning space, that provide higher accuracy and better outcomes, but the question is, can they also be made explainable?

And there are fascinating things happening in this space, Byron. There's everything from, can we actually build a sort of mimic model that explains the predictions of another model and helps us understand what's going on, to visualization techniques that help you get insight into particular pieces of data and how the model is making its decision, to addressing bias: how do we ensure that models don't produce biased outcomes?

And you know, we've recently been able to see, in addressing bias in models, that by attacking that directly we're able, in a particular modeling area, to reduce the bias by nearly an order of magnitude. So I think we're seeing that you can apply mathematical techniques to the data, to the models, to the understanding of the models, and try to make them more fundamentally sound. There are certainly a lot of people that are just throwing layers and functions and all these other things into their AI without thinking about that stuff. But I think we're seeing that, if you think about it and plan for it, then explainability and bias, those aspects of models, can be addressed.
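
[Editor's note: bias can be measured in many ways; the sketch below shows one simple, generic metric (a demographic-parity gap computed with NumPy) and is not the specific technique behind the order-of-magnitude reduction Hillery mentions. The predictions and group labels are hypothetical.]

    import numpy as np

    def demographic_parity_gap(predictions, group):
        """Difference in positive-prediction rates between two groups (labeled 0 and 1)."""
        predictions, group = np.asarray(predictions), np.asarray(group)
        rate_a = predictions[group == 0].mean()
        rate_b = predictions[group == 1].mean()
        return abs(rate_a - rate_b)

    # Hypothetical model outputs (1 = approve) and a protected attribute per person:
    preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    group = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
    print(demographic_parity_gap(preds, group))  # 0.0 would mean equal approval rates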

How do you think that's all going to shake out? Because there are some things that need to have a high degree of transparency, like medical diagnosis, but then there are things that don't really need any, like a restaurant recommendation, right? How do you think that's going to shake out, and is that going to be through regulation, or just the evolution of custom, or what? Who's going to decide 'well, this doesn't really need to be explainable,' or is that a market force, or?

You know, it's exactly the right question from the perspective of AI maturing, AI becoming more robust and AI becoming more trustable. A number of the cases that you mentioned, like a restaurant reservation, etc., are really kind of consumer facing; we're talking about understanding speech, for example, where there's no financial or ethical or moral loss to the AI getting a prediction wrong, maybe some frustration, but… except for consumer satisfaction issues.

But from the perspective of AI becoming something that drives productivity, that drives insight into business data and things like that, I think there are a number of different forces. There are some things that have started around conferences, potential standards around AI in terms of ethics and bias and things like that. I think we'll certainly see companies investing in those kinds of things, so that the consumer knows what standards this AI is intended to uphold.

Also, things around explainability will in some cases come into place because of regulation; like you said, in medicine it's certainly important to follow regulatory as well as privacy policies. But also in other spaces, in finance and insurance, there are other forms of regulation, and it can be equally significant to be able to answer and be accountable to those kinds of regulations. So I think across a broad variety of enterprise spaces, there are things that can be tackled in order to move from a situation where it's about consumer frustration to a situation where it's about business outcomes and business decisions.

Do you think it's possible though that—and the entire issue is kind of interesting for people in AI to write about and think about—in the end people don't care? I would give the example of your credit score: that's a number that affects your life that has total opacity about it. I have no idea how they came up with that number, and they're not even required to tell me how they came up with that number, and that number kind of governs so many things in your life, and yet we just kind of decide, 'that's just the number, that's just the score.' Do you think in the end that just might be how this shakes out, that we'll just have opacity?

I think that we're seeing that it's possible to provide better than that, and I suspect that better than that will likely be required for a lot of issues around, like we said, bias and ethics and things like that. It's hard to imagine that those things shouldn't be tackled in a very open way, and so I think certain areas will drive continued investment into removing that opacity, as you were calling it.

So can you talk a little bit about yourself and your professional journey, and what an IBM Fellow is and that whole thing... tell us about you.

Absolutely. So an IBM Fellow is IBM's highest technical honor, but it is also a role; it's a job. And there are currently around 100, give or take, with IBM. There's a handful appointed every year, so it's a thrill, it's an honor, and it's really a pleasure to be in this kind of role.

My journey started after my PhD, as you mentioned, at the University of Illinois. I joined IBM Research, and over the time that I've been with IBM, I've had a variety of roles, moving increasingly up the stack, as we would say. We think of a system as starting down at the bits and the bytes in the hardware, and moving up into the software and the applications and the cloud services and things like that. So over my career I've essentially worked up from the level of the bits and the bytes that I first worked on, into other things around processor design and system design, and ultimately invested a lot of time in memory technology.

For the last several years I've worked in AI, and the team that I get to work with is focused on creating systems solutions that are sort of the compute engine behind creating AI capabilities. A lot of my career was spent in memory, and that's fundamentally a big piece of the data side of the whole thing. So AI, being the convergence of data and access to data and cleansing of data and movement of data in a system with the compute power and compute throughput of specialized compute engines, has kind of come together for me.

Now I get to work on how to bring those things together: how do we design systems that have high throughput for data movement? We have a close partnership with Nvidia, and we implemented their proprietary NVLink technology on our POWER9 servers, and we're putting things out there from the research division, optimized software packages and such, to exploit the hardware and the software and help data scientists create AI capability with rapid turnaround time, and be able to study a lot more models and more data, and ultimately get higher-accuracy AI solutions out of their research efforts.

You wrote somewhere that IBM expects the role to be really multi-disciplinary, like this department does this, and this group is building this, and this group is doing this, and then part of your charter is to figure out how all of that stuff can be combined in a new way. Is that correct, am I getting that right?

Yeah, you're getting it right, and you probably caught some of it in my answers to your questions. I work on a lot of things, and my team works on a lot of things, that are about cross-stack optimization. We sit in the research division, but I work very closely with our development organizations. So that's another element of [being] interdisciplinary: in terms of the technologies, hardware, software, AI, combined together, and then interdisciplinary in some sense across the business.

We work on things where, for example, we create a research technology. We work to benchmark it in some ways, so people can understand the quality or the order of magnitude of improvement that we've gotten, and we'll go after 30, 40, even 50x improvements in compute time for previously published AI problems. But we don't stop at the point of having just a research publication; we partner with our development organization and transfer it over. They put it out in a technology preview for people to try, and then ultimately turn it into product.

So that's sort of another dimension of the interdisciplinary nature: the inter-business-unit role that I play in trying to help our business and development organizations pluck promising technologies out of research, and vice versa, to help our researchers see how to tackle problems with our clients that are going to make a difference. You talked about opacity of models, or we deal with folks that want to tackle bigger problems than their current compute infrastructure will support, and so I spend time as well with our clients to try to help guide and steer our research in directions that are actually going to make a difference.

I would say the vast majority of guests on the show who are working in the field have software backgrounds. Your PhD is in electrical engineering. Do you think that helps you when you're thinking about software solutions? Is it the same kind of thing, or does it not really map in any useful way?

I think it absolutely maps, and part of it is my own personality, but I think a lot of it is my electrical engineering background and training. I remember my brother giving me advice when I was in high school, when I wasn't really sure if engineering was what I wanted to do with the rest of my life. He said to me, “Go do electrical engineering. It's a degree in logic, and you'll need logic for anything that you do, even if you don't want to stay in engineering.” And I think that was such a great piece of advice, because, coming from a hardware space, we're dealing with very, very complex structures and systems: billions of transistors on a processor, a terabyte of memory, petabytes of storage. We're dealing in big, complex, intricate systems, and you really have to think logically about what the interactions of all the components are.

And so myself and the members on my team that come from more of a hardware background, I think that we tend to look at AI from the perspective of what's actually happening under the covers. Again in the way I was answering a lot of your questions, I tend not to go into the philosophical, but a little bit more down to what’s the math that's happening, and from that, can we explain or reason about it, can we understand how to make it go faster, change the outcomes, can we statistically manipulate the data or the outcome or things like that, right?

So, I think we tend to take a very grounded and practical approach to trying to understand what's actually happening under the covers. I think you need a balance, absolutely, you need the people that are out there doing the theory, coming up with the new applications, and the totally new types of AI, but then you need to actually make it implementable, practical in runtime, other things like that and so my team sits there and tries to draw that bridge.

One of my very best friends in college was an electrical engineering major and he said anytime he went home for the holidays, all his relatives would have him like look at their ceiling fan that wasn't working anymore and they always just assumed he could fix appliances, does that ever happen to you?

Oh it certainly does and I'm no good at fixing appliances, so... Yeah, I can certainly relate to your friend there.

So can you talk about a project, a meaty problem that you've tackled and used artificial intelligence to solve, or one that you worked closely on? If you're the pragmatist, give me a real-world win that you've been involved in recently.

Yeah, so one of the things I love to talk about is actually a set of results that we published last year that is all coming together for us in certain ways. It was a set of work that my team did last year that really took a village to do. Specifically, we wanted to tackle training on ImageNet-22k, so training on a massive data set of millions of images with 22,000 different classes, a really complex problem, but to tackle it in a way that brings training time down into the domain of hours instead of days and weeks and things like that.

Because when you think about training for medical diagnostics, or on information about damage to properties, or things like that, there are certainly things with clear outcomes for people that are going to be related to processing visual data with AI. And the approach that we took was to marry IBM's heritage in high-performance computing with AI, and to draw that bridge between HPC and AI with something that we worked on for quite some time.

We found that it meant using systems that were designed with this high-bandwidth link between CPUs and GPUs, and it meant implementing new software. We wrote a new high-performance communication library to keep all of the learners in the system in sync with each other. And then it meant things around system engineering: getting a large system, in that case 256 GPUs, all working together robustly was also a systems engineering task when we started it.

And by bringing together all those different disciplines, and leveraging tremendous IBM capabilities around everything from high-performance system design to NVLink communication, but applying it and doing it differently because of AI, which is different from classical high-performance computing, we were able to beat records. We were able to train a neural network to the highest published accuracy for that large ImageNet-22k classification problem. We were able to run it at higher efficiency in the system, with lower communication overhead, and it's been published. So we were able to bring together all these different disciplines, and I think that was one of the things that embodied what we're trying to do in doing hardware and software platformization with systems, specifically for AI.
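
[Editor's note: a minimal sketch of the synchronous data-parallel pattern described above, written by the editor with PyTorch's torch.distributed rather than IBM's own communication library. Each learner computes gradients on its shard of the data, and an all-reduce keeps all learners in sync before the weight update.]

    import torch
    import torch.distributed as dist

    def average_gradients(model):
        """All-reduce each parameter's gradient and divide by the number of learners."""
        world_size = dist.get_world_size()
        for p in model.parameters():
            if p.grad is not None:
                dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
                p.grad /= world_size

    def train_step(model, loss_fn, optimizer, inputs, targets):
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()            # local gradients on this worker's shard of data
        average_gradients(model)   # communication step: keep all learners in sync
        optimizer.step()
        return loss.item()

    # Typical launch (assumption: one process per GPU, started with torchrun):
    #   dist.init_process_group(backend="nccl")
    #   torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())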

You know I have this theory that if we didn't have any more advances in the technology, if computers didn't get any faster and we didn't collect data any better, there's probably 20 years worth of stuff we could do with just the technology [we have today], applying it to all of these problems that enterprises have today.

I guess the reason we're not doing that is fundamentally, it's a talent shortage. Would you agree to that assumption? And do you think that's going to rectify itself and that we're going to have a flood of people who think in that math that you're talking about, or is that just the new normal, [that] there's always going to be vastly more projects that we can envision than we have resources to do?

You know, it's a fascinating question. I don't know that I have a real well-formed opinion on it, but to throw out a couple of thoughts: I think that we are seeing tremendous growth in the rates of people coming out of computer science training. Computer science enrollments, from some data that I've seen, look to be above where they were back in the dot-com bubble era, which is encouraging.

Enrollment in things like machine learning classes is through the roof, and online delivery methods like Coursera and open classes from the major schools are, I think, really helping people pick up this skill set, whether they're already in the industry and in transition, or whether they're still in school. I think there's been tremendous improvement in rates of education along the lines of machine learning and deep learning and things like that. I do agree that there's a tremendous amount to be done and a tremendous opportunity in applying even the known techniques to new data sets.

I mean, we're seeing an absolute blossoming. I threw that out there earlier, but we have a recent publication on the use of deep learning to predict waves in the ocean, which we can imagine has impact on everything from routes for ships to outcomes of aquaculture and farming and things like that, right? So that's a completely new type of data set that we had a team go and look at and apply AI to, and certainly there's data all over the place like that, where you can envision people applying even existing AI techniques, or tweaking them, but using current compute capabilities and such.

But I think, as we work through the amount of opportunity in data, we certainly want to continue to improve the compute substrate, and I think we're seeing that the game plan, the initial accelerators around GPUs that people have done and the more art-of-the-possible types of capabilities that are starting to come out around more neuromorphic types of computing, are certainly indicating that we'll be able to continue to advance and improve compute as people branch out into these additional types of data that they want to explore.

AI, and IBM with it, has a long history of games. Claude Shannon wrote in 1949 or '50 about how a computer could play chess, you had Deep Blue beating Kasparov in 1997, Ken Jennings and Watson famously, AlphaGo, you have AI playing poker and bluffing and all of that, and I guess games work well with AI because they have constrained rules and a clear way of scoring points and all of that.

What do you think is going to be the next watershed moment? What's something that AI's going to do, not just gradually—if there's going to be something that happens that is a step function up into, like, 'wow,' the way that Go was? Even if you don't play Go, that was still a big deal. Can you think of the next thing that's going to capture the popular imagination?

You know I think that it's interesting because we always look at these things that are sort of gaming oriented, and this is again more of a personal thing, I personally think that there's really interesting things starting to happen that have to do more with affecting outcomes of people's lives, or impacting things in the developing world, right?

An app that helps a farmer in the developing world diagnose, through a photograph, what may be going wrong with their crop, so that they know how to treat it with the right application of whatever needs to be done. An app that enables diagnosis of a skin lesion in a remote village and tells that person whether or not they need to find their way to a medical center. The same thing in developed countries… things that lead to better outcomes in determining treatments for certain types of aggressive cancers, because the history of what was useful for other people in similar circumstances previously is now digitally accessible.

So I personally feel like there's a lot of interesting work going on in AI that is starting to have more human outcomes. I think it's an interesting shift within the AI community also, to start to look at these things that have a little bit more of a public, human outcome as well, and so I'm personally very excited about some of the things that are happening in that space.

Anybody who listens to the show knows I'm an optimist about the future. I believe that AI gives us the power to make better decisions, the power to actually be smarter, but there are three or four applications of it that a pragmatist such as yourself of course probably thinks about. And one is that, for the longest time, we have all been able to keep our privacy intact because, you know, no matter where you are, Big Brother can't watch everybody at once. But with this technology, it can read every email and understand every phone conversation, even read lips, which it can do very well now and which would turn any camera essentially into a listening device.

The sorts of big data tools that we developed to do all the good things you were just talking about can also be used, essentially, to invade everyone's privacy and track and build profiles on [them]... Is that a legitimate concern about the application of the technology, and is there even a potential answer to that, or is it like, 'oh no, that genie's out of the bottle'?

You know, it's interesting. I think that we're really trying to take a proactive approach to a lot of deep learning, and without taking any kind of corporate stance on it, just from personal observation, there seems to have been a bit of a shift in priorities. We look at the digital-native generation, but I think another aspect of that is sort of: who has my data, and what is it being used for?

What is the provenance of these decisions that are being made, and things like that. And so one of the things that we recently have been talking about is the use of blockchain, for example. How does that relate to AI? Which is probably not where you expected me to go, but to sort of thread that through for a minute... So there's a question of, you know, who's using my data and how, and such like that. But think about the ability of blockchain to provide a capability to understand who has what data, how it is being used, and to track things…

We have started to demonstrate use cases recently that intersect the two, and so you can imagine understanding the flow of information as it relates to something having been characterized by an AI model, and the use of personal information, other things like that. So I think that there are technology mitigations and such that can be put in place, especially from an enterprise perspective, and we can leverage things from other spaces: obfuscation of personal data whilst still being able to create a valid AI outcome, tracking of how data is being used through things like blockchain and other technologies like that. So I think there are certainly mitigation actions that can be put in place.

Right, I am with you. So when you have all of these applications you were talking about, empowering people, giving everybody access to diagnostic tools or a county extension agent, all of the amazing applications you were talking about, do you think that we are underway for those to happen? Or do you think market forces are going to eventually produce all of that stuff that you're talking about, that changes these outcomes for people around the world?

Yeah, I think we're absolutely underway with those things, and, you know, there are many ways in which we're trying to influence those kinds of things, in the sense of also putting out tooling and such for people to be able to create their own solutions. So we just put out something called PowerAI Vision, which is a pipeline for someone who doesn't necessarily have the incredibly deep, PhD-level data science experience to be able to create a solution in the visual domain, based on their data. It's a 'download and go' kind of environment, GUI-driven, all that other kind of stuff.

People can create AI models that are optimized for their unique data in order to create these kinds of outcomes. We see folks engaging with these higher-level tools and such to create, in some cases, pretty unexpected and wonderful solutions across different spaces. So from that perspective, I think that some of the things that we're putting out, from cloud-based capabilities such as Watson Studio that enable people to develop their AI function, to the PowerAI Vision tooling that I just mentioned: you put those things out and you put them in people's hands, and they're able to customize them to determine everything from the ripeness of a banana to the fungus on a leaf or things like that. And that really equips people to create their own solutions and do things like tackle some of the field-agent functions that we were talking about.

And do you think… you've heard this story about the cucumber sorting AI machine?

No, not that particular one, but I can imagine...

Well, this AI researcher had parents in Japan [who] grew cucumbers, and his mother would spend all day sorting them based on four factors: size, color, bumpiness and... I forgot the other one... And using a Raspberry Pi, an analytics box, and an Arduino, he built a device that would sort them and do it all automatically. Of course that's a special case, because this is somebody who's deep in those tools.

Do you think we're going to get to a case... because it sounds like the world you're talking about has the tools to actually do things like that. Are they going to be as easy as writing a macro in Excel or something? That it will be accessible not only to non-data scientists, but to non-programmers? Do you think we're heading in that direction?

Yeah, I think we are. As I mentioned, one thing that I'd refer you to take a look at is this new thing that we put out called PowerAI Vision. It certainly is intended for a subject matter expert, someone who is familiar with some particular visual domain and what the outcome should be, but it enables them to create a deep learning model without having either coding or deep learning expertise. So there's still a subject matter expert in the loop, someone who's able to say that's a fungus of this type, or that's a cucumber of this level of ripeness, but not a deep learning expert. So in the case of the cucumber example you were just talking about, I do recall that was from a year and a half or two years ago now that you mention it, but in that case the person knew what the metrics were that they were looking for, so they knew what the sorting was supposed to be. But they wouldn't necessarily, with for example this PowerAI Vision toolkit, need to understand how to create a particular learning model and all that to target those things. They would have a higher-level, quicker GUI in order to be able to create a neural network, given that they know what the features are that they're trying to look for.

Well, we are out of time on the show. I want to thank you... if people want to follow and keep up with what you're doing, Hillery, how do they do that?

Absolutely, my Twitter handle is @hilleryhunter, and folks are welcome to follow me there... that's probably the best way to keep in touch.

Well thank you so much for taking the time, it was a lot of fun.

It was a pleasure talking to you, thank you so much.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.