In this episode, Byron speaks with Monte Zweben about the nature of intelligence and the growth of real-time AI-based machine learning.
Guest
Monte Zweben is the CEO and co-founder of Splice Machine. A technology industry veteran, Monte spent his early career at the NASA Ames Research Center as the deputy chief of the artificial intelligence branch, where he won the prestigious Space Act Award for his work on the Space Shuttle program. He then founded and was the chairman and CEO of Red Pepper Software, a leading supply chain optimization company, which later merged with PeopleSoft, where he served as VP and general manager of the Manufacturing Business Unit. Monte was then the founder and CEO of Blue Martini Software, the leader in e-commerce and omni-channel marketing. He is also the co-author of Intelligent Scheduling, and has published articles in the Harvard Business Review and various computer science journals and conference proceedings. He was Chairman of Rocket Fuel Inc. and serves on the Dean's Advisory Board for Carnegie Mellon University's School of Computer Science.
Transcript
Byron Reese: This is Voices in AI brought to you by GigaOm and I'm Byron Reese. Today my guest is Monte Zweben. He is the CEO at Splice Machine in San Francisco. Before that, he was the Chairman of Rocket Fuel. He holds a B.S. in Computer Science from Carnegie Mellon and an M.S. in Computer Science from Stanford University. Welcome to the show, Monte.
Monte Zweben: Thank you, Byron. Nice to be here.
So let's start with the basics, let's begin right there: when people ask you what AI is, or what intelligence is, how do you answer that question?
Well, I answer that question in a very abstract and simple way. Artificial intelligence is really the ability of computers to perform tasks that humans typically do very well and that computers, heretofore, typically did not.
So by that definition, my cat food dish that refills when it's empty is artificial intelligence?
Well, you know, I think that simple automation can perhaps be looked at as artificial intelligence, but usually we're looking at tasks that are not just standard steps in an algorithm: do step one, do step two, do step three...
But aren't all computer programs exactly that?
Not all, and that's where it gets interesting. Where it gets interesting is when computers have to look at a variety of different data points and draw conclusions from them that might be generalizations. So I'll give you two examples of artificial intelligence that are very different from what we just talked about. One is machine learning. With machine learning, there is no clear-cut decision for each individual step. Typically you're given a great deal of data, and what you're trying to do is come up with a description of the concept you're trying to learn based on the positive examples and the negative examples in the data: including as many of the positive examples as possible and excluding as many of the negative examples as possible, so that the description has good coverage of those examples. That's very much probabilistic, and it uses techniques that are a little bit different from just concrete or discrete steps.
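To make that first example concrete, here is a minimal sketch, in Python with scikit-learn, of learning a concept description from labeled positive and negative examples. The data and feature values are invented purely for illustration.

```python
# A minimal sketch of concept learning from positive and negative examples.
# The data is synthetic and purely illustrative; a real application would use
# domain features and far more examples.
from sklearn.linear_model import LogisticRegression

# Each row is an example described by two features; the label marks it as a
# positive (1) or negative (0) instance of the concept we want to learn.
X = [
    [5.0, 1.2],  # positive examples of the concept
    [4.8, 1.0],
    [5.2, 1.4],
    [1.1, 3.9],  # negative examples
    [0.9, 4.2],
    [1.3, 4.0],
]
y = [1, 1, 1, 0, 0, 0]

# Fit a probabilistic model that tries to cover the positives and exclude the
# negatives, rather than following a fixed sequence of hand-coded steps.
model = LogisticRegression()
model.fit(X, y)

# The learned description generalizes to examples it has never seen.
print(model.predict([[5.1, 1.1], [1.0, 4.1]]))  # -> [1 0]
print(model.predict_proba([[5.1, 1.1]]))        # class probabilities
```

The point of the sketch is the shift in style: nowhere do we spell out the individual decision steps; the model is induced from the coverage of positive and negative examples.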
Another good example is reasoning tasks. One of the companies that I used to run helped manufacturers plan and schedule operations, and to do that you have lots of choices about what you might manufacture on day one, day two, day three, day four. You have lots of orders coming from customers demanding different kinds of products, and you have lots of inventory that goes into products and their bills of materials in order to build things. The puzzle is trying to figure out the best schedule that meets all of the customer demands while allowing you to carry the least amount of inventory on hand. And those are lots of different choices.
Humans are really good at looking ahead to see the different choices one might make, searching through a set of possibilities, and making decisions sometimes even on the fly, and those kinds of applications are very different from computer programs that do the same thing every time. Sense, plan, act: these are the kinds of processes that robotic systems, planning systems, and scheduling systems use, and that the space systems used when I ran my lab at NASA. These are very different kinds of applications that try to capture, inside of computer programs, the decision-making that humans do.
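To make the scheduling example concrete, here is a minimal, hypothetical sketch of that choice-driven search: a greedy scheduler that assigns orders to production days while trying to minimize how long finished goods sit in inventory. The orders, the daily capacity, and the greedy rule are all invented for illustration; real planning systems search far larger spaces with much more sophisticated techniques.

```python
# A minimal, hypothetical sketch of scheduling as a search over choices.
# Orders, capacity, and the greedy rule are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Order:
    name: str
    quantity: int
    due_day: int  # the day the customer needs it

DAILY_CAPACITY = 10  # units we can build per day (invented)

def greedy_schedule(orders, horizon_days):
    """Assign each order to the latest feasible day before it is due,
    so finished goods sit in inventory for as little time as possible."""
    remaining = {day: DAILY_CAPACITY for day in range(1, horizon_days + 1)}
    plan = {}
    # Handle the most urgent orders first.
    for order in sorted(orders, key=lambda o: o.due_day):
        # Search the choices: latest day first, so inventory carry is minimal.
        for day in range(min(order.due_day, horizon_days), 0, -1):
            if remaining[day] >= order.quantity:
                plan[order.name] = day
                remaining[day] -= order.quantity
                break
        else:
            plan[order.name] = None  # no feasible day: flag for replanning
    return plan

orders = [Order("A", 6, 2), Order("B", 8, 3), Order("C", 5, 3)]
print(greedy_schedule(orders, horizon_days=3))  # {'A': 2, 'B': 3, 'C': 1}
```

A production planner would replace the greedy rule with real search or optimization, but the structure is the same: enumerate choices, check constraints, and pick a schedule that satisfies demand with the least inventory.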
So if somebody were to ask you, “OK given those definitions, where are we at? What is the state of the art for narrow AI right now?”
It's a great question, and I think some of the hype and news is a little bit ahead of perhaps where we are as a science. I remember back when I was beginning my AI studies in the early 80s, there was so much hype that we were going to have complete general artificial intelligence available in just a few years and it didn't happen.
And now we're seeing similar hype, and the hype is coming because of the very big success that we've had with the application of machine learning. Machine learning has stepped up to another level of contribution in many disciplines: precision medicine, marketing, planning, fraud detection, and other anti-crime types of applications. But the power of machine learning that we are seeing today has literally come about because of the power of distributed computing. The more data you can bring to bear on a machine learning task, the better and more accurate your machine learning models are.
And we finally have figured out a way for organizations around the world to use many computers at once to deal with very, very large datasets. In the past, only specialists with PhDs in distributed systems were able to do that. Now just about anyone can do that. So that's broken new ground in being able to use empirical methods for machine learning.
However, some of the first-generation artificial intelligence methods for reasoning and planning still haven't broken through, and I have to say that I think we're far away from having general artificial intelligence. We've made great progress, and companies are deploying the beginnings of artificial intelligence into specific, focused tasks, kind of like the first generation of expert systems. But we have not really come very far toward creating a general intelligence, in my humble opinion.
Your company is called Splice Machine. What do you do and why did you start it? What problem were you trying to solve?
Splice Machine was started based on the problems that my teammates and I encountered in deploying AI applications in our previous ventures: back at NASA, deploying systems for planning the repair and refurbishment of space shuttles; at Red Pepper Software, the manufacturing planning and scheduling company I talked about earlier; later at Blue Martini Software, where we were building omni-channel marketing applications that take customers on individual customer journeys and the like; and even at Rocket Fuel, where we were optimizing media campaigns. In every one of these ventures, it was extraordinarily difficult to build these AI applications, because they required the duct-taping of many different computation engines together to really deliver on the promise of operational AI. And operational AI that's being used every day really requires three fundamentally different computational workloads.
First, it requires the machine learning algorithms that you hear about every day, both deep learning as well as other machine learning techniques. But in order to use machine learning, you have to have very, very good data that's been prepared, cleansed, and transformed in a way that the machine learning algorithms can make sense of. And that's typically analytical workloads, where you're taking tons of raw data and preparing it for machine learning.
But to be truly operational, you have to have operational computing workloads that are running the applications. And until now that's been very separated from the other two components of operational AI. Splice Machine brings all three of these together into one integrated data platform. We have a SQL relational database management system that can literally run applications at petabyte scale, completely integrated with analytical workloads that can analyze data, even in real time, on that operational data I was just talking about. And instead of taking data out of the data platform and sending it to machine learning algorithms, what Splice Machine has done is taken the machine learning algorithms and put them in the data platform natively.
So now you can literally perform machine learning on data that's happening in the moment, and that's how you can make operational AI applications truly happen. Perhaps I can bring this to life with a story. One of the stories is about an insurance company, for example, that was intending to migrate its business applications for managing clients, managing policies, and managing claims to the cloud, to make them more agile. In order to do that, they were looking at Splice Machine to take an old relational database management system that wasn't scaling and wouldn't be agile in the cloud and move it to Splice Machine. But I was talking to the various leaders of the organization and asking whether it made sense for us to not just move the business system over to Splice Machine, but perhaps inject some AI directly into the business system. And they told me they had already built some AI-based machine learning models specifically for business processes like fraud detection and also underwriting risk.
And I said, “That's great, how are those models?” And they said to me they had proven that those models can predict fraud better than any human in the insurance company. And then I said, “Well that's great, why aren't they being used?” And here's the key point, here's the punch line. They said that since there was a separation between the operational systems that run the business and the systems that can predict fraud about the business, in the time that it takes for the operational data to get to the AI-based models, the operational data changes many times. So the scores that predict fraud were wrong because of the latency of getting the data from the operational environment to the prediction environment.
So then I said, “So what you're saying is that when we're live on Splice Machine, you'll be able to take the AI-based machine learning models and put them directly onto the real-time data running the business because Splice Machine has both components.” And they said “yes,” and that is the punch line: you'll be able to literally predict fraud in real-time on the operational data with no latency between the operational systems and the analytical and artificial intelligence systems. And that's the essence of what's special about Splice Machine: we're bringing these three dimensions together to really deliver on true operational AI.
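As a generic illustration of the pattern being described (training and scoring a model directly against the store that holds the operational data, rather than exporting it to a separate environment), here is a minimal, hypothetical sketch using SQLite and scikit-learn purely as stand-ins. This is not Splice Machine's actual API, and the table and column names are invented.

```python
# A generic, hypothetical sketch of the pattern described above: train a model
# on the results of a SQL query against the operational store, rather than
# exporting data to a separate environment. SQLite and scikit-learn are used
# only as stand-ins; this is not Splice Machine's actual API.
import sqlite3
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Stand-in for the operational store (in-memory here; schema is invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE claims (amount REAL, num_recent_claims INTEGER, is_fraud INTEGER);
    INSERT INTO claims VALUES (120.0, 0, 0), (95.0, 1, 0),
                              (8200.0, 4, 1), (7600.0, 5, 1);
""")

# Train directly on the results of a SQL query over the live tables,
# instead of shipping the data to a separate machine learning environment.
df = pd.read_sql_query(
    "SELECT amount, num_recent_claims, is_fraud FROM claims", conn
)
model = LogisticRegression()
model.fit(df[["amount", "num_recent_claims"]], df["is_fraud"])

# Score a new claim as it arrives, with no export/import latency in between.
new_claim = pd.DataFrame({"amount": [7900.0], "num_recent_claims": [4]})
print(model.predict_proba(new_claim)[:, 1])  # predicted probability of fraud
```

The design point the story makes is the latency: because training and scoring happen against the same store the business runs on, the fraud score reflects the data as it is in the moment, not as it was when an export finished.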
You know when somebody is looking around their enterprise for the kinds of problems that AI can solve right now, how do they do that?
Yes. So for what we do at Splice Machine, the best kinds of problems are, first, problems where there is enough data to help you make predictions. If you have very, very little data, it's very hard to use machine learning to get signals and make accurate predictions; with more voluminous data you can get better and better models. So volume of data is important. Two, problems where real-time components are important are great sweet spots for Splice Machine: where it makes sense to train your models on data in the moment, and where you'd like to be able to take a machine learning model and use it on data that might be very real-time.
Here's an example: all of us have experienced advertising on the web, and often, in poor examples of machine learning where there isn't a tight linkage between the operational world and the AI world, you can get situations where you're literally shown an advertisement for something that you just bought online a few minutes ago. That's crazy. That's a real lost opportunity, because you're not going to buy that product again.
What happened is that the system that's presenting the advertisement or the offer or the marketing message isn't real-time enough to know what's just happened in the moment. So for us, we think the places in the enterprise where you have operational data that is helping you determine something really important for the company, whether you might be detecting fraud, figuring out what the right message is for the client, or perhaps even saving lives by sending nurses into hospital rooms for patients that are likely to code blue or are experiencing septic shock, these kinds of decision points, where you can literally make decisions in the moment, are great applications for what we do.
You mentioned the thing about ads. I remember I was getting some printer paper once on the big e-commerce site and it said, “Do you want one ream or two reams?” and then it had a pallet. And I was like, ‘You could buy a pallet of office paper?’ So I clicked on it to go look at this pallet of office paper and that was it, and now for like a month, everywhere I go they're trying to sell me this pallet of office paper. And I just looked at it for like eight seconds, and then I bought the one-ream bundle.
So anyway, tell me the size of enterprises that should be thinking about this technology. I ask this because you probably heard the story about the Google programmer who is Japanese; his parents have a cucumber farm and his mom spends her days sorting these cucumbers by length and color and all that, so he got an Arduino and a Linux box and built something that sorts these cucumbers on four variables. He trained it, and I got to thinking, if that's a valid use case (and maybe it isn't, because I'm sure it was a passion project for this guy), but if that sort of thing can be done, this is clearly a technology that's accessible to smaller enterprises, but only to a point, right? What's the range of enterprises you see that are deploying solutions that are material enough to offset the cost of developing them?
Yeah. This is a great question, because there truly is broad applicability. So, for example, one of the customers deploying Splice Machine is a credit card company that wanted to completely digitally transform its ability to handle disputes of credit card charges, and they needed to have a few years of credit card transactions be immediately accessible to the fraud dispute application. This turned out to be a multiple-petabyte type of problem, and their existing technology couldn't handle it. Now they're live on Splice Machine, where every transaction streams onto the Splice Machine data platform seconds after it happens in the world and is available for every bank around the world to process their disputes. That is, of course, a very large enterprise doing very mission-critical transactional workloads.
On the other hand, we're working with a very small startup in the healthcare arena that is pooling together data from multiple neurology clinics in order to build machine learning applications that will help neurologists take multi-dimensional data that until recently could not be processed by the individual doctors, and build advisory applications that can help the doctors predict the best drug therapy and the patient's likely outcome given the trajectory of a disease. We've started building these machine learning models, for example, on multiple sclerosis data, showing that you literally can build predictive capabilities that help doctors in their practice achieve better patient outcomes, as well as increasing the opportunities for their clinics, because many of these multi-dimensional tests are reimbursable. These are also data pools that are attractive to pharma companies for optimizing their clinical trial processes, as well as to payers to help create healthy populations.
So this little startup is building a population data platform in neurology that literally will help people be healthier, help pharma do better drug discovery, and help payers keep populations healthier, and there are literally just a few doctors in this beginning company building out this full network. So we do see great applicability, because this kind of technology is elastic. What that means is that when you begin, you might only have a few terabytes, but let's say you grow and you have many petabytes. Since this is scaled-out technology, meaning as you grow in volume you add more computers, it's easy for small companies to start small and grow big, and so we do see diversity in our customer base.
And then on the flip side, what are some of the pitfalls? What are some of the mistakes you see? Maybe they're mistakes in action or mistakes in how they think about it. But what are some of the pitfalls?
Well, there are always pitfalls in the application of machine learning. And one of the things that I always evangelize is that machine learning, first off, is a contact sport. You can't expect to do machine learning once and think that you've come up with a model that's predictive. It's an ongoing process. Your markets change, your domain changes, and people who are making decisions in business practices are incredibly adaptable. But machines aren't. And so you have to keep feeding your models.
And what I like to talk about are feature factories, and what I mean by that is that machine learning is only as good as the data you put into it. Algorithms are great, but it's the features, the data elements that machine learning algorithms train on, that have the signal in them. And when the market changes, and the concepts that you're trying to learn change, you have to come up with new features. And we're still at a point where data scientists need to do that. They need to think about the data; they need to figure out how to change the raw data into data that might be usable by the algorithm.
As a simple example, when showing marketing messages to people, it's great to know that somebody has been to a shoe site, but it's more important to know that they've been to a shoe site recently. And so being able to take raw data, like where you visited on the web, and turn that into recency data can radically improve the effectiveness of the messages being shown to you. That kind of idea, taking raw data and turning it into recency or frequency or even monetary-value summaries of your history, is what a data scientist does every day.
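As a small, hypothetical sketch of that kind of feature engineering, here is how a raw event log might be turned into recency, frequency, and monetary-value features with pandas. The column names and data are invented for illustration.

```python
# A small, hypothetical sketch of the feature engineering described above:
# turning a raw event log into recency / frequency / monetary-value features.
# Column names and data are invented for illustration.
import pandas as pd

events = pd.DataFrame({
    "user_id":   [1, 1, 2, 2, 2],
    "category":  ["shoes", "shoes", "shoes", "books", "books"],
    "timestamp": pd.to_datetime([
        "2024-05-01", "2024-06-28", "2024-03-10", "2024-06-20", "2024-06-29"]),
    "amount":    [0.0, 59.0, 0.0, 20.0, 15.0],
})
now = pd.Timestamp("2024-07-01")

features = events.groupby("user_id").agg(
    days_since_last_visit=("timestamp", lambda t: (now - t.max()).days),  # recency
    visit_count=("timestamp", "count"),                                   # frequency
    total_spend=("amount", "sum"),                                        # monetary value
).reset_index()

print(features)
```

The raw rows say only where and when someone clicked; the derived columns are what an algorithm can actually learn from, which is exactly the transformation the "feature factory" keeps producing as the market changes.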
So the pitfall is the company that's not doing that: the company that builds a machine learning model, deploys it, and doesn't change it by monitoring it, seeing how well it's doing, and continuously trying to improve upon it. I remember being at a data science conference and listening to the head of data science for Spotify, and what she said is that the biggest and most important thing for the company to adopt was a culture of experimentation. So I think the number one lesson is that if you're going to use AI, and machine learning in particular, you must set up a culture of experimentation and use software systems that enable you to rapidly modify the models you deploy and evaluate them.
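A minimal, hypothetical sketch of that monitoring discipline might look like the following: score the deployed model on fresh labeled data as it arrives, and retrain when accuracy drifts below a floor. The threshold and the retraining rule are invented; a production setup would also version the models and track the experiments themselves.

```python
# A minimal, hypothetical sketch of monitoring a deployed model and retraining
# when its performance drifts. The threshold and data handling are invented.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85  # invented alert threshold

def evaluate_and_maybe_retrain(model, recent_X, recent_y, train_X, train_y):
    """Score the deployed model on fresh labeled data; retrain if it drifts."""
    accuracy = accuracy_score(recent_y, model.predict(recent_X))
    if accuracy < ACCURACY_FLOOR:
        # The market or domain has shifted: refit on the latest training set.
        model = LogisticRegression().fit(train_X, train_y)
    return model, accuracy

# Toy usage with invented data.
train_X, train_y = [[0], [1], [2], [3]], [0, 0, 1, 1]
model = LogisticRegression().fit(train_X, train_y)
recent_X, recent_y = [[0], [3]], [0, 1]
model, acc = evaluate_and_maybe_retrain(model, recent_X, recent_y, train_X, train_y)
print(f"accuracy on fresh data: {acc:.2f}")
```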
Well, that is great advice and probably a good place to leave it, so Monte, why don't you close by telling people how they could find out more about your company?
Fantastic. Thank you very much. And to learn more about operational AI and what kind of software Splice Machine can bring to the enterprise both in the cloud and on-premise, you can go to www.SpliceMachine.com, and see some short videos of what we do and collect white papers. I look forward to speaking to the listeners about their problems in bringing operational AI to bear upon the enterprise and to really make the next generation of business outcomes that much better with the use of this new technology. Thank you very much, Byron.
Thank you, Monte.