Episode 66: A Conversation with Steve Ritter

In this episode, Byron and Steve talk about the future of AGI and how AI will affect jobs, security, warfare, and privacy.


Guest

Steve Ritter holds a B.S. in Cognitive Science, Computer Science and Economics from UC San Diego and is currently the CTO of Mitek.

Transcript

Byron Reese: This is Voices in AI, brought to you by GigaOm. I'm Byron Reese, and today our guest is Steve Ritter. He is the CTO of Mitek. He holds a Bachelor of Science in Cognitive Science, Computer Science and Economics from UC San Diego. Welcome to the show, Steve.

Steve Ritter: Thanks a lot Byron, thanks for having me.

So tell me, what were you thinking way back in the '80s when you said, "I'm going to study computers and brains"? What was going on in your teenage brain?

That's a great question. So, first off, I started with a Computer Science degree, and I was exposed to the concepts of the early stages of machine learning and cognitive science through classes that forced me to deal with languages like LISP, etc. At the same time, the University of California, San Diego was opening up its very first department dedicated to cognitive science. I was just close to finishing up my Computer Science degree, and I decided to add Cognitive Science to it as well, simply because I was really amazed and enthralled with the scope of what Cognitive Science was trying to cover. There was obviously the computational side, then the developmental psychology side, and then neuroscience, all combined to solve a host of different problems. You had so many researchers in that area applying it in many different ways, and I just found it fascinating, so I had to do it.

So, there's human intelligence, or organic intelligence, or whatever you want to call it, there's what we have, and then there's artificial intelligence. In what ways are those things alike and in what ways are they not?

That's a great question. I think it's actually something that trips a lot of people up today when they hear about AI, and we might use the term artificial general intelligence, as opposed to artificial intelligence. So a big difference is, on one hand we're studying the brain, and we're trying to understand how the brain is organized to solve problems, and from that derive architectures that we might use to solve other problems. It's not necessarily the case that we're trying to create a general intelligence or a consciousness; we're just trying to learn new ways to solve problems. So I really like the concept of neural-inspired architectures and that sort of thing. And that's really the area that I've been focused on over the past 25 years: how we can apply these learning architectures to solve important business problems.

So, give me an example of where biology at least points the way to an answer of some kind.

Well, the whole concept of a neural network: a brain has the ability to learn through experience, without rules being input into the system; it can learn how to adapt and how to deal with new situations. Likewise, we see a big distinction between rule-based, programmed systems and learning-based systems. The rule-based systems require a human expert to really understand a space and be able to encode all of those requirements into a set of coded rules, whereas a machine learning system is instead learning through experience, and is able to pick up on many more nuances that a human programmer or scientist may not be able to imagine from the get-go.
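A minimal sketch of that distinction in Python might look like the following. The fraud-flagging task, the features, the thresholds, and the toy data are all hypothetical, chosen only to make the contrast concrete; only the scikit-learn API is real.

    from sklearn.tree import DecisionTreeClassifier

    # Rule-based: a human expert encodes the decision logic up front.
    def rule_based_flag(amount, country_mismatch):
        # Fixed, hand-written rules; nuances the expert didn't anticipate are missed.
        return amount > 5000 or country_mismatch

    # Learning-based: the system derives its own decision logic from examples.
    # Each row is [transaction amount, country mismatch?]; the labels come from
    # historical outcomes rather than from hand-written rules.
    X = [[120, 0], [8000, 1], [50, 0], [9500, 0], [300, 1], [7000, 1]]
    y = [0, 1, 0, 1, 0, 1]
    model = DecisionTreeClassifier(max_depth=2).fit(X, y)

    print(rule_based_flag(8000, False))   # True: tripped the fixed rule
    print(model.predict([[8000, 0]]))     # decision learned from the examples

The point is only that the learned model's decision boundary comes from data, so retraining on new examples updates the behavior without anyone rewriting rules.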

Do you think we have the techniques to build a general intelligence and we just don't know how to? Not everybody would agree with that, I would assume, but do you think we kind of know all the basics and we just don't know how to put them together, or are we nowhere near understanding how general intelligence works?

Well, first of all, I think you're very right that there are a lot of different opinions out there, and it's really hard to say with certainty where we're going to be. My personal belief is that it is possible. I think we will achieve it, but I think we're quite a way away from it. I hesitate to put a number of years on it, but I certainly don't think it's within the next 5-10 years.

So I hear numbers between 5 and 500 and then a lot of people settle around 20 or 25, and I've always been a little suspicious, because it's like far enough out that nobody's going to remember you said that, but it's close enough that it's kind of real, like you have to deal with it. So, do you believe that a general intelligence is an evolutionary development from the narrow intelligence we have now? Like, I'm kind of re-phrasing my question from before: Are we on the path to building a general intelligence, or is it more like, no, a narrow AI only shares the word “intelligent,” but it doesn't really have anything to do with how it is that people have this versatile general intelligence?

Yeah, I think we're on the path, but I personally believe we're quite a way away. The AI that we use today is really very narrowly focused as you said, and I believe that there's substantially more learning that's needed, discovery that's needed, in order to chart a course, but I believe we're building the tools and the foundation to make progress. And I believe where a lot of people start to predict dates much earlier in that spectrum of 5 to 500, is because of the rate of advancement, and there's this assumption, that just because we're getting very good at narrowly-defined artificial intelligence, very quickly, that that somehow means we're going to jump the chasm and arrive at general intelligence, and I'm not so sure that's the case.

I'm going to play devil's advocate for a moment. So, we have the brain and we don't really understand how it works, we don't know how a thought is encoded and how it does what it does. Then we have a mind, which, however you want to define it, it's some set of capabilities that don't seem to be things that an organ should be able to do. Like, your liver isn't creative and doesn't have a sense of humor, but you do, and that is your “mind.” Then we have consciousness, we're able to experience the world instead of just measure it. It’s called “qualia”—we stub our toe and it hurts. Whereas a computer, if you drop a computer, it can measure that, but it can never feel it.

So, couldn't it be the case that you have to be able to do what the brain does, be able to do what the mind does, and perhaps you need consciousness to be generally intelligent? Yet, people say, "oh yeah, we can build that, we're on the way to building that already." There's a huge disconnect in that to me. How can you be so confident that we can build it, when there's so many aspects of intelligence that we are so far from understanding?

I actually really agree with you. I know you meant it to be a devil's advocate type of question, but I agree with you, and that's one of the reasons why I think we will get there. But we have a lot to learn, and the mind/body debate has been around for a long time: understanding how all these pieces come together, are we just a sum of our parts, or is there something magical and spiritual in there?

There are a lot of questions we just can't answer yet, and I agree with you. I think it is actually one of the warning signs of potentially hitting another AI winter: the expectations get so far ahead of reality that people become disenchanted, and then a series of events happen and we get into another winter. So I think it's really important to keep people focused on, you know, the problems we are solving today, the advancements we are making today, and those alone bring a host of challenges that we need to start dealing with as a society.

So again, I have a bit of a disconnect. You're confident we're going to do it, but where does your confidence come from? I mean, if in fact we are not solely machines, if there's something non-mechanistic about how we do our thing, some kind of strong emergence or something else, how can you be so confident that we're going to build it, that we're going to build a general intelligence, even in 25,000 years?

That's a good question, and I don't have a very good scholarly answer for why I personally believe we're going to get there. I think we can look at the track record of what we have achieved, the advances that we're making both on the computational side and on the side of understanding the brain, because we are slowly understanding more and more of how the brain works. There has definitely been a pattern where things that seemed mysterious in the past are now somewhat understood, and I believe that's going to continue. I have no real insight into the speed at which it's going to happen, but I believe we're going to get there.

I'm going to give you four downsides to AI, and I want you to pick the one you're most worried about and let's talk about it, okay? I'll go through them quickly. One is that AI is going to take all the jobs. One is that AI will be used by governments to basically track everything about everybody, and privacy will be gone. One is that AI will be used in warfare to make machines that make kill decisions on their own. And one is that AI will be used to exploit security vulnerabilities in everything from your toaster to the national power grid. So: jobs, security, warfare, privacy. What worries you the most?

Well they all worry me.

Well let's talk about them all then!

Let's talk about them all...

Pick one of them.

So, security vulnerabilities: I've spent quite a bit of my career focused on security, cybersecurity, internet security. Think about the rate at which we create vulnerable software, and the rate at which human malware researchers, both good and bad, are able to find those vulnerabilities; once you unleash an artificial intelligence intent on finding those vulnerabilities, which I'm sure people are already doing, that rate is just going to increase. And I've been nothing but... "surprised" isn't the right word, since it's happened so many times, but we have devices and technology that are released into the world with incredibly poorly-thought-out network communication stacks or security infrastructure around them. It really is a critical issue.

You know, with dueling software you kind of hope that the good guys and bad guys are going to stay in parity, but anything that's baked into a chip, like chips in IoT devices, those are not upgradable. They will still be in use years from now, their security vulnerabilities are widely known, and they can't be patched. Is that correct?

That is correct, and in fact I think Intel has just experienced a series of vulnerabilities in their hardware that have been very difficult for the entire world to deal with. So I think that is a problem, but I also believe that what you hinted at is a real positive: we can also employ AI to protect ourselves. It's hard to find a company that isn't using AI today, but there are definitely a number of security companies taking an AI-based approach to discovering issues, a whole new take on vulnerability management, but also whole new takes on endpoint protection, etc., that are more behavior- and detection-based than signature-based. So, as always, it's attack and countermeasure: the bad guys do something, the good guys respond, and we can only hope that those countermeasures keep up with the attacks.

So jobs. You said you're worried about that too, so you're not in the camp of people who think, "No, automation's awesome, it increases productivity, and there's no way in the world anybody could be against increased productivity, because increased productivity is how we get higher wages." So what worries you about automation and about artificial intelligence automation on jobs?

So that's true, automation in general is good. It does increase productivity, and historically we have had automation happening for a very long time. There has always been a worry that automation is going to do irreparable harm to the workforce, but we manage and we overcome. What's different right now, in my opinion, and I think many people agree with this, is the nature and the type of the jobs we're able to displace. AI is becoming capable of more and more cognitive tasks, replacing more and more skilled jobs, and that's increasing at an increasing rate. It's accelerating. I think a big challenge here is the rate of change, combined with the fact that I see very little movement in governments and policy development to help deal with the change.

So, I'll push back on this. Tell me one cognitive job you think AI is going to replace, and don't give me "it'll read X-rays," don't do that one. Tell me a cognitive job where AI will actually replace humans.

Well, I can give you an example of something we're working on right now in identity verification. One mechanism for verifying identities is to have a human expert who is very good at, for example, inspecting an identity document for signs of forgery and tampering. It may be debatable how much of a cognitive job you think that is, but it's something that was difficult for us to automate in the past. Now we have the tools to do it, and to do it at scale, and so within my own area, on a relatively small scale, we are seeing the replacement of that job already.

So, let me take a different tack at it. In 1994 or 1995, if you came along and said, "Hey, you know what would be really cool? If we took a billion computers and connected them all with HTTP, and used HTML and put up websites. And you know what that's going to do? It's going to create trillions of dollars in wealth. It's going to make something called Google, something called Amazon, something called eBay, something called Etsy; it's going to make something called Baidu, it's going to make Alibaba, it's going to make Airbnb, it's going to make a thousand other companies, employing thousands of people." You would have said, "Well, that doesn't seem possible." You would say, "Wow, no, the computer's actually going to destroy cognitive jobs, it's not going to create them." But what we had was a technology, digital computer technology, that clearly has had more job creation to it than job loss. How is AI different from that?

Well, that's a great question. For example, in the retailing space you have a company like Amazon that is causing significant change, bringing a lot of that work online using the internet, and you don't really see a one-for-one replacement of that type of retail work. It's just a new type, and yes, it's creating more jobs, but it's creating different types of jobs.

So, my concern isn't that, overall, the number of jobs goes away. I think the types of jobs need to change, and we need to do more as a society to prepare our workforce for that type of change. At the end of the day, I believe it creates a stronger economy, more interesting types of jobs, more interesting work, and incredible freedom for people, but if we're not ready for that change, then it's going to be very hard to adapt.

So, when the assembly line came out, that was a kind of artificial intelligence, and if you were an artisan, that was a frightening kind of technology. And when the steam engine came out, that's kind of an AI in a way; it's artificial power, it replaced animal power. Electricity the same way. These amazingly disruptive technologies came along, and there wasn't some "okay, the assembly line has come out, let's re-think how we train people" or whatever. And yet unemployment never really surged because of those things. People are very versatile; they learn new things, they learn new skills, and they pick it all up. So why do you think there needs to be this wholesale... do you still think this time it's different, that AI is somehow different from the assembly line or electricity in how it's going to impact the workforce?

I do and I think it's the speed. The assembly line didn't happen as rapidly and its impacts weren't as rapid as the impacts that we are seeing right now, and will continue to see from this current wave of AI, and I say "current wave of AI," because, obviously it's been around for a very long time, we're just advancing very quickly right now.

So you said, of my four worries, privacy. The setup for that one is: before, you could tap my phone or follow me or whatever, but I was lost in the sea of other people's data. With AI you can record every conversation, presumably process it and analyze it and cross-reference it, and you can read lips with cameras and do all of these things. Do you believe that AI has the capability to end privacy, especially in places that don't have legal safeguards against anything like that?

Without legal safeguards, I think, yes, it's a real challenge. But I don't believe we should give up on the demand to maintain our privacy; I just think that what privacy means in the future will likely be a very different thing than what privacy means right now.

That sounds very newspeak, "Ahh no, when everybody knows everything about you, that's real privacy." So what do you mean by that, how can privacy change and still be privacy?

No, that's not what I mean. I don't mean, "hey, we should just put everything out there, and now since everything's out there we're not as worried about privacy and therefore it has a different definition." I think that we need to continue to put the controls in place and ensure that we can have privacy in certain scenarios. It's perfectly fine for me to be anonymous and not have to be a specifically identified person in some cases, but in other cases I do need to be identified. So I don't think it's inevitable that because this information's out there, we lose our privacy.

But with the same kind of system that takes the data of a million cancer patients and looks for telling signs, you could take the activities of a million people and figure out their political parties, or figure out any number of things about them. The technology's kind of neutral about how it's used. So what is to stop, do you think, any large actor, and it doesn't have to be government at this point, it can just be a super-large company with an enormous amount of data, from essentially modeling every person and owning that data? They will own it at that point, and then they can traffic in it and sell it and act on it and do all of that. Are we naive to say, "oh, I just don't like that"? What's your takeaway on that?

Well, I think technically what you're saying is absolutely doable, and there is quite a bit of that happening right now. I mean, these very large companies have access to very telling information about all of us. Really the only way, or not the only way, but one way that we can address that is through stronger regulation and more policy. This is an area where I think governments do need to become more active in protecting what kind of data can be collected and what it's allowed to be used for.

And then the final one of my four worries is warfare. Militaries would use artificial intelligence under the theory that it's more efficient and more cost-effective, and can do a better job in certain circumstances than a human, so the trend would be to make machines that can make autonomous kill decisions. Do you believe that's inevitable? And is that fine, and we just need to kind of grow up and deal with it, or what?

Well, I do believe it's the trend. I hope it's not inevitable; that's not the type of world I would like to live in.

But don't we already have it, like isn't a land mine an artificial intelligence that makes an autonomous kill decision?

I don't believe so.

Its logic is: if something steps on me and it's over 50 lbs, I will kill it. I mean, that's just simple programming logic, right?

That is simple programming logic but I'm not sure I would call that AI, and I think that's considerably different than having a weapon that is enabled to go look and find and make decisions and possibly mistakes on...

But if somebody had a landmine and they said, "Yeah, those old landmines, they were terrible, but look, this new one has a camera, and it can actually make sure someone's wearing fatigues, and it has a sniffer sensor that can smell gunpowder, so it's going to make sure the person is carrying some ordnance of some kind." So it's a better landmine, it doesn't just kill indiscriminately… it's a smarter one. Wouldn't you prefer that landmine over the old dumb one?

Possibly, but again, there's a distinction in my mind between an improvement to conventional ordnance, versus a robotic AI that's out and about, seeking. That seems to be a big difference to me.

So anybody who has listened much knows I'm actually quite optimistic about all of these technologies; I am squarely in the techno-optimist camp. So now that we've kind of taken off our rose-colored glasses and looked at some of these things, take a minute and paint a picture of what good you think this new technology's going to bring to the world. What is worth all of that heartache? Tell me that.

I think, as is the case with every technological advancement, there is always good and bad that can come out of it.

I don't know, you know that ice-cream syrup that hardens on the ice-cream? I kind of think that that's just completely good. It's really hard for me to see...

I'm a tech optimist as well, and that's really what keeps me excited and motivated in the field: you see that there are incredible opportunities for good to come out of this technology. I think one really inspiring example is AI-driven personalized healthcare, and the good that could be done for our world by providing a really high level of healthcare to the far reaches of the Earth; I think that's something that is possible with these AI-assisted solutions. Something a little more personal and close to home: I'm really excited about self-driving cars. I know they get so much time in the news these days that it's hard to hear another story about them, but I have aging parents, and I would love the idea that an autonomous vehicle I trust could go pick them up and take them shopping and make their lives easier. So I think there is untold good that can come out of AI, and I think it's absolutely worth pushing forward. But we've got to keep the bad stuff in check.

So you're the CTO of Mitek, tell me a little about Mitek, what's its mission, how are you using artificial intelligence?

So our mission at Mitek is to help create safe digital spaces for businesses to conduct transactions with consumers. We have a digitized world, more and more people want to conduct business from their mobile devices, and the types of business we want to conduct require higher and higher levels of identity assurance. So Mitek is bringing solutions to market that allow, for example, a bank to enroll a new customer from their digital device, without requiring them to come into a branch, while at the same time maintaining a high level of assurance that the identity being provided is a real, actual identity.

There's a lot of regulation encouraging financial services companies to ensure that identities are proofed; know-your-customer requirements and anti-money-laundering regulations are really influential. But we also have customers in, for example, the sharing economy, where, in order for someone to feel confident entering into that shared economy, there has to be a certain level of trust, and that trust really has a solid foundation on top of identity. So at Mitek, we are using a combination of computer vision and machine learning to bring inspection of government-issued identity documents to the market.

So, as it happens, literally today I was setting up a wire transfer service so I could wire money internationally, and I won't say the name of the company, but it was actually all really nice and smooth and all that, but at one point they had me upload a scan of my passport. And then I got a notice probably 20 minutes later that I had been approved. So, (a) do you think it was a machine that made that assessment, and if not, (b) could a machine make that assessment? Could it really look at that scan of my passport and tell that I hadn't just copied and pasted and changed the name and all that?

That's a great question. There are a couple of different dimensions here. First of all, the timescale of 20 minutes: that to me seems more like it went back to a human to look at the image and determine whether or not it was a valid passport. They might have done some data entry, some data extraction, and then maybe entered your passport ID into a system to see if it was a valid passport ID, etc. But that's not how we do it at my company, and we don't really work with scans per se. I get the impression that you made your own photocopy or image of the document and sent the image.

We have SDKs that are integrated directly into the mobile device. You might have been coming in via a desktop session; we would have handed that image-capture task off to your mobile device, and then, using our SDK on your mobile device, we're really using a video stream to inspect the document and capture the right number of images, which are then sent back to our backend, which happens to be cloud-based. There, those images are evaluated by a barrage of different algorithms, some of them very current, based on deep learning techniques, and some of them using very classical computer vision techniques to look at various aspects of the document. Each document has built into it a set of security features and different attributes that can be inspected to look for not only the validity of the information on the document, but evidence that the document has been tampered with or forged.

And then finally, very similar to a face recognition system, where you want to ensure that the image being presented to the camera is an image of a living human and not a picture of a person or a mask, we have liveness detection for anti-spoofing, to ensure that we can tell, "hey, we're not being presented a digitally modified picture of the document"; instead we're seeing a real, actual, physical document in front of the camera.
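A highly simplified sketch of that kind of hybrid pipeline might look like the following Python. Mitek's actual system is proprietary, so every function, threshold, and check here is a hypothetical stand-in; only the OpenCV and NumPy calls are real, and the on-device capture and cloud-side scoring are collapsed onto one machine for brevity.

    import cv2
    import numpy as np

    def classical_checks(image):
        # Classical CV heuristic: is the document image sharp enough to inspect?
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()  # variance of Laplacian as a blur metric
        return sharpness > 100.0  # hypothetical threshold

    def deep_model_score(image):
        # Stand-in for a trained tamper/forgery classifier; a real system
        # would run a deep network here. Returns a fixed placeholder score.
        return 0.9

    def verify_document(frames):
        # 1. Liveness gate: require some variation across video frames, so a
        #    static replay (photo of a photo, screen capture) is less likely to pass.
        diffs = [float(np.mean(cv2.absdiff(a, b))) for a, b in zip(frames, frames[1:])]
        if not diffs or max(diffs) < 1.0:  # hypothetical: frames suspiciously identical
            return "reject: possible spoof"
        best = frames[-1]
        # 2. Cheap classical checks gate the more expensive learned model.
        if not classical_checks(best):
            return "retry: image quality too low"
        # 3. Learned forgery score makes the final call.
        return "accept" if deep_model_score(best) > 0.5 else "flag for human review"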

I'm curious, let's just take passports in your system. If a thousand passports were submitted in the real world, you don't know how many fake ones may get through, obviously, because if you knew they were fake, they wouldn't make it through. But how much fraud would you expect to see out of a thousand: any, or 1, or 27, or what?

Just about 100.

Wow, really. So 1 in 10 is bogus?

Well, I'm sorry, my measures are off. I meant 1%, so 10 in 1,000; I'd expect to see about 1% of that traffic be fraudulent, based on the nature of the business that our current customers do. One of the interesting things about our solution is that we have a team of human experts who are able to review the same images that the algorithms are reviewing. For some of our customers, specifically in Europe, there are some specific business cases that still require a human to look at that documentation; it's just a government requirement. So we provide that service, but we also use that team of human experts to provide a quality metric on what our current algorithms are doing.

So we're able to measure, for example, how accurate our authentication algorithms are compared to a group of human experts. Now, that may not be perfect, but at least it gives you a very strong signal. And since we're processing millions of transactions a month, we're seeing very broad-scale trends in those documents, and we're able to look across them at very large numbers.
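The quality-metric idea sketches out simply: treat expert verdicts on a re-reviewed sample as ground truth, and score the automated decisions against them. The labels below are invented for illustration.

    # Human experts re-review a sample of the same images the algorithms saw.
    human_labels = ["genuine", "forged", "genuine", "genuine", "forged"]
    model_labels = ["genuine", "forged", "forged", "genuine", "forged"]

    # Fraction of documents where the algorithm agrees with the expert panel.
    agreement = sum(h == m for h, m in zip(human_labels, model_labels)) / len(human_labels)
    print(f"agreement with human experts: {agreement:.0%}")  # 80%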

And so right now, the computer flags a document and then a human confirms, "yep, that's fake"?

No, sorry, I'm getting you a little confused between the runtime processing and the back-office processing. The bulk of our processing is 100% automated, but behind the scenes we may re-process a certain percentage of that traffic through our human experts. And for any traffic that comes in initially directed at our human experts, as I mentioned, there's still some traffic of that nature, we will then use that to generate a set of images that we can test our algorithms on. So it's sort of a human-in-the-loop, if you will, for our machine learning systems, where we have identity-document experts who are able to provide us ground truth on those images.

Got you. Well, that is all fascinating. I think we're running out of time here, so I want to thank you for a wide-ranging discussion about a lot of topics. It sounds like you're doing interesting and useful work, so keep it up.

All right, thanks a lot.

Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.