In this episode, Byron Reese speaks with Norman Sadeh from Carnegie Mellon University about the nature of intelligence and how AI affects our privacy.
Norman M. Sadeh is a professor in the School of Computer Science at Carnegie Mellon University. He is director of CMU’s Mobile Commerce Laboratory and its e-Supply Chain Management Laboratory, co-founder of the School’s Ph.D. Program in Societal Computing (formerly “Computation, Organizations and Society”) and co-director of the MSIT Program in Privacy Engineering. He also co-founded and directs the MBA track in Technology Leadership launched jointly by the Tepper School of Business and the School of Computer Science in 2005. Over the past dozen years, Sadeh’s primary research focus has been in the areas of mobile and pervasive computing, cybersecurity, online privacy, user-oriented machine learning, and semantic web technologies, with a particular focus on mobile and social networking.
Byron Reese: This is Voices in AI, brought to you by GigaOm. I’m Byron Reese. Today my guest is Norman Sadeh. He is a professor at Carnegie Mellon’s School of Computer Science, he’s affiliated with CyLab, which is well known for its seminal work in AI planning and scheduling, and he is an authority on computer privacy. Welcome to the show.
Carnegie Mellon has this amazing reputation in the AI world. It's arguably second to none. There are a few university campuses that seem to really... there's Toronto and MIT, and in Carnegie Mellon's case, how did AI become such a central focus?
Norman Sadeh: Well, this is one of the birthplaces of AI, and so the people who founded our computer science department included Herbert Simon and Allen Newell, who are viewed as two of the four founders of AI. And so they contributed to the early research in that space. They helped frame many of the problems that people are still working on today, and they also helped recruit many more faculty over the years who have contributed to making Carnegie Mellon the place that many people refer to as the number one place in AI here in the US.
Not to say that there are not many other good places out there, but CMU is clearly a place where a lot of the leading research has been conducted over the years, whether you are looking at autonomous vehicles – for instance, I remember when I came here to do my PhD back in 1997, there was research going on in autonomous vehicles. Obviously the vehicles were a lot clumsier than they are today, not moving quite as fast, but there’s a very, very long history of AI research here at Carnegie Mellon. The same is true for language technology, the same is true for robotics, you name it. There are lots and lots of people here who are doing truly amazing things.
When I stop and think about [how] 99.9% of the money spent in AI is for so-called Narrow AI—trying to solve a specific problem, often using machine learning. But the thing that gets written about and shown in science fiction is ‘general intelligence,’ which is a much more problematic topic. And when I stop to think about who’s actually working on general intelligence, I don’t come up with many names. There’s OpenAI, Google, but I often hear you guys mentioned: Carnegie Mellon. Would you say there are people there thinking in a serious way about how to solve for general intelligence?
Absolutely. And so going back to our founders again, Allen Newell was one of the first people to develop what you might call a general theory of cognition, and obviously that theory has evolved quite a bit, and it didn’t include anything like neural networks. But there’s been a long history of efforts on working on general AI here at CMU.
And you’re completely right that, as an applied [science] university, we’ve also learned that just working on these long-term goals is not necessarily the easiest way to secure funding, and that it really pays to also have shorter-term objectives along the way: things you can solve, accomplishments that can help motivate more funding coming your way. And so, it is absolutely correct that many of the AI efforts that you’re going to find, and that’s also true at Carnegie Mellon, will be focused on more narrow types of problems, problems where we’re likely to be able to make a difference in the short to mid-term, rather than just focusing on these longer and loftier goals of building general AI. But we do have a lot of researchers also working on this broader vision of general AI.
And if you were a betting man and somebody said, “Do you believe that general intelligence is kind of an evolutionary [thing]... that basically the techniques we have for Narrow AI are going to get better and better and better, with bigger datasets, and we’re going to get smarter, and that it’s gradually going to become a general intelligence?”
Or are you of the opinion that general intelligence is something completely different from what we’re doing now—that what we’re doing now is just simulated intelligence, and we just kind of fake it (because it’s so narrow) at specific tasks? Do you think general AI is a completely different thing, or will we gradually get to it with the techniques we have?
So AI has become such a broad field that it's very hard to answer this question in one sentence. You have techniques that have come out under the umbrella of AI that are highly specialized and that are not terribly likely, I believe, to contribute to a general theory of AI. And then you have I think, broader techniques that are more likely to contribute to developing this higher level of functionality that you might refer to as ‘general AI.’
And so, I would certainly think that a lot of the work that has been done in deep learning and neural networks, those types of things, is likely over time (with, obviously, a number of additional developments and inventions that people will have to come up with) to get us there. I would imagine that has a much better chance than perhaps more narrow, yet equally useful, technologies that have been developed in fields like scheduling, and perhaps planning, and other areas of that type, where there have been amazing contributions, but it’s not clear how those contributions will necessarily lead to a general AI over the years. So, a mixed answer, but hopefully...
You just made passing reference to ‘AI means so many things and it’s such a broad term that it may not even be terribly useful,’ and that comes from the fact that intelligence is something that doesn’t have a consensus definition. So nobody agrees on what intelligence is. Is that meaningful? Why is it that for something so intrinsic to humans, intelligence, we don’t even agree on what it is? What does that mean to you?
Well, it’s fascinating, isn’t it? There used to be this joke, and maybe it’s still around today, that AI was whatever it is that you could not solve, and as soon as you would solve it, it was no longer viewed as being AI. So in the ‘60s, for instance, there was this program that people still often talk about called ELIZA...
Right, exactly, a simple Rogerian therapist, basically a collection of rules that was very good at sounding like a human being. Effectively what it was doing is, it was paraphrasing what you would tell it and saying, “well, why do you think that?” And it was realistic enough to convince people that they were talking to a human being, while in fact they were just talking to a computer program. And so, if you had asked people who had been fooled by the system whether they were really dealing with AI, they would have told you, “yes, this has to be AI.”
Obviously we no longer believe in that today, and we place the bar a lot higher when it comes to AI. But there is still that tendency to think that somehow intelligence cannot be reproduced, and surely if you can get some kind of computer or whatever sort of computer you might be talking about to emulate that sort of functionality and to produce that sort of functionality, then surely this cannot be intelligence, it's got to be some kind of a trick. But obviously, if you also look over the years, we've gotten computers to do all sorts of tasks that we thought perhaps were going to be beyond the reach of these computers.
And so, I think we’re making progress towards emulating many of the activities that would traditionally be viewed as being part of human intelligence. And yet, as you pointed out, I think at the beginning, there is a lot more to be done. So common sense reasoning, general intelligence, those are the more elusive tasks, just because of the diverse facility that you need to exhibit in order to truly be able to reproduce that functionality in a scalable and general manner, and that’s obviously the big challenge for research in AI over the years to come.
Are we going to get there or not? I think that eventually we will. How long is it going to take us to get there? I wouldn’t dare predict, but I think that at some point we will get there, at some point we will likely build – and we’ve already done that in some fields, we will likely build functionality that exceeds the capability of human beings. We’ve done that with facial recognition, we’ve done that with chess, we’ve done that actually in a number of different sectors. We might very well have done that – we’re not quite there, but we might very well at some point get there in the area of autonomous driving as well.
So you mentioned common sense, and it's true that every Turing test capable chatbot I come across, I ask the same question which is, “What's bigger, a nickel or the Sun?” And I've never had one that could answer it. Because nickel is ambiguous... That seems to a human to be a very simple question, and yet it turns out, it isn't. Why is that?
And I think at the Allen Institute, they’re working on common sense and trying to get AI to pass, like, 5th grade science tests. But why is that? What is it that humans can do that we haven’t figured out how to get machines to do, that enables us to have common sense and them not to?
Right. So, amazingly enough, when people started working in AI, they thought that the toughest tasks for computers to solve would be tasks such as doing math or playing a game of chess. And they thought that the easiest ones would be the sorts of things that kids, five-year-olds or seven-year-olds, are able to do. It turned out to be the opposite: the kinds of tasks that a five-year-old or a seven-year-old can do are still the tasks that are eluding computers today.
And a big part of that is common sense reasoning; that’s the state of the art today. We’re very good at building computers that are going to be ‘one-track mind’ types of computers, if you want. They’re going to be very good at solving these very specialized tasks, and as long as you keep on giving them problems of the same type, they’re going to continue to do extremely well, and actually better than human beings.
But as soon as you fall out of that sort of well-defined space, and you open up the set of contexts and the set of problems that you’re going to be presenting to computers, then you find that it’s a lot more challenging to build a program that’s always capable of landing on its feet. That’s really what we’re dealing with today.
Well, you know, people do transfer learning very well, we take the stuff that we...
With occasional mistakes too, we are not perfect.
No, but if I told you to picture two fish: one is swimming in the ocean, and one is the same fish in formaldehyde in a laboratory. It’s safe to say you don’t sit around thinking about that all day. And then I say, “Are they at the same temperature?” You would probably say no. “Do they smell the same?” No. “Are they the same weight?” Yeah. And you can answer all these questions because you have this model, I guess, of how the world works.
And why are we not able yet to instantiate that in a machine, do you think? Is it that we don’t know how, or we don’t have the computers, or we don’t have the data, or we don’t know how to build an unsupervised learner, or what?
So there are multiple answers to this question. There are people who are of the view that it's just an engineering problem, and that if in fact, you were to use the tools that we have available today, and you just use them to populate these massive knowledge bases with all the facts that are out there, you might be able to produce some of the intelligence that we are missing today in computers. There's been an effort like that called Cyc.
I don’t know if you are familiar with Doug Lenat; he’s been doing this for I don’t know how many years at this point, something like 30-plus years I’m thinking, and he’s built a massive knowledge base, actually with some impressive results. And at the same time, I would argue that it’s probably not enough. It’s more than just having all the facts, it’s also the ability to adapt and the ability to discover things that were not necessarily pre-programmed.
And that's where I think these more flexible ways of reasoning that are also more approximate in nature and that are closer to the types of technologies that we've seen developed under the umbrella of neural networks and deep learning, that's where I think there's a lot of promise also. And so, ultimately I think we're going to need to marry these two different approaches to eventually get to a point where we can start mimicking some of that common sense reasoning that we human beings tend to be pretty good at.
So another sort of thing like that is the way children can learn. You can train them with a sample size of one. I’m a poor artist, but I could draw a cat, and then I could show that to a child, and the child could identify a cat from that, and maybe an actual photograph of a cat from my pencil drawing. And then the child could maybe see one of those Manx cats that doesn’t have a tail and say, ‘look, there’s a cat without a tail,’ even though they’ve never been told there was such a thing as a cat without a tail, because it retains enough of this ‘cat-ness’ about it that it’s like a cat without a tail.
So what do you think we do there that we haven't been able to teach computers?
Well, that’s where these neural network techniques are really coming into play and are really making a big difference. So it’s this multi-layered form of processing where you’re looking for different levels of patterns, and you’re ultimately able to recognize that this is a cat—there’s just a tail that’s missing. And so, that’s where I think you have that tremendous potential that was dismissed until maybe just five years ago by many people working in AI, and all of a sudden, with the results that we started getting in domains such as computer vision, facial recognition, natural language processing, people have started saying, “Oh my goodness, look at what we can do here!”
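The intuition behind recognizing a “cat without a tail” can be sketched, very loosely, in a few lines of code. This is purely illustrative (the feature names and the similarity measure are invented for the example, not anything from an actual vision system): if a concept is represented by a set of learned features, an instance missing one feature still overlaps strongly with the concept.

```python
# Illustrative sketch only: concepts as sets of learned features.
# A Manx cat lacks the "tail" feature but still matches "cat" strongly.

def jaccard(a, b):
    """Similarity between two feature sets (intersection over union)."""
    return len(a & b) / len(a | b)

CAT_FEATURES = {"fur", "whiskers", "pointed_ears", "four_legs", "tail"}

manx = {"fur", "whiskers", "pointed_ears", "four_legs"}          # no tail
dog = {"fur", "four_legs", "tail", "floppy_ears", "snout"}

print(jaccard(CAT_FEATURES, manx))  # 0.8 -> still clearly a cat
print(jaccard(CAT_FEATURES, dog))   # ~0.43 -> not a cat
```

Real deep networks learn these features and their hierarchy from data rather than having them hand-listed, which is exactly the point Sadeh makes next.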
And I think there's a lot more of these types of results that we're likely to see over the years to come. I don't know that it's necessarily going to be enough to get us to that general AI that we were talking about earlier, but that's certainly going to get us a step closer.
We have these brains that we don’t understand how they work, we don’t really know how a thought is encoded, and we don’t even know how a nematode worm encodes its thoughts, let alone a human. And then we have minds, we have these emergent abilities like creativity and humor and all of these things, and we have consciousness in that we experience the world. And all three of those: our brains, our minds, and consciousness are not scientifically understood, and yet almost universally...
There's been tremendous progress though in this space over the past 25 years, as I'm sure you're aware, right?
Well, if you just went with consciousness—and it’s been called the last scientific question; nobody even knows how to pose it scientifically, nor what the answer would look like—if I ask you, “what was the color of your first bicycle?” you can probably answer that, but there’s not, like, a bicycle storage area in your brain that you retrieved it from.
So I mean, we fundamentally don’t know how our brains do what they do and how thoughts are encoded. And yet, universally on this show, people say, as you just did, that eventually we’ll build a general intelligence. And it seems to me that that conclusion is only true if you have a purely mechanistic, reductionist view; there’s no reason to believe we’re going to build a general intelligence other than we believe we’re machines and therefore we can build a machine that thinks.
But we don’t have any independent evidence that we can build general intelligence; we can’t even do ridiculously basic things, let alone build it. And so, it almost feels to me like an article of faith that people believe we can build it only because they believe we’re machines. Would you agree with that?
So I believe that, first of all, there are probably multiple ways of building general AI, and one of those ways would obviously be replicating the machinery that we’ve got in our brains, and there’s certainly a lot of inspiration to be drawn from that. But it may not be the only way, and there might be variations of that that might be possible too. So understanding the brain, I think, has inspired many people who’ve been working in AI, especially people like Geoff Hinton and his collaborators. You were referring to the University of Toronto earlier. And I think that there’s been tremendous progress in understanding how the brain works over the past 25-30 years.
I was just recently reading this book by Eric Kandel, for instance – I think it’s called In Search of Memory – and it’s amazing how good a job he does of recounting all the progress that has been made in this space in terms of understanding intelligence, understanding memory, and that clearly goes to the core of the question of consciousness and how you go about understanding consciousness. Clearly this is not a problem that we’ve cracked today, but there’s been so much progress in showing that, at the end of the day, you can probably look at the brain as a machine that’s exposed to just such a huge number of contexts and situations that each one of us ends up learning things.
There’s interesting work going on here at Carnegie Mellon on brain imaging and reasoning, to understand how different types of intellectual activities tend to result in different types of activity patterns in the brain and so on. There’s similar research being done elsewhere that can really help illuminate those aspects. But again, I would not necessarily limit myself to thinking that the only way to build a general AI will be by building an artificial brain. There might be other ways of building this general AI, including ways that might lead us to building functionality that’s superior to what our human brains are capable of doing.
So let’s switch gears and talk about a topic you know inside and out, which is privacy. So people who listen to this show know that I’m an optimist about the future; I believe we’re going to use technology to solve all kinds of problems. But there are a couple of chilling scenarios that come out of it, and one of them is: we’ve only ever had privacy in this world because there are so many of us, you can’t follow everybody at once, you can’t listen to every phone call and all the rest, and now you can. The cameras can read lips as well as a human, every phone call can be [tracked] – and interestingly, the very tools we build to do good with AI, like spotting tumors, are the same tools that can be used to spot people who disagree with the government or what have you.
Tell me, do you agree that AI poses a real threat to privacy, and, just talking about the US for a moment, is there a solution to that?
I think that there is a solution to that, and you've asked a number of different questions here, so I would say, at a high level, there's been an explosion in the amount of data that's available, and that's been a boon obviously to people who work in machine learning and AI. And I think we've all become a little drunk, because we've had so much of that data and we've been able to do all sorts of things that we could not even imagine being able to do 20 years ago.
But in the process, we have not paid enough attention to the responsibilities that come along with having access to all this data, and the fact that people, the ‘data subjects’ as they are referred to in regulations coming out of Europe, for instance, the people whose data has been collected, have some expectations about how the data that’s been collected about them should be used. And unless we align what we do with those expectations, we’re going to end up in a situation that’s highly undesirable.
And that’s a lot of what we’ve seen, I think, over the past few years: technologists have gained access to all this data, and it’s been obtained with very few constraints on what they can potentially do with it, to the point that they’ve said, “well, maybe I can start mining this thing and see whether or not I can find something useful.” There’s regulation – one of the basic principles behind privacy is that when you collect data, you should be collecting that data for a specific purpose, and ideally you would want to make sure that the person whose data is being collected is okay with that collection and okay with the way in which that data is going to be processed.
And if all of a sudden you take that data and start using it for an entirely different purpose, you should probably go back to that person and verify that that person, that data subject, is also okay with that other use. That’s not something that we see very often today, especially here in the US. But we’ve seen progress. You have to recognize that even companies that are in the press for issues with privacy, such as Facebook or Google, have made an effort in offering users better control over the processing of their data: making more permissions available in the context of apps, making more settings available in the context of browsers, being more transparent about the data that they’ve collected about us, and the like.
But it’s also clear that not too many people engage with this functionality. There are privacy policies out there that in principle disclose some of the things I was talking about, but nobody has the time to read these privacy policies; nobody has time to configure all these settings. And so one of the things my group and I have been advocating is actually the use of AI to help people regain control over their data, the collection and use of their data: help them deal with all these privacy policies that they, on their own, with their limited human brains, don’t have the time to read; help them configure settings; help them make better privacy decisions, so that at the end of the day, what takes place is better aligned with what they feel comfortable with.
So you’re specifically talking there about the legal requirements of private enterprises in their relationship to their customers: making those more transparent and more user-configurable, and making that easy for people to do.
And so, do you envision something like a GDPR for data that kind of rigidly defines this and criminalizes certain actions and all of that? How would you do that? Or is it completely voluntary by these companies? What, from a practical standpoint, do you suggest?
Well, first of all, there are many dimensions to this. In general, I don’t think it’s a good idea for a company to be doing something with the data of their customers that their customers would not feel comfortable with. It’s not a good business practice. If one day your customer finds out that you’ve been using their data in a way that...
But the customer isn’t really a homogeneous unit; there are a million customers, and some are fine with anything, and some are incredibly sensitive, and then there’s a big part in the middle. So how does a company even navigate that?
Well, that’s a very good point. That’s why privacy is complex: privacy is not black and white. We’ve done a lot of work on modeling people’s privacy preferences, and we’ve seen exactly what you’ve pointed out, in domain after domain after domain: it’s not ‘one size fits all.’ If it were one size fits all, it would be great, it would be easy to regulate. It would be: nobody feels comfortable with that, therefore we shouldn’t do it; everybody feels comfortable with that, therefore it’s okay to do it. End of story.
But the vast majority of people fall between these two extremes, and they’ve got different views. It varies based on, for instance, whether you are willing to share all your health data with the world. Well, there are people who feel that they are very healthy and have nothing to hide, and they don’t care. And there are people who have medical conditions, and if this information were to fall into the wrong hands, they might not be able to get insurance coverage anymore, and so they do care. These things vary quite a bit from one person to another, and this is just one example. Where you go, who your friends are, sexual orientation, all these sorts of things: you’re going to find very different views on what people are willing to disclose and what they’re not willing to disclose, and with whom, and for what purpose, and so on.
And so there is this inherent tension between privacy and usability. If you want to allow people to control what happens to their data, you need to give them all these potential settings and configurations that they’ve got to select from, and that’s clearly not very practical; we’ve gone down this path. If you look today at the mobile app user, most people have something like 50 apps on their cell phone, if not more. If these apps each require access to two or three permissions, like your location, your calendar, texting, or what have you, then before you know it, you’ve got to configure a hundred different settings just for the mobile apps on your cell phone. Nobody has the time to do that.
And so, one of the things we have shown is that using machine learning, we can actually do a pretty good job at very quickly learning people's privacy preferences, and we can then help them configure many of these settings without imposing as heavy a burden on them as you would otherwise with the settings that are available today. We've shown that we can automatically read the text of privacy policies on their behalf and actually highlight things they would want to know about.
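The kind of preference prediction Sadeh describes can be sketched roughly as follows. This is a hypothetical toy, not CMU’s actual models (the permission names and the profiles are invented): a user answers a few permission questions, the system finds previously profiled users whose answers agree, and predicts the remaining settings by majority vote.

```python
# Hypothetical sketch: predicting unanswered privacy settings from a few
# answered ones, using previously collected preference profiles.
# 1 = allow, 0 = deny. Profiles and permission names are invented.

PROFILES = [
    {"maps:location": 1, "game:location": 0, "game:contacts": 0, "social:contacts": 1},
    {"maps:location": 1, "game:location": 0, "game:contacts": 0, "social:contacts": 0},
    {"maps:location": 1, "game:location": 1, "game:contacts": 1, "social:contacts": 1},
]

def predict(answered, permission):
    """Majority vote among profiles that agree with the user's known answers."""
    matches = [p for p in PROFILES
               if all(p[k] == v for k, v in answered.items())]
    votes = [p[permission] for p in matches]
    return round(sum(votes) / len(votes)) if votes else None

# A new user who denied a game access to location is predicted to also
# deny that game access to contacts.
user = {"game:location": 0}
print(predict(user, "game:contacts"))  # 0 -> likely deny
```

The point of the real systems is the same economy: a handful of answers lets machine learning fill in many settings, instead of asking the user a hundred questions.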
And so, those are ways in which AI, which is often depicted as being the antithesis of privacy, can actually contribute to restoring some modicum of privacy and empowering people to better control their data. That’s something our group is well known for. (I never answered your question about the GDPR, but I’ve already spoken for quite a bit, so I’ll let you decide whether you want to redirect.)
Well, I would ask something slightly different... I get what you’re saying: in a country that has rule of law and transparency, like the United States, those sorts of protections can be legislated, and businesses will, to a greater or lesser extent, follow them and all the rest. Are you concerned about the application of this technology by states, by governments against their own people, in other countries that don’t actually have any notion of individual privacy being a right?
I am very concerned. I am among those people who think that privacy is a fundamental human right. It is definitely part of the UN’s Universal Declaration of Human Rights, and I’m very concerned about what you see in countries where – I mean, let’s name them: countries like China or Russia, or countries which have even fewer freedoms. I’m also very concerned about what we’re seeing here in the US.
I mean, for a long time privacy was depicted as, okay, well, this is about whether or not people will use your data to decide what ads to target to you. That’s such a mundane problem. But we’ve seen that privacy goes well beyond that. Privacy extends to manipulation of the information that we see, manipulation of opinion, manipulation of elections. And I think that people have come to realize this, and they’ve come to realize that, hey, this privacy thing is actually a lot bigger than I realized, and it does go to the fundamentals of who we are as a country, as a society, and so on.
It’s just an almost intractable problem, it seems, because privacy is a nebulous idea, and the presumption of privacy varies by circumstance. I mean, I would expect I would have more privacy in the medical field than I would at an amusement park or something. It varies from person to person, and people themselves don’t really necessarily know what they want or not: you can ask a question in such a way that they’re like, ‘oh, I guess I’m okay with that,’ and you could ask the same question in a different way and they’d be like, ‘no, that sounds creepy.’ But it’s the same thing, and so it almost feels like, as noble as what you’re trying to do is, it’s almost an intractable problem. So give me some reason to think I’m probably wrong about that, that it is a solvable problem.
Well, I would argue you could say the same thing about security; it’s not because something is challenging and complex that you want to give up. And can self-regulation get us there? We’ve tried that for the past 20 years or so, if not longer, and I think we’ve seen where we are today. So I do feel that there’s a need for regulation in this space.
I think that these are very complex problems and you were pointing, for instance, to the fact that you can manipulate people. We've done research on that, and we've shown that indeed people's privacy preferences are highly malleable. Depending on how you ask people a question, you're going to get a very different answer. So does this mean that you should not let people control these things and not let them make decisions? No, it means you've got to impose requirements in terms of how you go about asking these questions. You can phrase questions about whether or not people are willing to grant you access to some data in a way that everybody is going to say ‘yes.’
So for a while, for instance – we were talking about mobile apps earlier – early versions of Android were basically asking people privacy questions at the time they were downloading apps onto their cell phone. It’s well known that we all have cognitive and behavioral biases that lead us to favor short-term rewards over long-term, less obvious consequences. And so, if you download your favorite game app, and at the time you’re downloading this app someone asks, “by the way, are you willing to grant Angry Birds access to your location, and this and that,” the vast majority of people are going to say ‘yes.’ On the other hand, if you ask that question differently and at a different point in time, or you give people a chance to revisit the decision they’ve made, perhaps by telling them, “hey, by the way, your location has been collected by your apps 7398 times over the past week, and if you’d like to know more, click here to review the permissions that you’ve granted different apps,” people will respond very differently.
And so those are actually not random examples; these are technologies that we’ve piloted with great success, with people converging towards settings that were much closer to their comfort level.
There’s a box that has a name I can’t remember, but there was a law passed some number of years ago that said every credit card offer has to have this box on the back, and the box has to say what the annual fee is, what the APR is – a finite list of things. And you see it in every credit card offer you get; you can just flip the thing over and see those things.
Is there an equivalent for privacy that we could do that there's like these eight things? Do we retain personally identifiable information (PII) about you, do we share that – I mean, is there a list like that?
There are some things like that you could do. The difference I believe between credit cards and computer technologies in general is that there are just so many more scenarios, so many different ways in which your data can be collected and used that it's very hard to reduce things down to eight boxes. Eight boxes would already be better than nothing, no question.
As for the boxes, we've actually experimented with these sorts of things. We've conducted experiments where we've shown people entire privacy policies, we've shown them summaries of privacy policies, and we've shown them information extracted from privacy policies in different boxes, playing with the number of boxes that we show to people, and there is a sweet spot. If you show people too much information, they stop reading. If you show them too little, they get surprised by things you didn't have enough space to show them.
The kinds of things that people care about also vary, and they vary based on at least two things: what people already know and expect, and who the person is. So if you tell me that there's a camera watching me when I go through a TSA line, guess what? I'm not terribly surprised. I think I've known that for a while, and so that may not necessarily be something you need to tell me. On the other hand, if you tell me that there's a camera looking at me when I go shopping at my supermarket, I would probably want to know about that the first time, and then maybe after a while I might decide that, hey, every supermarket is doing that these days and this doesn't seem to be terribly traumatic or something I need to worry about too much, but different people might think it's terrible.
So there's clearly a need to personalize how you go about doing this. There's no way you can get to that sweet spot with a one-size-fits-all format, and that's where, again, AI and machine learning can come in and help, because I can build models of what you know, what you care about, and how often you would want to be notified about different types of practices: when you walk into a room that has facial recognition, when you pass by sensors that collect your location, all these sorts of things. Different people have different views about that. And so knowing how you feel about these things, and how and how often you want to be notified, can help me build technology that does a better job of keeping you informed without making you feel overwhelmed, and without making you feel like you should turn off this functionality because it keeps beeping every two minutes.
Do you think about it this way: that like I could ask you 20 questions that aren't technical, like, is it okay for advertisers to use what you've purchased in the past to recommend products to you? And I could do 20 questions like that that anybody could answer. They go, ‘yeah, I guess, it's okay.’ And from that you could infer with AI a range of things about what I find acceptable and not, and then map those to every site I visit. Is that how you think about it?
So we've built that and we call these ‘privacy assistants.’ We have one that's in the Google Play Store for mobile app permissions. The unfortunate thing is that you need to have a rooted Android phone for it to work. But it does work very well. And so people who've used it have all reported that they had a much better sense of control, and they had converged on settings that were much better aligned with their preferences, so yes. And we actually don't ask 20 questions, we ask between three and five questions, and we are able to predict about 80% of your settings.
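Sadeh doesn't describe the model behind the privacy assistant here, so the following is a rough illustration only: one simple way to predict most of a user's permission settings from three to five answers is to match the user to the nearest of a handful of preference profiles (in a real system these would be learned from data, e.g. by clustering users; here they are hard-coded). Every profile name, question vector, and setting below is hypothetical.

```python
# Minimal sketch of a "privacy assistant": map a short questionnaire
# (1 = "allow", 0 = "deny") to a fuller set of recommended
# app-permission settings via nearest-profile matching.

# Each (hypothetical) profile pairs a questionnaire answer vector with
# the permission settings typical of users who answer that way.
PROFILES = {
    "privacy-conservative": {
        "answers": [0, 0, 1],
        "settings": {"location": "deny", "contacts": "deny",
                     "microphone": "deny", "camera": "deny"},
    },
    "balanced": {
        "answers": [1, 0, 1],
        "settings": {"location": "allow", "contacts": "deny",
                     "microphone": "deny", "camera": "allow"},
    },
    "permissive": {
        "answers": [1, 1, 1],
        "settings": {"location": "allow", "contacts": "allow",
                     "microphone": "allow", "camera": "allow"},
    },
}

def predict_settings(answers):
    """Return the settings of the profile whose questionnaire answers
    are closest (by Hamming distance) to the user's answers."""
    def distance(profile):
        return sum(a != b for a, b in zip(answers, profile["answers"]))
    best = min(PROFILES.values(), key=distance)
    return best["settings"]

# A user answering [1, 0, 0] sits closest to the "balanced" profile,
# so its settings are predicted for all four permissions.
print(predict_settings([1, 0, 0]))
```

The appeal of this kind of approach is exactly what Sadeh describes: a few answers place you in a profile, and the profile fills in the many settings you were never asked about.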
I've gone over [time], because it's such a fascinating topic. I would like to close with one last question. We were talking about state actors, and what I find disturbing is that some of them are productizing their ability to spy on their own citizens and exporting it, selling it to other oppressive regimes that couldn't even have developed it on their own.
Is that something that you don't worry about in the long run because you're like, in the end, free societies will win? Or is that something where this technology will be used by people in power to retain their power in certain parts of the world?
I'm very concerned about that, and I think it could happen in all parts of the world, potentially, if you're a bit paranoid. Yes, I'm very concerned about that. And so, how do you solve this? It's a very challenging problem, because these technologies are available, and there are some good uses for them. I mean, terrorism is a real problem, and national security matters. When it comes to privacy, one area where the vast majority of people feel the same way is security.
But when it comes to security, a lot of people are willing to disclose quite a bit of information to ensure their security; not necessarily everything, but quite a bit. So these tools, these technologies, are legitimate technologies, but as you pointed out, the challenge is: how do you prevent governments that are not legitimate governments, totalitarian regimes, from using these very same technologies for entirely different purposes? And so, obviously, I don't know how you do that. I mean, I don't know how you get a regime not to turn totalitarian; that's clearly not a technical question.
The thing is, they would say they only use them for security. They are just not going to believe that...
Anyone who opposes the regime is a threat to security. But anyway, that's okay, we don't have to solve everything in this one interview. I want to thank you so much for your time and all the work you're doing. It sounds like it's much needed, like we've finally done this enough that we're starting to know, 'these are the pitfalls, and here's the way to solve them,' and it sounds like what you're doing is at the forefront of that. So I thank you for your work.
Well, Byron, thank you for your time. I really enjoyed talking with you.