Episode 102 – A Conversation with Steve Durbin

Byron Reese discusses security and its impact on AI as a whole with Managing Director Steve Durbin of the Information Security Forum.


Guest

Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. He is a frequent speaker and commentator on technology and security issues.

Steve has considerable experience working in the technology and telecoms markets and was previously senior vice president at Gartner. He has served as an executive on the boards of public companies in the UK and Asia in both the technology consultancy services and software applications development sectors.

Transcript

Byron Reese: This is Voices in AI brought to you by GigaOm, and I'm Byron Reese. Today our guest is Steve Durbin. He is the managing director of the ISF, the Information Security Forum. His main areas of focus include strategy, information technology, cybersecurity and the emerging security threat landscape across the corporate and personal environment. He has led the organization as its managing director for almost a decade. Welcome to the show, Steve.

Steve Durbin: Nice to be here, Byron. Thanks for having me.

I always like to get our bearings real quickly. I normally ask what artificial intelligence is, but I'm going to give you a different kind of “getting our bearings.” It seems that through the history of code makers and code breakers it's been unclear who has the upper hand. And maybe it goes back and forth. Right now, when you look at the security landscape of the technologies out there, is it easier to be white hat or black hat?

I think that I'd have to say it's easier to be black hat. Why do I say that? I think that if we look at all the technology that's available, then we have to bear in mind that for every white hat there are probably at least two black hats that are making use of that very same technology, and they don't have some of the challenges that the white hats have. So, they're not as restricted in things like corporate governance, in things like budgets, in things like where they might practice and ply their trade. That's why I'm saying that for the time being anyway that the black hats probably have the upper hand.

That's a pretty provocative statement to say there are twice as many people trying to break security as trying to enforce it. I assume that's a gut feeling, but break that open a little bit… Where are all these bad guys?

Yeah, I think the major shift, Byron, has come about with crime as a service. So, if you roll it back to the good old bad days of probably only about three to five years ago, then you needed to have a certain amount of skill to be a black hat, to be a bad guy. Crime as a service then became very much more readily available, particularly on the dark web. And now you don't need to have some of that skill. You can, for instance, purchase denial of service attacks. They do come with 24-hour support. They do come with a hotline, provided you pay your bill… then you can pretty much try these things out.

And so, one of the concerns I think for everybody is that it isn't just the professional hacker, the professional black hat. We've also got now some amateurs that are plying their trade, and they're really starting to make use of some of these things. So that's why I'm saying that the number of the bad guys outweighs the good.

The other reason, of course, is that we know that there is a skill shortage – in terms of the good guys trying to find the right level of skill set, the right level of capability, and attracting them to your organization. That is proving to be a very difficult challenge to overcome.

So, geographically… I'm really intrigued by this. There are these companies where I could just order up a denial of service attack… and from the way you described it, they have better tech support than some of the companies I call to try to get support. Are they concentrated geographically or dispersed throughout the world?

Well, one of the challenges for law enforcement of course is: How do you find where these people are? And the Internet has provided a means of bouncing traffic across multiple servers, across multiple geographies, which makes it exceptionally difficult for law enforcement to catch these people. And therein lies one of the challenges. Even if you can trace back a crime that perhaps is being committed in, let's say, Denmark… and you know that the perpetrator is sitting in Ukraine, being able to extradite that individual and actually nail them down is very, very difficult. And that's just one of the challenges.

This really goes back to the point I was making about technology: Whilst advancing, whilst providing a lot of opportunities for the good guys, it is also being used to the same extent by the bad guys.

When you read about these breaches where 50 million people's credentials were stolen, and 100 million credit cards, and 60 million Social Security numbers and these staggering numbers… why isn't the world awash in more identity theft than it seems it is? We know credit cards still work. Right? My credit card fees when I process a credit card are 2.25 percent. I have credit cards that give me 2 percent cash back. Somehow my credit card company is living off a quarter of a point, which tells me there either is not enough fraud or they're not bearing the cost of it. All these numbers are small, but why don't we have this apocalypse? Why doesn't that crash the financial system… at least the retail financial system?

I think that's a really good question, and I think the answer to that is that we should never underestimate the amount of investment, the amount of skill the financial services organizations in particular have deployed in terms of monitoring what is happening – in terms of credit card transactions, being able to use systems to intelligently determine whether it's you, whether it's me, or whether it's a third party that is using the credit card, and to stop some of these things before they incur significant losses.

I think one of the other things that is going on in this space is encryption. That is still making life difficult for people who've stolen the information… they can't decrypt it unless they happen to have obtained the keys. In most cases that isn't happening.

So, there are some checks and balances in there that mean that even though we're seeing a lot of losses of really valuable information, it isn't being used at a fairly exponential rate to bankrupt organizations. And so I think that we have to give a little bit of credit to the financial services organizations in particular – because they've been the targets for quite some while now, because let's face it that's where the money is – [and to the way in which] they have been implementing systems in terms of fraud detection in particular and client notification and so on… and indeed collaboration amongst themselves to share details of the attacks and so on… We need to give them a little bit more credit in that space, I think.

So if I'm a black hat person, just a lone person, but I'm very talented and I live in a country where it would be hard to get at me, what is the lowest thing… the easiest thing to do to try to make money? Is it phishing scams? Is it trying to just get a couple of people's information and use it? Where do you see the most activity occurring right now?

I think the sort of individual that you're talking about is… I would describe as the “start out” black hat. What they're really trying to do is to see whether they can run a number of phishing scams. Those are relatively easy to do. They're relatively cost-effective in terms of the amount of money required to purchase some of those personal details. It's a small number of dollars. We're not talking massive amounts at all. If you send out enough of these, you will get some responses that will more than adequately cover your costs. So that for me is the “start out” guy.

The bigger area of concern – and this is really highlighted by people like Interpol, for instance, in their latest threat report, which recently came out – is around the way in which ransomware is becoming much more of a targeting tool. So, looking at how you can really go after specific individuals or specific organizations with sophisticated ransomware, which certainly law enforcement is concerned about… To do that, of course, you need to be very much more sophisticated from a black hat point of view. We're not talking about the rank amateur who is just starting out here. We are talking about people who've been doing it for quite some while. And if we go on from there, then we move into the nation state environment, where you have some very highly sophisticated cyber criminals, who are looking to do everything from steal research and development to potentially attack critical infrastructure.

I certainly want to come back there, because that's a pretty exciting thing. But before we get there, let's talk about ransomware for a minute. I remember it wasn't long ago where a group of hospitals were hit, under the theory that they're going to have to pay. Right? Like quickly, because lives were threatened. What percentage of things like that does the general public hear about? Or is the incentive for people who must pay, generally speaking, to keep it very quiet, pay and not mention it?

I think there is always going to be an incentive – particularly if the information is critical – for you to be tempted to pay. Particularly if the amount that's being asked for isn't debilitating from an organizational standpoint. There are very few organizations out there that can really afford the disruption of a full-blown ransomware attack and not respond to it in some way shape or form. I mean there are a number that were hit by the NotPetya, for instance, who did just replace their entire infrastructure. If you think about that, for a large multinational… the amount of time, resources, money that's required to do all of that… very few organizations can do that. And you do have, particularly if we look at say the healthcare space, hospitals, organizations for which the primary business is patient care. It's about looking after you and me when we need it the most. And technology is a means of facilitating that. And so, I think that there is always going to be a temptation if the price is right to pay and move on.

But I come down on the same side as law enforcement on that one. That probably isn't the way to go… even though tempting. Because what you're doing is, you're sending a signal that says we do pay, we do reward blackmail of this nature. [It’s] very hard though if all your systems are on the floor and you've got a hospital full of people and you must revert to pen and paper. How long are you going to be able to do that? How long is that going to be sustainable for your business? So, I don't think it's an easy thing to answer or to recommend, but obviously you have to be aware that if you do pay once there is a good chance that the people may come back and ask for more at a later point in time. And that's just something you need to be aware of.

You mentioned encryption was still strong, that encrypted messages are still hard to break. And I guess that works in reverse. When ransomware essentially encrypts somebody’s systems, that's really hard… You say: “We don't pay.” But you don't say, I'm assuming: “Don't pay, we have a way for you to get out from under it without paying.” Is that correct?

I think in some cases there may be a way out from under it without paying, but in the majority that is not the way to go. So, you are having to effectively write off your current dataset, and this really leads to the importance of planning for that day. We talk a lot at the ISF about planning for cyber resilience. It isn't just about hoping for the best. It's about assuming that one day something is going to happen. You're going to be breached. You're going to be attacked with a ransomware attack, whatever it might be. You're going to have to rely on the backup plan. You must make sure it is comprehensive. You must make sure you have rehearsed it. And you hope that day will never come.

But the loss of data – if you are regularly backing up, keeping it separate and following a good approach to cyber security hygiene – means that you can get your business back up and running, albeit you will have lost a significant amount of data. But you won't have lost everything, so you will be able to recover to a certain extent. The importance of making sure that you've got the right processes, policies and procedures in place really can't be overstated.

In the U.S., are companies required to disclose when they're breached, or do they do that as good public relations… or do they do it at all?

There has been a bit of a change in the U.S. If I was talking to you probably five years ago, maybe a little bit more, then I was hearing certainly from legal firms that were advising clients who had been breached. The clients were not taking the advice. They weren't notifying, even though they knew they had to in certain states.

I think that the world has moved on. We now have a much more stringent set of regulations in certain areas. You mentioned healthcare earlier. That is certainly there. If we look at personal identifiable information as it relates to European citizens for instance… the General Data Protection Regulation, that has a global reach. We look at some of the more recent laws that have been passed in California, for instance. So, I think the world is, as I say, moving on.

I do think we will get to a place rightly or wrongly where we will have much tighter regulation, where we will be required as organizations to report breaches within reasonable periods of time. In the European Union that happens to be 72 hours. Now we can argue whether that's a good number or a bad number. But it's clear what you have to do. And I think that is where we're headed generally in terms of breach reporting.

Why is that? Well I think that there is a gradual growing concern amongst the public, amongst individuals, amongst other organizations in the supply chains for instance, that we need to have this information. We need to know that our data has been compromised or lost so that we can do something about it. At a personal level, simple things like changing passwords, and the sooner you can do that the better. And that's just one of the drivers that are out there.

I chatted with a fella… and this has to be ten years ago, so perhaps it's outdated. We were talking about DDoS (Distributed Denial of Service) attacks, and his company mitigated against them. When you had one, they got you out from under it. I asked, “How do you do that?” And he said, “Well, unfortunately, we break the law. We wish we didn't. But we go out and attack all the machines that are attacking the server in question.” He said the law just isn't up-to-date enough for that to be something that legally we're allowed to do. But, of course, we're deflecting an attack. So, it's kind of all we can do. (A) is that still how it's done? and (B) would that still be illegal?

“Hack back.” Yeah, you must have a pretty high degree of sophistication to be able to do that. That is not something most organizations can do. We are seeing a lot of discussion and saber rattling, if you like, in that particular space from certain governments here in the United States. That is certainly the case in the United Kingdom, so it's similar as well. But that's at the government level. At the organizational level, you will have to rely on a third party that has that capability. And I think that wherever you happen to be in the world, there are different views that are taken as to the legality of that, and certainly whether the action is viewed as being defensive or whether it is something else. It's a very very complex area. It's not one that you should be going into, I would say, unless you really understand what you're up to. And, yes, you would need to have some sophisticated expertise on your side in order to do that effectively and make sure that you weren't making the matter worse.

I asked him, “What do you charge for that?” He said $100,000 and we’ll fix any DDoS. Again, this was ten years ago. Is that still the order of what it costs you to defend against that sort of thing?

It's an impossible question to answer, Byron. You need to look at the size, the scale, the scope and… How long is a piece of string?

Right. Right. Fair enough. Somewhere between a quart and a mile. So, let me talk about state actors. I was at a conference… When was Stuxnet, do you remember?

Oh gosh, that's going back awhile.

Ten years ago?

Yes. Something like that.

So, I was at a conference right after that happened. There was a person involved who was talking about it, and they asked, “Who do you think wrote Stuxnet?” [Someone answered,] “Well I think it's pretty obvious, the United States did.” The speaker asked, “Why do you think that?” The answer was, “Because it took a digital superpower to make that piece of technology, and there is at present only one digital superpower in the world.” I assume now that 10 years have passed, that is no longer the case. Is that true? Could any number of state actors now do something like a Stuxnet that attacked physical infrastructure and vibrated things at their resonant frequency… and was able to get it delivered and do all the rest?

There are certainly nation states out there that are probably more advanced than others. But I think it would be fair to say that if we look back 10 years and compare that with where we are today, there are more state actors that have a similar sort of capability. The Stuxnet thing was highly collaborative. It was planned meticulously over a period of time. There was a very interesting book that you may have read by David Sanger that goes into this in quite some detail. It's called “The Perfect Weapon: [War, Sabotage and Fear in the Cyber Age]” and looks at some of these cyberattacks which you are touching on now. It gives quite an interesting perspective on some of these things.

But I think it certainly would be fair to say a number of nation states have improved their capabilities in that particular space. We've seen some of those attacks. We've seen North Korea with the Sony attack, for instance. We've seen some attacks attributed to the Chinese in terms of what they've been up to very recently with putting chips into devices and so on. And, of course, we shouldn't forget that we also have the United States and the U.K. and some of the other European countries that are also active in some of these spaces too.

I usually avoid “highly in the moment” news things on this show because it has a long life. But what you just referenced, the chips that allegedly the Chinese were placing in hardware… that's been vehemently denied by Apple and others. How do you think that's going to shake out?

I think what it does is it raises a really interesting challenge... that I've been watching for a number of years now. That is, how do you ensure integrity of security across your supply chain? Because, if we take Apple as an example… the famous slogan “Designed in California”… their devices are assembled in a variety of places, from China to Malaysia and various other places as well. Now, the Apple supply chain… Tim Cook spent a significant amount of time putting all of that together, trying to ensure that it was optimized and so on. But very often, when we look at an extended supply chain, the greater the number of different partners you're involving, the more difficult it becomes to ensure complete integrity of security across that space… for a couple of reasons.

First, security is rarely at the top of the list in terms of the things that you look at. Traditionally, supply chains have been put together on the basis of the most appropriate cost of production, for instance, or capability in terms of production of chips and so on. Security doesn't tend to figure highly in that. Now things are changing, but they're changing very slowly. Second, ensuring the integrity of security across the overall supply chain is exceptionally difficult. And this is one example of that. How do we ensure, in the world we live in today, where we have a number of different partners that we collaborate with in everything that we produce, in everything that we manufacture... how do we ensure that we have the right levels of security?

Frankly, it's an impossible question to answer. Each and every organization will need to take a view based on the risk profile they've decided to adopt and their own internal capability to actually go out and check on their security. Let's face it, that is the only way you can do it effectively. You cannot ask third parties to provide you with a sensible audit of their security posture because they will try to tell you what it is you want to hear. The only way that you can do it is by physically going and checking yourself. If you think about that, and you're a large manufacturer of any product, then you know it's impossible. So you will focus on those that are most critical to you, and that comes down to assessing your risk profile. That's a job for the board of directors to determine and obviously for the people within your organization to go away and implement.

If you were building… you're British correct?

I am. Yes.

And if you were building a weapons system on behalf of the British government to be deployed… in the United Kingdom. Is it even possible to secure a far-flung supply chain? Or would you just build it out of components that were manufactured there and assembled within sight and oversight?

I think when we start moving into that realm, then there are preferred suppliers that the nation states use, that have been used for quite some time and that are trusted. That doesn't mean to say that they can't be breached. That doesn't mean to say they won't be attacked in some way, shape or form. But there are, you know, a number of them that are on the preferred supplier list, that have been vetted, that are trusted. And I think that we're seeing some of that in terms of both conventional weapons and cyber weapons that are being produced. People are collaborating with trusted partners. I don't think that's any different from the way it's always been.

With regard to that one incident we were just discussing that’s in the news right now: some people have maintained that a chip that size – because it was minuscule – couldn't do all the things that were claimed. Is that the case, or could you actually hide something that really was a back door? I mean, it's not microscopic to be sure, but is it so tiny as to almost certainly evade notice?

I think there are two things to that: The first is that when we look at the size of some of these things, size actually doesn't matter. We have been able to compress so much. Whether or not the information could be compressed at the size we're talking about, who knows. The second thing, of course, is that you don't necessarily need to store some of these things on the device. If you can connect it in some way shape or form to the Internet, then the world really is your oyster in terms of the way that you can then transfer information.

And this takes us into that whole area of the Internet of Things… I was talking to a very large bank not that long ago. They were telling me they had had a particular problem with their printers. They had outsourced… or they had allowed department heads to go out and purchase printers because the I.T. department didn't want to get involved in that. It was only when they were running a pen test (penetration test) that they saw a lot of data was being sent outside of the bank without authorization. When they drilled down into it in much more detail, they found that it was coming from the printers. Why? Because the printers are fitted with the ability to communicate with the Internet, with a default password. The users hadn't realized that they could switch that feature off, or indeed that they could change the passwords. So the printers were just sending that data out of the bank.

There was no suggestion that this was an attack. There was no suggestion that it was deliberate exfiltration by a third party. It was a pure accident. But the point is that there are so many different devices being created today that we almost take for granted, that we don't even pause to think… Should I be changing the password on my home router, for example, or, in the case of the printer, to keep my information within my four walls?

And therein lies the problem. We're surrounded by devices that are constantly looking for connections, that are constantly trying to talk to other devices. And if you can have something that is relatively small, that is able to talk to other devices, then you can exfiltrate quite an amount of information. Unless somebody deliberately looks for that, you're not going to pick it up.

You're talking about devices… there are, by last count, 18 billion devices attached to the Internet. And I forget who said that by 2025 it will be 80 billion. For the most part, these devices are not upgradable. They don't have software that can be patched. I just have this vague sense that that's an enormous security risk… to have these pieces of hardware that were rushed to market, are on the Internet and cannot be fixed… so that if a vulnerability is ever detected, it's like, “Huh, that's interesting.” You know, somebody could turn on every oven in Detroit with the push of a button. Is that a legitimate – not that example per se – but is 80 billion devices connected to the Internet that may or may not be able to be upgraded a looming security threat?

Yes, it is. And I think it is one that certainly a number of people, myself included, have been highlighting for some years now. The issue I personally was involved in was around smart meters, which at the time they were first introduced didn't have security built into them – the response being that that would make the smart meter prohibitively more expensive. So there's always going to be this balance. Until we get to the point where you or I walk into our local mobile phone shop and say, “I don't care what the phone looks like. I don't care what its capability is. I want the most secure device you've got. By the way, I don't care how much it costs,” security is never going to be at the top of the pile.

Nobody really wants to understand something like that. They want to understand what camera it's got, what size data storage it's got, what it looks like, what color it might be… those are the sorts of things that attract people. So, the fact that we cannot retrofit security easily to a number of these devices does raise… potentially, a security issue.

And I think there is always the danger, when we talk about threats and security, of people coming away from it thinking, “We're all doomed and the end of the world is nigh.” That isn't necessarily the case. But we do have to have, I think, a very much more responsible approach being adopted by manufacturers… and it’s encouraging. I was personally encouraged to see the latest legislation, coming out of California again, that talks about having to embed decent levels of security into our IoT-type [Internet of Things] devices going forward. Now that's not going to help for what's already out there, of course. So, it will take years to replace some of those devices that are already in the market... if we can remember where they are and if we can find them. So, there is something of a looming threat that is out there. I think, from the corporation standpoint, the important thing is to understand where you've got these IoT devices. If you can understand that, then you can look at what data is being passed through them and you can make a risk assessment as to whether or not you need to do something about it.

At the personal level I think again we all need to think a little about whether we absolutely need our fridges to be communicating with whoever it might be, to have that extra pint of milk delivered. What would be the impact of that? For a lot of people it’s not really going to matter. We're not really going to be that concerned about it. For others, particularly if we're home workers and we have sensitive data that we take home, we need to look into it. Maybe we do need to create secondary networks that are insulated from some of the home IoT devices that we have across our homes.

So, none of that is really happening though… you said people should be more concerned, manufacturers should be more responsible, it would have to be top of mind for people, government should get involved… but right now it isn't. Right now, at least, we're on a path where we just continue to plug things in. Is it getting better in any way, shape or form? Or is it: no, every new device you plug in is just another vulnerability?

I think that people are becoming more aware of it. I mean I was talking to somebody not so long ago who was citing an example of millennials in California. He said if you look at the way millennials are responding to technology… when they have a phone, one of the first things they do is cover up the camera… that kind of thing.

So, I think there is a general raising of the level of awareness. Most of the devices we use are well-designed. They do make our lives easier. But our understanding of security and its implications has risen slightly. For me though, that's something of a generational thing. It must start in the schools. We have to be educating people much more about what they can and cannot do. We are going to have something of a lag in the market in raising awareness around security.

I think as well, over a period of time, that will compel manufacturers of devices and so on to make sure that they are slightly more secure. But for the time being we're just all going to have to accept that we are where we are, and if we're aware enough we will take control over some of that and then make some appropriate changes to the way in which we're using devices. Those that aren't, of course, will be running the gauntlet of a potential attack. Fortunately for most of us – despite the sorts of figures we talk about in terms of the amount of data this breach lost and so on – that still doesn't affect the majority of people.

So, the San Bernardino situation… when the [terrorist’s iPhone was recovered after the shooting]… The FBI and the United States went to Apple and said, “Unlock it. We want to see who this person was talking to…” and Apple said, “No.” And it's gone to court, and… I'm going to ask you two questions about that. That's still a debate society has… It's nothing we've figured out norms around yet. Where do you think that's going to shake out in the United States? How do you think that will shake out in the rest of the world? Where do you think it will eventually land 10 or 20 years from now?

I think the Apple approach – and other manufacturers would tend in certain cases to take a somewhat similar approach – to privacy, to encryption, is that perhaps we don't want to be able to crack that, perhaps we don't want to be able to do it, because that makes life a little easier for us. There is a huge debate around privacy that we're having around the world today, and this is just one example of that. I have no idea where it is going to end.

What I can say is that we're all going to have to get used to having a lot less privacy than we have had in the past, for a variety of different reasons… one of which is that we have a propensity to share intimate details on social media on the assumption that we are only sharing it with our friends or with people that we trust. That patently isn't the case. We need to adapt to that or we simply need to say, “Well, actually, I don't care.” So, we have this propensity to share.

We have a propensity to produce masses of data on an ongoing minute-by-minute basis across a number of different devices at speeds that we would never have imagined even a small number of years ago. That means the information being shared is much more readily and quickly available. And so, the old ability, for instance, to pull back something you might have put up or sent in an email… you could recall it, that kind of thing… those days are fast gone as well, because now when something is sent, it's sent, and it is being picked up and – if it's salacious enough – it will be tweeted around the world.

So, the whole issue of privacy is one that is undergoing a revolution. You have some regulations that are being brought in by certain governments around responsibilities of organizations and so on, but that isn't really affecting the individual. And so, I think that we're seeing a societal change in terms of how we view privacy and what we value. That is different across different generations, so it's going to be very interesting to see how that all pans out over the coming years. Because we've got so many different things that are changing. We have the legal requirements you've just mentioned. We have social norms. We have personal views. We have a real mixture of all of that.

I have no idea where it's going to end up. All I can say is that we're not going to have as much personally private information that we can withhold from sharing as we used to have in the past. And my own view is we're probably just going to have to get used to that.

But that's kind of a choice, right? You could encrypt your emails with public key encryption. You could use a phone that has… It seems like it's just not as important to people. Is that your read on it?

I think so. There's this desire on social media in particular… the cult of celebrity. People share information they wouldn't normally have considered appropriate five to 10 years ago. Society has changed very significantly. You're right. You could encrypt everything. You could go off grid. Indeed, some people have deliberately done that… trying to opt out. They don't use social media, don't share that kind of information. But they are in the minority.

We teach people – if you think about our university system for instance, particularly here in the United States – if you happen to be a student going to university, you have access to university networks across the United States. You can log on. You can share information. We teach people to collaborate, to share different ideas. And there is a bleed from things that we might be sharing of an academic nature into things of a personal nature, because you always want to know who the individual is, you want to trust them and so on.

There is something that happens to us when we go onto social media that makes us think we're only talking one-to-one, or only to a group of people that we know, that we actually trust. And that obviously isn't the case. Once you put stuff out there, it can be retweeted and sent on. And so that's a different psychological state that we enter when we go into a social media environment, and we shouldn't forget that kind of thing happens. It’s all very well to say: “Well, you shouldn't be doing that. You shouldn't be posting that kind of thing.” The reality is that when you go in, you get sucked into it. You see the latest tweet from whoever it might be… your favorite celebrity… and that's attractive. You, perhaps in your own small way, try to emulate that… because it's good practice. Those sorts of things begin to affect our view of privacy and what it is we're prepared to share.

It's interesting because I had a guest on my show who made a prediction I'd never heard anywhere before. He said that you never had a presumption of privacy for the longest time. You lived in a community of two or three hundred people and everybody knew everybody's business – like the Victorian era. And he believes we're going to return to that same practice that the Victorians had, which is you know all the bad stuff about people because you saw it on social media, but you're polite enough to never mention it to them in person. So, it's like you always knew it, but social norms were that, when you saw them, you didn't mock them or mention it or anything. You're polite enough just to pretend like it didn't happen. I know I just put you on the spot, but don't you think that's at least a possibility?

I think it's a rather quaint and rather nice solution to the problem that we've got. I think the challenge with it is that today we live in a global village and you will have people that you connect with from the United States, from Australia, from India, from Europe. So, your village is indeed as I said completely global. And so, to imagine that we're going to get to that state where the Victorians were able to do that in a relatively small environment, and indeed in very small social circles, I think as I say is rather nice. I'm not so sure that we'll get there. I don't know where we will get. And so therefore that view is as valid as anybody else's. Frankly I don't have a crystal ball on that one. But it is relatively appealing, isn't it?

Let's talk about [privacy]. You used to have some amount of privacy… and let's talk about privacy from your own government for a moment. You used to have some amount of privacy because there were just so many people. Nobody could follow everybody. Nobody could listen to every phone conversation. But, of course, AI can take every phone conversation, transcribe it and data mine all of that. AI is as good at reading lips now as a person. Every camera, even without audio, can record what everybody is saying – at least those facing it. The device that transmits your location from your phone… every word you type in an email… it all can be mined with the same tools we're using for very noble purposes, like finding cures for diseases, etc. So, is there any… and let's talk first about a country like the U.K. or the United States, where you have more transparency, the rule of law and the rest… In that kind of a society, should you have a presumption that nothing is listening to what you're saying? By law, should you have that presumption? Or should we just take for granted that everything is going into a database somewhere to be studied for what is ostensibly a public good or a public purpose?

I think that there are moral, ethical and a whole range of different issues all wrapped up in that. Artificial intelligence is still in its infancy. It does have the potential to provide a whole range of potential benefits. When people ask how I define artificial intelligence, for me it's about outsourcing. You are outsourcing to machines – to computers – the ability to draw conclusions from vast amounts of data or to automate at scale. What we're able to do in most instances today of course is actually machine learning, where we're feeding more and more information into the machines, where they're working through algorithms and so on. That's been used very effectively in terms of assessing my buying patterns, what it is that I'm likely to do in my response to advertising and so on.

As a professional marketer, I get very excited about that because it allows me to understand my customer far better. And then you get into, “Well, hang on a minute, are we now going to monitor – because we do have the capability – are we now going to monitor everything that is done?” And traditionally, there will then be the argument of national security put forward… you know, terrorism and so on. If you look at certain nations, however, then we know we shouldn't underestimate the cost associated with putting some of these systems in place and indeed the ability to act on that kind of information within very short order.

This is where for me AI does potentially have some very valuable uses going forward. But it’s not there yet. And so, we must continue, I think, to develop this along that track, if for no other reason than national security… because we do live in an increasingly dangerous world. And we do have to rely on our governments to protect us as citizens. And I think that does mean that we must give up some of that freedom of operation.

But there is a trust element in that, and I don't think in certain countries we have recovered still from the perception of an abuse of trust – in the way in which governments have collected information, the way that perhaps it's been used. And so, I think we need to move through the environment with artificial intelligence where there is a higher degree of transparency, a higher degree of explanation, to try to rebuild some of the trust that should exist between government and citizen to allow some of these things to happen. I think that AI does have significant benefits to deliver in that particular area. But I think we need to get it right, and I don't think we're there necessarily yet… and we won't be until we have a much more transparent environment where we understand what information is being collected and indeed why.

Does an organization need to listen to every phone conversation that I'm having? Probably not. Do I really care if they do? Probably not either, frankly, because I've got nothing to worry about. That's one of the arguments that people put forward. If you've got nothing to worry about, why would you object? Well, you would object because potentially things go wrong. And the challenge is that we still trust and believe in technology, and we still believe that machines don't make mistakes. The reality is that they do. And so, we must have some means of balancing that out, and that goes back to my point about transparency and trust… and recourse that we have as individuals should things go wrong.

What are the simplest things that the average person listening to this should do, simple things we can do that will dramatically, hopefully increase our security? What's your advice when you're sitting next to somebody on the plane and they ask, “What should I do differently?”

I think there are a number of fairly basic things that we can do that are standard hygiene things – like when you get up in the morning, you clean your teeth. For most people, you don’t think about it; it's just something that you do. So, in that category, what should you be doing? Well, of course, you must look at password protection around the devices you're using, whether you use a biometric or whether you use a very complex password. If you're using a password that you've created, then make sure you've got a number of different letters and numbers and symbols in it.

But make sure it's something that you can remember. I'm not going to tell you that you need to have a different password for every system. You're not going to be able to do that. That's farcical. You'll never remember them all. So, what will you do? You'll write it all down and then you'll lose it. That isn't going to make sense.

One of the other things that you need to… and it's interesting you mentioned being on a plane… you need to be very much more aware. I travel a lot, as you probably know. I was on an airplane just a few days ago, sitting behind somebody who, once we had just touched down, decided they were going to check their bank account. Without consciously having to look, I was able to get the details of their bank account, their name, their bank account number, their sort code and a full readout on exactly what transactions they had undertaken on that particular account over the last week. They then flipped to another account. So now I've got two accounts with the information. Had I wanted to, I could very easily have taken a picture of that and I could have done something with it. So that's about awareness. Understand that maybe it isn't appropriate for you to be looking at what I would term anyway to be relatively sensitive information in a public environment where people might just be able to shoulder surf and take advantage of you.

I think the other thing I would say is to be aware again if you are accessing sensitive applications of how you are doing it. It's very convenient for us… for instance, we're sitting in a coffee shop, we're sitting at the airport, to use the public Wi-Fi. Probably not a good idea to be accessing your bank account details in that instance. A lot of data is stolen in those environments. If you're just accessing simple emails or perhaps you’re just web surfing, fine. But just think it through. Think: Would I want whatever it is I'm accessing to fall into the wrong hands? And if the answer is no, don't do it. A lot of it is just to stop and think.

And finally, phishing. We are seeing an increase in phishing. If you see an email that looks too good to be true, it probably is. If it's coming from somebody... even that you recognize from a name or address point of view, just think whether that makes sense. I had one just the other day from somebody I know very well who allegedly was sending me a Messenger message. I don't use Facebook Messenger, and so it's very unlikely it would actually be a Messenger message from this individual because they know that I don't use it. So, it's just little things like that, I suppose… raising your awareness and just looking twice before you cross the road.

Do you remember in that movie Spaceballs, they're trying to get the combination to the planetary defense shield, and they kidnap the king's daughter, and he reveals that it's 12345. And they're like, that's the combination people use on their luggage. I noticed that that password is still in the top 10 list of passwords people use.

Yeah.

So, I'd like to close. Tell us a little bit about your company. What do people call upon you for, and what do you do on a day-to-day basis?

The Information Security Forum is a nearly 30-year-old company now. We're a not-for-profit. We're headquartered in the U.K. We offer a range of services to our members, who are typically multinational organizations from the Forbes and Fortune [lists]. We provide them with research, with cybersecurity tools, and with access to our analysts and a small range of consultancy services that allow them to adopt policies, processes and procedures that make them safer online and that improve their cyber profile and their cybersecurity.

We work collaboratively with many of the standards agencies – with NIST [National Institute of Standards and Technology] here in the United States, with ISO [International Organization for Standardization], for instance, and others… providing specialist input into standards that are being produced… and providing our opinions in those particular areas, drawn from working with organizations around the world, with some of the smartest individuals there are in cyber.

We are in a fortunate position, because we do have that ability to work with very clever people who are at the forefront of what cyber is all about. And then to work collaboratively with them in terms of implementing cyber resilience programs and ways in which they can improve their security posture, not just for their own benefit but also in a lot of cases for the benefit of their customers and their users.

So that's the ISF. What do I do on a regular basis? I do things like this. I also spend quite a bit of my time meeting, as you would expect, with our members, with potential members, with third parties. I'm active on the conference front as well, talking about the sorts of things we've just been covering. But also, I think, trying to raise a level of awareness of security and provide some insight that hopefully organizations and individuals will find interesting, that again help them to just pause and think about how they're using technology and how they might become safer online. I think that's the essence of it.

All right. Well, I think it's a great place to leave it. I want to thank you for a fascinating hour talking about so many different topics.

It's a pleasure, Byron.