Adam Myers:
Hello and welcome to our podcast Tales from the CyberLab. My name’s Adam Myers, I’m the Sales Director here at CyberLab, and I’ll be your host for today. Joining me is Kevin Leusing from Proofpoint. Kevin, welcome.
Kevin Leusing:
Thank you, Adam. I’m excited to be here. It’s a great topic for us to talk about. I am the Chief Technologist for EMEA for Proofpoint, and head of the office here in Cork.
Adam Myers:
I was going to ask you about your job and your role, and you’ve explained it beautifully, so thank you so much. We’ll be discussing data loss prevention today. With the world of AI shaping how data loss happens, a lot of people are making those big steps to the likes of Copilot. Data is everywhere, and what we’re seeing is that loss doesn’t just come from hackers. It comes from people mistakenly sharing information they’re not aware of, or from data that hasn’t been classified correctly, and we’ll talk today a little bit about what’s at stake for an organisation if it doesn’t modernise. So with that, I’ll kick off our first topic. Kevin, why should DLP start with people rather than just policies and perimeter controls?
Kevin Leusing:
At Proofpoint, we think about people as the centre of the universe in security and data loss. Most incidents are caused by people; they begin with the people, and whether they’re malicious, compromised or careless, it’s human events that cause data loss. When you look at what UK CISOs are saying, 60% say the greatest risk in their environment is their people. And so we think about how we protect the people. We talk about classification, behaviour, understanding what people do, understanding where the data is, and understanding how people interact with that data. One of the key things we say very often is, “Hey, data does not lose itself.” People are the ones moving it around. A careless user, for instance, is just trying to get their job done; they’re trying to be quick, and they make a mistake. And with malicious and compromised users, we see the same thing.
Adam Myers:
I think, like I said, in today’s world everyone’s just trying to move from A to B faster. In most instances you’re just trying to do your job, and you won’t be aware of what you could potentially be sharing. It could be an email that you attach the wrong folder or file to, or somebody you’ve copied in by mistake, and you might have breached data without ever meaning to. A lot of the time people aren’t maliciously trying to do anything; it’s just the speed we’re working at now, working remotely, not in the office all the time. It’s so easy to do in today’s world.
Kevin Leusing:
Oh, absolutely. In fact, the case you just mentioned, misdirected email, is actually a primary vector for data loss. I’m communicating with a company and I’m going to send an email to Adam Jones, I attach the confidential file that I want to send to Adam Jones, and it turns out I click your name instead, Adam, and I send my customer’s confidential information to you. That’s a big vector; it happens all the time. And yes, Proofpoint does help solve that.
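A minimal sketch of the kind of misdirected-email check Kevin describes: flag a send when a sensitive attachment is addressed to someone the sender has never corresponded with. The history lookup, labels and thresholds are illustrative assumptions, not Proofpoint’s actual product logic.

```python
# Illustrative pre-send check: warn when a sensitive attachment is addressed
# to a recipient the sender has no prior correspondence with.
# (Hypothetical data structures; not a real product implementation.)

SENSITIVE_LABELS = {"confidential", "internal-only", "customer-data"}

def likely_misdirected(sender_history: dict[str, set[str]],
                       sender: str,
                       recipients: list[str],
                       attachment_labels: list[str]) -> list[str]:
    """Return recipients worth warning about before the mail is sent."""
    if not SENSITIVE_LABELS.intersection(attachment_labels):
        return []  # nothing sensitive attached, no nudge needed
    known = sender_history.get(sender, set())
    # Flag any recipient this sender has never emailed before.
    return [r for r in recipients if r.lower() not in known]

# Example: the sender means to email adam.jones@customer.example but
# autocomplete picks adam.myers@cyberlab.example instead.
history = {"kevin@proofpoint.example": {"adam.jones@customer.example"}}
flagged = likely_misdirected(history,
                             "kevin@proofpoint.example",
                             ["adam.myers@cyberlab.example"],
                             ["confidential"])
if flagged:
    print(f"Pre-send nudge: first-time recipients for a confidential file: {flagged}")
```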
Adam Myers:
I think, from what I’ve seen previously, one of your real core strengths is that you align email and DLP together, don’t you? Email is probably the biggest way to get hit with phishing, but it’s also a really sensitive way to share information. You could potentially expose a business by sharing a file, as in that first example, but that also bleeds into data loss prevention and how we set parameters there. I guess that’s how your business model works, and I think it’s brilliant that you blend the two together.
Kevin Leusing:
Yeah, thank you. And it’s not just email; it’s all social channels, SaaS applications, web applications, all of that. We started off in email, that’s where we made our bread and butter, but as the work environment has changed, we have changed as well to adapt to these channels.
Adam Myers:
So as we talk about insider personas, we’ve got three models there: careless, compromised and malicious. How do you tackle the three insider personas? What does Proofpoint do to, I guess, look after those areas?
Kevin Leusing:
And they really are three distinct types of persona. With the careless user, again, these people are just trying to get their jobs done; they’re just trying to be expeditious about what they do. So you don’t want to get in the way of them doing their work, but you do want to monitor them so they’re not losing data in ways they shouldn’t. Even simple things like pre-send coaching, paired with targeted training, will reduce the errors they make. So watch them in real time, understand what they’re doing, and prompt them, nudge them, to do the right things. With the compromised user, it’s really about identity: who this person is, what they’re doing, looking for anomalies and detecting the exfiltration patterns that hijacked accounts typically present. On the malicious side, now, we use the word malicious, but this could be a leaver, someone leaving the company who did some good work while they were here and wants to send some of that work back home so they have it with them. So put them on a watch list and understand what they’re doing; least-privilege access is key in these kinds of situations as well. But I think the big thing companies need to do is think about who their highest-risk users are and watch that cohort, keep an eye on them. Of course, you won’t watch the entire enterprise that closely, but put a special focus on them, because that’s where the biggest reward for watching is.
Adam Myers:
And I guess with malicious, you’re probably looking at an insider threat profile, aren’t you, where you can score those people when they start to show suspicious patterns, at three in the morning, from a reverse proxy out of a different country. I guess it’s spotting those things, and the tool itself can actually help staff look for those people that might be within the organisation.
Kevin Leusing:
Absolutely. And endpoint tracking is critical here. You’ve got to have an endpoint product that’s looking at what a user is doing on a device. In the insider threat tool, for instance, we can also look at the details of what they’re doing step by step. So let’s say they found a confidential file they want to send out. They take that file and try to send it to their personal account. Oh, you know what, that gets blocked, because we look at personal accounts. So they change tack: they try to put it on a USB drive. Well, we block that from happening too. They try to do a screen print; they can’t do that either. So you’re looking at all those activities a user is going to try, and as they’re doing that you’re also building a case, understanding who they are, so you can go back to them and take disciplinary action if necessary.
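A rough sketch of the step-by-step blocking Kevin walks through: each exfiltration channel the user tries (personal webmail, USB, screen capture) is checked against policy, and every attempt is appended to a time-stamped case timeline. The channel names and policy table are assumptions made for illustration.

```python
from datetime import datetime, timezone

# Illustrative endpoint policy: which exfiltration channels are blocked for
# files labelled confidential. (Hypothetical rules, not a real product config.)
CHANNEL_POLICY = {
    "personal_webmail": "block",
    "usb_storage": "block",
    "screen_capture": "block",
    "corporate_email": "allow",
}

case_timeline: list[dict] = []  # evidence trail for investigators

def evaluate_attempt(user: str, channel: str, file_label: str) -> str:
    """Decide block/allow for one attempt and record it for the case file."""
    if file_label == "confidential":
        action = CHANNEL_POLICY.get(channel, "block")
    else:
        action = "allow"
    case_timeline.append({
        "when": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "channel": channel,
        "file_label": file_label,
        "action": action,
    })
    return action

# The leaver in Kevin's example tries three channels in a row; each is blocked
# and time-stamped, building the evidence Adam mentions next.
for ch in ("personal_webmail", "usb_storage", "screen_capture"):
    print(ch, "->", evaluate_attempt("leaver@corp.example", ch, "confidential"))
```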
Adam Myers:
Yeah, I guess that’s the time-stamping, isn’t it, to say this happened at this time and we’ve got proof and evidence of it. If it does go further, that’s a really key part of the technology. Amazing. So just onto our third topic: email is still the front door for data loss. What practical governance steps can curb misdirected emails and mis-attached data?
Kevin Leusing:
Governance is so critical. You’ve got to have a programme built out around how data loss happens. So, pre-send checks for external recipients; look at sensitive attachments; look at hidden distribution lists. You want to provide nudges to the users as they’re working, to show them what’s going through. You’ve also got to tighten the rules for those destinations. Take personal webmail: here at Proofpoint, I can’t send to my personal webmail account without going through an isolation browser. You want to look at approved and unapproved cloud services and put governance around what those services are all about. But the key thing you’ve got to understand is where your risky file types are and where your classified information is, and put rules around what those look like, including password-protected archives. Often tools don’t have visibility into those password-protected files, so you’ve got to put controls around them. So the big approach here is to combine the governance with coaching and teachable moments, and ease user frustration by giving people timely information.
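One concrete piece of the governance Kevin mentions is visibility into password-protected archives that ordinary content inspection can’t open. A small sketch, using only Python’s standard library, of how a scanner might at least flag encrypted ZIP members; the surrounding quarantine rule is an assumption, not Proofpoint’s policy.

```python
import zipfile

def has_encrypted_members(path: str) -> bool:
    """Return True if any file inside the ZIP archive is password protected.

    Encrypted ZIP entries set bit 0 of the general-purpose flag, so protection
    can be detected without knowing the password or reading the content.
    """
    with zipfile.ZipFile(path) as zf:
        return any(info.flag_bits & 0x1 for info in zf.infolist())

def governance_decision(path: str) -> str:
    # Illustrative rule: content we cannot inspect gets quarantined for review
    # rather than silently allowed out of the organisation.
    if path.endswith(".zip") and has_encrypted_members(path):
        return "quarantine: password-protected archive, inspection not possible"
    return "continue normal DLP scanning"

# Usage (assumes the outbound attachment has already been written to disk):
# print(governance_decision("/tmp/outbound_attachment.zip"))
```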
Adam Myers:
I guess just with the classification of data, where AI is maybe helping a lot of organisations now, Kevin, is that you and I would probably classify data differently. I might put it as sensitive, you might have it as public-facing, and I guess that’s a bit of a risk to a business: again, not meaning to, but we might be classifying it differently. That automation is where AI is now helping with classification, and it’s a really good starting point for a business to get a framework in place so they can start that journey, I guess.
Kevin Leusing:
Absolutely. AI is a game changer in the data classification space. It’s an area where, if you get the right types of classifications together, if you understand the right data movement, if you understand what a company’s core business is, what their file types are, who they communicate with and where that data resides, you can use AI to apply the classifications. Humans are going to classify differently, as you said: we’ll look at the same data, and you may put it as confidential and I may put it as proprietary, but the AI will give a consistent view of that type of data, which humans can then review and reclassify as needed. And that reclassification is much less work than a purely human effort would be.
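A sketch of the consistency point Kevin makes: when one model classifies every document against a single fixed taxonomy, identical content gets identical labels, and humans only review the exceptions. The `call_llm` function is a stand-in for whatever model you use; it is an assumption, not a specific vendor API.

```python
# Illustrative AI-assisted classification against a fixed taxonomy.
# `call_llm` is a placeholder for any chat-completion style model call.

TAXONOMY = ["public", "internal", "confidential", "restricted"]

PROMPT_TEMPLATE = (
    "Classify the following document into exactly one of these labels: "
    "{labels}. Reply with the label only.\n\nDocument:\n{text}"
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

def classify(text: str) -> str:
    """Return one label from the fixed taxonomy for a piece of text."""
    answer = call_llm(PROMPT_TEMPLATE.format(labels=", ".join(TAXONOMY),
                                             text=text[:4000])).strip().lower()
    # Fall back to human review if the model answers outside the taxonomy.
    return answer if answer in TAXONOMY else "needs-human-review"

# Because every document goes through the same prompt and label set, the
# output is consistent; reviewers only handle the "needs-human-review" cases
# and any reclassifications, which is far less effort than labelling by hand.
```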
Adam Myers:
Speed, I guess. It’s a bit like any sort of data project: typically you look at it first and go, where do we start? It’s the speed at which you can do that now, and I guess that’s where AI is definitely going to help organisations. And as we take that big step into agents and Copilot, again, I think you guys are so well set up for delivering Copilot into a business securely.
Kevin Leusing:
Yep, absolutely. When you think about agents and you think about DLP, often we get incidents from these DLP reports and humans have to go through them. Well, hey, what if I could build an AI agent that goes through them first, so I as a human can focus on the higher-priority, bigger-risk items and let the basic triage happen with an agent?
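A sketch of the triage agent Kevin imagines: score each DLP incident, auto-resolve the clearly low-risk ones, and queue the rest for a human in priority order. The scoring weights and threshold are invented purely for illustration.

```python
# Illustrative DLP incident triage: an automated pass handles the obvious
# low-risk noise so analysts see only the high-risk queue.
# Weights and thresholds are invented for illustration.

RISK_WEIGHTS = {
    "label_confidential": 40,
    "external_recipient": 25,
    "user_on_watchlist": 25,
    "after_hours": 10,
}

def score(incident: dict) -> int:
    return sum(w for k, w in RISK_WEIGHTS.items() if incident.get(k))

def triage(incidents: list[dict], auto_close_below: int = 30):
    """Split incidents into an analyst queue (highest risk first) and auto-closed."""
    queue, auto_closed = [], []
    for inc in incidents:
        (queue if score(inc) >= auto_close_below else auto_closed).append(inc)
    queue.sort(key=score, reverse=True)
    return queue, auto_closed

incidents = [
    {"id": 1, "label_confidential": True, "external_recipient": True},
    {"id": 2, "after_hours": True},
    {"id": 3, "label_confidential": True, "user_on_watchlist": True, "after_hours": True},
]
queue, closed = triage(incidents)
print("analyst queue:", [i["id"] for i in queue])   # -> [3, 1]
print("auto-closed:", [i["id"] for i in closed])    # -> [2]
```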
Adam Myers:
So just on our fourth topic: what does a unified data security programme look like when you bring DLP, DSPM and insider threat together?
Kevin Leusing:
Yeah. This is an area where it’s been really exciting being here at Proofpoint, because we’ve built some things organically and we’ve acquired other technologies to bring this all together. When you combine DLP, DSPM (data security posture management) and insider threat, you’ve got insight into both content and behaviour, and by doing so you can reduce the false positives. It’s really an interesting way of seeing the entire universe. You put that into a unified workflow dashboard and you can see the incident, you can see what’s happening, you can see the entire flow of data. With DSPM, you’re going to discover and classify that sensitive data, whether it’s on premises, in cloud stores, on hard drives, wherever it is; we can find it. Then you’ve got to get the response playbooks together and understand how you respond to each exfiltration attempt or any other kind of alert you’re getting. And the whole goal is to move from checkbox compliance, which the old vendors have all done, the old regexes and the old patterns you looked at, to truly reducing risk by putting a programme like this into place. It gives CISOs and security leaders insight into what data they have, where that data resides, who’s touching it and why. And so we bridge those silos and give organisations clarity across endpoint, cloud and email.
Adam Myers:
Somebody’s listening today and they’re thinking, how do I start this project? Potentially it’s something they’ve considered and they’re thinking, well, that’s the next big project we’re going to deliver. I guess DSPM is a good way to start that, isn’t it, in terms of getting an idea of what it looks like, a snapshot of your data, and starting that journey? If they were listening, how would you say they should start this sort of data security initiative?
Kevin Leusing:
Yeah, it really depends on the organisation’s maturity and willingness to take on a big project. A DSPM project is a big project; AI reduces that piece of it. Typically, our customers start with email DLP, and there we do use regexes, and we do use some of our intelligence databases and large language models to identify the context of messages, so we can reduce false positives that way. So they start off with email DLP, then they expand out to an ITM-type programme, and then they add DSPM so that the three work in conjunction. You could start at any one of those layers, but we typically see email, then insider threat, then DSPM.
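Since Kevin mentions regex-based email DLP combined with extra context to reduce false positives, here is a minimal generic example: a pattern match for card-like numbers paired with a Luhn checksum so that random 16-digit strings don’t fire an alert. It illustrates the general technique only, not Proofpoint’s detection logic.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){12,15}\d\b")

def luhn_valid(number: str) -> bool:
    """Luhn checksum: filters out most random digit strings a bare regex would flag."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    return [m.group() for m in CARD_CANDIDATE.finditer(text)
            if luhn_valid(m.group())]

body = "Invoice ref 1234 5678 9012 3456, card 4111 1111 1111 1111, order 12345678901234."
print(find_card_numbers(body))  # only the Luhn-valid test card is reported
```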
Adam Myers:
If anyone listening would like help, please reach out; we can definitely help with that initial project and how you deliver it within the business. So, onto our fifth topic: how do hybrid work, cloud collaboration and AI usage change data governance?
Kevin Leusing:
Yeah, as we talked about earlier, Adam, AI is absolutely changing the game in everything we’re doing, and the risks are unbelievable. Three out of five UK CISOs worry very significantly about customer data loss via public gen AI. A worker is trying to get his job done, he takes something from inside the walls, copies it, pastes it into a gen AI tool to try to understand what’s happening with it, and that’s a loss vector, right? You could lose some of your data that way by not understanding what the AI tools are actually doing. So you want to apply consistent controls across SaaS, endpoints, web, AI tools and so on, so you can classify and monitor, and then coach those users on how those workflows go and what the risks are. The policies have to reflect where people work today, how people work today and what data they’re accessing today. And the quick wins really come from visibility. We talked about the classification of data: the more visible you can make this to your security leaders and your security response team, the better your results are going to be. There’s no doubt that AI adds velocity and complexity to our environments, so corporate governance has to ensure that sensitive information isn’t fed into those tools and keep the data safe, and our people-centric approach helps ensure that the technology matches the governance.
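A sketch of the copy-and-paste scenario Kevin describes: before a paste is allowed into a public gen AI tool, the clipboard content is checked for sensitive markers and the user is coached rather than silently blocked. The domain list and markers are assumptions for illustration only.

```python
# Illustrative paste-time guardrail for public gen AI tools.
# Domain list and sensitivity markers are assumptions, not a real policy.

PUBLIC_AI_DOMAINS = {"chat.example-ai.com", "genai.example.net"}
SENSITIVE_MARKERS = ("confidential", "customer", "internal only")

def on_paste(destination_domain: str, clipboard_text: str) -> str:
    """Return the action a people-centric control might take for this paste."""
    if destination_domain not in PUBLIC_AI_DOMAINS:
        return "allow"
    lowered = clipboard_text.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        # Coach rather than just block: a teachable moment at the point of risk.
        return ("coach: this looks like confidential data; "
                "use the approved internal AI tool instead")
    return "allow-and-log"

print(on_paste("chat.example-ai.com", "CONFIDENTIAL customer pricing for Q3"))
```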
Adam Myers:
Just something that we’re actually doing here at CyberLab, as some listeners may have heard, is our Live Hack. We’ve done it at SecureTour, and we’re actually evolving that into Copilot: when we gain access to a compromised user, how we can use Copilot to potentially give us sensitive information. That again goes back to classification of data, how we put the guardrails on data and make sure that if I was to ask for, I don’t know, the salary of a senior leader, it’s not going to give me that information based on what file access people have. So those guardrails are super important, but if anyone is interested, we are doing that as a live hack, and it will build on something we’ve done in the past at some of our live events. So feel free to listen; we’ll be bringing that to people in October. Something to be excited about. Good stuff. I’ll just wrap things up and then I think we’re good to go. So Kevin, it’s been absolutely amazing to listen to you today and I’ve learned a lot. If there was one key takeaway you could give our listeners today, what would it be?
Kevin Leusing:
At Proofpoint, we look at all of this the way we started off: human-centric. People are at the core of everything we do. So you’ve got to encourage your employees to pause before they send and think about what they’re doing, especially when they’re dealing with sensitive data; within the walls of a company, I treat everything as sensitive, so you’ve got to have employees think that way. First, you have to reinforce the culture that one careless click can have serious implications: one click, sending an email with confidential information, can have serious implications for your business. You have to build a culture where data security is everyone’s responsibility, not just IT’s. You have to get your employees to really embrace their responsibility in preventing data loss. And then just remember Proofpoint’s commitment to protect people and defend data. That’s what we’re all about, and we’re here to help with that.
Adam Myers:
Amazing. So that concludes this episode of Tales From The CyberLab. Join us next time for a brand new episode. Until then, Stay Secure.