AI’s role in data protection isn’t just about automation – it’s about safeguarding sensitive information before it’s compromised. With data privacy concerns growing and AI misuse on the rise, organisations must adopt AI-driven solutions to stay ahead of risks.

Tales From the CyberLab: Episode 10

AI's Role in Data Protection Explained

with Stuart Wilson, Senior Manager Sales Engineering at Forcepoint

AI’s role in data protection is critical for businesses today 🌐

Stuart Wilson from Forcepoint joins our new host, Adam Myers, to discuss how AI impacts data security, including:

✔️ The risks and benefits of AI in protecting sensitive data
✔️ Why AI models depend on quality data to function effectively
✔️ The growing threat of shadow AI and how to manage it
✔️ Key steps to safeguarding data when introducing AI into your business
✔️ Balancing innovation with security in the AI-driven world

Listen on Spotify

Meet Our Guest

Stuart Wilson

Stuart Wilson is a Senior Manager of Sales Engineering at Forcepoint, where he helps organisations navigate the evolving landscape of cyber security, with a particular focus on data protection and insider risk. With deep technical knowledge and a pragmatic approach, Stuart bridges the gap between business needs and technical solutions — advising on how to build secure, resilient environments in the age of AI.

With years of experience working across sectors, Stuart is passionate about helping businesses understand and control their data, reduce risk, and make informed decisions about deploying new technologies safely and responsibly.

Meet Our New Host

Adam Myers
Sales Director, CyberLab

With over 14 years of experience in Cyber Security, Adam Myers leads a team at CyberLab dedicated to protecting organisations from emerging cyber threats. As a senior leader, he guides the team to deliver proactive, layered security strategies and managed detection and response services.

Adam has worked across various sectors, including the NHS, education, and enterprise, helping organisations achieve their goals while enhancing cyber resilience. He’s committed to raising cyber awareness and simplifying complex security topics, offering expert advice on the latest cyber threats and trends.

Episode Transcript

Adam Myers

Hello and welcome to our podcast: Tales from the CyberLab. My name is Adam Myers. I’m joined today by Stuart Wilson from Forcepoint, and we’ll be discussing AI’s role in data protection. For those who don’t know me, and for our regular subscribers, I’m the new host. I’m the Sales Director at CyberLab, and I’ve worked in technology and cyber sales for over 14 years. Stuart, do you want to tell us just a little bit about what you do in your day-to-day role at Forcepoint?

Stuart Wilson

Yeah, sure, Adam. So first off, thanks very much for having me here as part of the podcast today. It’s great to take part. I look after the pre-sales function within Forcepoint for the entirety of the UK and Ireland. Essentially, that means I’m working with customers and prospects to make sure that the Forcepoint technology stack is a good fit for those organisations. Now, I’ve been at Forcepoint for around about 13 years, so that’s a lot of customers and prospects I’ve seen over that period, working very closely with you guys throughout as well. Forcepoint, for those who don’t know, would traditionally have been well known as Websense back in the URL-filtering and web-security days. More recently, Forcepoint has been well recognised by analysts, customers and partners for our capabilities in the data protection space: traditionally DLP, and then branching into more recent technologies such as DSPM, with things like DDR capability, which I’m sure we’ll come across in the conversation today.

Adam Myers

Yeah, it’d be interesting for you to talk me through that. I think I’ve trumped you by one year there, with my 14 to your 13, but we’ve worked together a very long time, which brings me onto a little bit about CyberLab and Forcepoint’s relationship. We’ve worked together on some of our largest customers, especially in public sector and enterprise sales. So first of all, thank you for your support over the years, I think we’re 10 to 15 years into that relationship, and thank you to everyone on the Forcepoint team, especially for joining us today. So, AI and cyber security: a bit of a buzzword, I guess. AI has been in cyber security for well over 10 years, and I think we’re seeing a big transformation and a big step now with where AI is heading. So one of the topics we’ll discuss today is how AI is transforming data protection, including the threat detection and response capabilities that more and more clients are looking for now. Can you expand a little bit on what you and the team at Forcepoint are seeing?

Stuart Wilson

Sure. I mean, you touched on a good point there, right? AI has been around for a lot longer than people give it credit for. We currently talk about generative AI, the new kid on the block that’s been around for a couple of years now, but its predecessors, things like machine learning and deep learning, have been around in technology for a long time. You referenced 10 years or so; well, for Forcepoint, it’s exactly that. We’ve had machine learning in the data security product for that duration, using it to understand what data looks like for an organisation, and then using that knowledge to protect that data as it’s being exfiltrated out of the organisation. The more recent generative AI side of things is really just enhancing that capability: making it more complete, more capable, quicker at doing things, and much more real-time in terms of what’s on offer from the technology.

Adam Myers

Yeah. Are there any examples? We’ve talked about a few of the more historic ways that AI is being used. Are there any big jumps in technology that some of our customers could benefit from, perhaps around identifying patterns or detecting abnormalities? Is there anything there that we may not be aware of that you can shed a little bit of light on?

Stuart Wilson

Yeah, so from a Forcepoint-specific perspective, and I’m sure we’ll come to this in more detail as we go through the rest of the conversation, capabilities like DSPM, or Data Security Posture Management for those who are not aware, combine an ability to understand the posture of your data with technologies like classification, and AI is very much an intrinsic part of that modern-day classification solution. It’s an ability to automate mundane activities that users are not particularly great at doing themselves: looking for those patterns, looking for those different types of data, making matches against them, and then applying classifications based on how the AI sees it.

Adam Myers

So what I see, Stuart, is that a lot of customers probably struggle a little bit with the classification of their data and where to start. When I talk to some of the larger enterprise accounts, where is that starting point? They’ve probably got data that’s been archived for years and years. Where would you start, and how can Forcepoint help with that?

Stuart Wilson

Yeah, it’s a good rationale to start with classification, because as you say, people have built up data over a long period of time, and that’s data they want to put into different repositories moving forwards. But if you don’t know what that data is, that’s a risky business. Using AI-driven classification removes a lot of that mundane user activity. Often people get a bit touchy when you talk about AI and the impact it’s going to have on people’s roles, but when you take something that people aren’t very good at, don’t enjoy doing, and that can add risk to the business, and you replace it with AI capability, then I think it becomes a really compelling argument to use it. Forcepoint DSPM is doing exactly that. It’s trained on an organisation’s data, so it’s not just generic classification, it’s actually specific to that organisation, and that can often be a great first step for people. Forcepoint is really well positioned, though, to complete that entire conversation across the data life cycle. Some might want to start with classification; others might say, actually, I’ve got immediate risk and I need to stop data exfiltration out of the organisation, or make sure the right people are able to interact with data while others can’t send that data out. And again, that’s the heritage we’ve got: DSPM is the new stuff, but data security, including data security in the cloud, is what Forcepoint has been really well known for.
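To make the idea concrete, the automated classification Stuart describes could be sketched like this. This is a hypothetical illustration, not Forcepoint’s actual implementation: the keyword scoring here is a toy stand-in for a trained model, and the labels and threshold are invented for the example.

```python
# Hypothetical sketch of AI-assisted data classification: a model scores a
# document against label categories, the highest-confidence label is applied
# automatically, and low-confidence documents fall back to human review.
# The keyword scorer below is a toy stand-in for a trained (small language) model.

REVIEW_THRESHOLD = 0.6  # invented cut-off for auto-labelling

def score_document(text: str) -> dict[str, float]:
    """Toy stand-in for a model: keyword-hit confidence per label."""
    keywords = {
        "confidential": ["salary", "payroll", "contract"],
        "internal": ["roadmap", "meeting", "draft"],
        "public": ["press release", "blog"],
    }
    lowered = text.lower()
    return {
        label: sum(1 for term in terms if term in lowered) / len(terms)
        for label, terms in keywords.items()
    }

def classify(text: str) -> str:
    scores = score_document(text)
    label, confidence = max(scores.items(), key=lambda kv: kv[1])
    if confidence < REVIEW_THRESHOLD:
        return "needs-human-review"
    return label

print(classify("Payroll export: salary bands and contract renewals"))  # confidential
print(classify("Random note"))  # needs-human-review
```

The point of the sketch is the workflow, not the scorer: the mundane, error-prone labelling is automated, and only ambiguous documents reach a human.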

Adam Myers

Because I guess how we would each classify data is very different. You might mark something as public; I might mark it as confidential. AI is just going to keep that standard consistent and maintain it throughout a business, so I do think it’s a big improvement. Just a bit more on DSPM: is that a snapshot using the actual company’s data, not demo data? Is it their live data that you give a snapshot of, for example?

Stuart Wilson

Yeah, DSPM looks at a repository of data an organisation has, maybe in a cloud service, and it actually goes through, using AI technology, to understand what that data is and then compute the risk or posture situation for the organisation, presented back via dashboards and so on. So people can actually understand: what is our posture, and what is the risk to the organisation through misclassification of data, through oversharing of data, and so on? Again, that’s that great starting point. If you don’t know what your data is and where it is, some would say it’s impossible to then do anything with it.

Adam Myers

Valid point, valid point. Around AI’s role in incident response: I think a lot of organisations are taking their incident response planning a little bit more seriously, and they’re testing it. Where do you see that evolving through the developments of AI?

Stuart Wilson

Yeah, so again, it’s the response. We’ve been using technology akin to AI for quite some time; for Forcepoint specifically, things like the incident risk ranking we had in the products a couple of years ago. So taking that collection of individual pieces of data and actually trying to make sense of it in an automated fashion. It’s not something that an individual couldn’t do and achieve themselves, much like incident response, but it’s something which can be done much quicker, much slicker and much more accurately when you’ve got technology like AI stepping in and taking part.

Adam Myers

Yeah, amazing. So how does Forcepoint use AI to enhance user-focused security? That probably takes us into our next topic around that approach. What are you doing there, I guess?

Stuart Wilson

So I guess you can look at this in two different ways. On the pure, traditional DLP front, Forcepoint uses something called Risk-Adaptive Protection, which came along a few years ago. Risk-Adaptive Protection is focused on looking at indicators of behaviour. While it’s maybe not AI in the same sense as generative AI, it’s still that computing of data, of behavioural information about an individual, to assess the risk they pose to the organisation’s data. And if that risk is assessed to be great enough, then using technologies like Risk-Adaptive, you can ratchet up the DLP controls: taking a user from a position of, look, this user’s low risk and they’re able to just get on with their job, to, actually, some of the behaviours they’re exhibiting are risky, maybe we need to start prompting that user and giving them some advice and guidance, and ultimately through to a position of maybe blocking that user, so actually removing their ability to interact with or transmit that data outside of the organisation. That’s one side. The second side, which we were just touching on, is DSPM, and that’s where we see the real attention and focus at the moment. Lots of organisations want to understand their data much more than they do today. DSPM, as we’ve said, is that brilliant first starting point for people: go through, assess your data, know where it is, classify it and understand what your risk is, and then from that point onwards start remediating. And there’s a technology layered on top of DSPM called DDR, Data Detection and Response, which is about trying to achieve that in near real time. Rather than looking at all the historic data and scanning it, which can be a time-consuming process for any organisation, DDR focuses on newly created data and data that’s actually in use by individuals in the organisation. It applies that same classification capability and technology to make sure that your highest-traffic or highest-risk data, the stuff that’s being used, gets that immediate protection.
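The ratcheting enforcement model described above, where controls tighten as a user’s risk score rises, can be sketched roughly as follows. This is an illustrative sketch only: the behavioural indicators, their weights, and the thresholds are all invented for the example and are not Forcepoint’s actual scoring model.

```python
# Illustrative sketch of risk-adaptive enforcement: behavioural indicators
# accumulate into a risk score, and the score maps to an escalating DLP
# action (allow -> prompt/coach -> block). All weights and thresholds are
# invented for illustration.

INDICATOR_WEIGHTS = {
    "off_hours_access": 20,
    "bulk_download": 35,
    "unusual_destination": 30,
    "failed_logins": 15,
}

def risk_score(indicators: list[str]) -> int:
    """Sum the weights of the observed behavioural indicators."""
    return sum(INDICATOR_WEIGHTS.get(i, 0) for i in indicators)

def dlp_action(score: int) -> str:
    """Map a risk score to an enforcement tier."""
    if score < 30:
        return "allow"   # low risk: the user just gets on with their job
    if score < 70:
        return "prompt"  # rising risk: coach the user, log the event
    return "block"       # high risk: prevent the data leaving

# The same user, exhibiting progressively more risky behaviour:
print(dlp_action(risk_score(["off_hours_access"])))                   # allow
print(dlp_action(risk_score(["off_hours_access", "bulk_download"])))  # prompt
print(dlp_action(risk_score(
    ["off_hours_access", "bulk_download", "unusual_destination"])))   # block
```

The design point is that enforcement is proportionate: low-risk users are never blocked, and controls only escalate as observed behaviour deviates from normal.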

Adam Myers

Just going back a little bit to insider threat. Very large organisations, from a risk perspective, are in most instances worried about insider threat. Do you see that still being a really prevalent thing that AI is going to help with? You mentioned watching for behaviours: at 3:00 AM I start accessing folders and files that I don’t usually touch. That risk profiling is a really interesting thing that Forcepoint offers, and the visuals of it paint a very good picture for people of what that might look like. So do you still see insider threat as a really big risk to large organisations?

Stuart Wilson

Yeah, 100%. And the way you just described it there is great; that’s exactly what Risk-Adaptive is for, detecting that anomalous behaviour in an individual. I think the bit that people maybe forget, or don’t appreciate, is that an external risk can present itself in exactly the same way as insider risk. Just because the individual inside the organisation doesn’t know they’re taking part in something dodgy doesn’t mean they aren’t, whether that’s harvested credentials through social engineering, or that kind of inadvertent, traditional insider threat: somebody trying to do their job more efficiently and unfortunately opening the business up to risk. So yeah, Risk-Adaptive will look for that change in behaviour. It’ll understand what’s normal for an individual, and when it deviates, that’s when the risk score starts to increase and the protection can start to increase. But the key thing with Risk-Adaptive is that it’s there to make sure the business is able to work in the way it needs to. We always tend to get round to the conversation about blocking this and blocking that; actually, Risk-Adaptive is there to enable the people who really need to interact with that data to a hundred percent do so, and only start to put controls in place when their behaviour changes.

Adam Myers

Yeah, and not always knowingly. I might do something just going about my day; I’m not trying to put the business at risk, but I might be exposing the business to a data leak without really knowing it. It could be a large database that I’m not sure I have access to, like you mentioned, and I’m just not aware that I’m actually sharing it externally, for example, and that’s where it steps in on those errors, which I think is quite clever.

Stuart Wilson

Yeah, in cyber we’ve always spoken about defence in depth, different tiers of security, and when it comes to data it’s exactly the same approach. We go from maybe prompting a user to advise them that what they’re doing is perhaps not in the business’s best interests, typical user education, through to the AI capability of classifying the data on their behalf, and through to DLP protection where we can actually say: no, that’s not a good idea, we’re not going to permit that to leave the organisation.

Adam Myers

Oh, amazing. Moving into our third topic, around threat prevention: how is AI used in real time to prevent data loss and secure access?

Stuart Wilson

Yeah, again, it’s a combination of the points we’ve been talking about. For threat prevention in real time, people look at closing the front door, if you like: firewalling technology, which I’m sure will be integrating AI, and things like web security, analysing the content of a website and what attachments or downloads might be coming into an organisation through that front door. I think AI has huge benefits to add in that respect. But, defence in depth, as we were just saying: shore up that front door, but it would be remiss not to focus on the actual data itself too. So again, back to that classification starting point. Consistency, I think, is the crucial thing people need to focus on when it comes to protecting their data. The front door is one approach, but when you talk about exfiltration or loss of data, there has to be the same level of protection at every possible exfiltration point, and it has to be the same user experience as well. Otherwise, as I’m sure everybody listening will know, a user will try to find a way around something, and if they’re permitted to do it, then it must be okay, it can’t be risky to the organisation, right? So that consistency, in DLP policy, in how it’s applied, and in the user experience, is what people really need to get on board with. From an admin and overhead perspective, it means the day-to-day activity of looking into all those different things that are going on can be reduced massively, because you can trust in that consistency of results.

Adam Myers

Very good. And just quickly, a topic I did want to mention: I’ve come from an ICT background, historically around network security, and we’re seeing big advancements in AI there. I know a lot of customers over the last 10 years have moved from MPLS to SD-WAN. So where is AI heading in terms of network security, in terms of what you’re seeing for those customers making that migration?

Stuart Wilson

So for network security, I guess the possibilities are somewhat endless: moving from a position of sensible, static decisions around traffic routing and which links to use, to making AI-based decisions about how to route that traffic. So more predictive capabilities. I think we’ll see solutions with more of an understanding that, at this time of day, this type of traffic will increase, so let’s make different routing or security decisions based on what we know is going to happen across that typical environment. Streamlining, being more efficient, and making smarter decisions quicker: I think those are some of the key benefits AI is going to introduce.

Adam Myers

And in terms of that network deployment: we’ve done one for a very large customer I looked after for over 10 years, and it was quite a seamless deployment. I didn’t see this big challenge around moving to new technology; it felt quite seamless. From a sales engineering perspective, would you say you see that?

Stuart Wilson

So for those who don’t know Forcepoint in the network security space, it’s one of our USPs. The management is well respected, and lots of government and defence-type organisations utilise Forcepoint firewall technology for that exact reason. The management’s very simple, but we also have something called zero-touch deployment. The pre-configuration is set up for the device, and when the device is shipped to a location, it’s just plugged in, it makes a call home, and then it retrieves its configuration. So ‘zero touch’ has never been more accurate in that respect. For people operating lots of small locations, again, military forward operating bases, that kind of thing, it means they don’t need vast technical capability on site for somebody to plumb in a device and do the initial configuration; the device just makes that call home.

Adam Myers

And we’ve got a few case studies around that. I think it’s an interesting point, because in terms of project delivery, a lot of people don’t make these moves, probably because it’s a case of “well, have we got the time and resource?” But you mentioned things like satellite offices or smaller sites, where you maybe don’t have as many hands on when you deliver it. I think that generally was our experience; it felt like a very seamless transition. So I think that’s a big step in technology.

Stuart Wilson

It is a good point. I think lots of people typically only move away from their existing network security stack if they’ve got a problem or they’ve had a bad experience. Actually, people should be asking: can we run more efficiently and more effectively, or achieve a better business outcome, by moving to a newer technology or something that offers us something a little bit different?

Adam Myers

Yeah, amazing. On to our fourth topic, around compliance and regulations and how AI fits in: we actually work very closely with the Northern Information Governance Centre. We’ve done a lot of talks with them, and our live hack has been really good in terms of getting their take on things. One snapshot I got from that was around regulations like ISO 42001; a lot of organisations have made the move to ISO 27001. How is AI helping businesses stay compliant with data protection laws, and especially around ISO 42001, what are you seeing there?

Stuart Wilson

Yeah, of those two you’ve mentioned, ISO 27001 is probably the one I’ve had more direct experience with, but we’ve seen a big uptick in people wanting to understand DLP capabilities because of the changes to ISO 27001, so a need to protect data. And then you’ve got things coming in like DORA and NIS2, for example.

Adam Myers

Yes.

Stuart Wilson

Again, they’re not explicit in terms of the actual type of protection, but they offer guidance, and if you read into that guidance, it strongly suggests that DLP, or data loss and data security technology, is what’s required, and organisations have to show they’re doing something. It’s no good burying your head in the sand and assuming you’re not going to have some kind of incident. Just look at where we are today: large organisations on the UK high street, well-known logos, suffering immeasurable damage right now, financially and reputationally, because of an incident they’ve had.

Adam Myers

Yeah, it is very interesting. I think we’re going to see more of that, and more around how AI is regulated. A lot of people are concerned about how they introduce it to their business. And I think what Forcepoint probably has as a unique point there is that you can help with that journey, because we’re all introducing AI into our businesses constantly, and there are new versions coming out continuously. It’s how you regulate that and introduce it to your business, which is something I think you can definitely help with.

Stuart Wilson

100%. Right. And you mentioned AI regulation there. The EU AI Act came into effect in the last year or so, and the Bletchley Declaration in the UK was a similar timeframe, both focused on the safe development and safe use of AI. For a consumable technology, everyone jumps on ChatGPT as the example: isn’t it great, it can automate this, it can be super quick, it can change this for me. The bit that people don’t necessarily understand quite so much is that AI models are only as good as the data they’ve been trained on and the way they’ve been written to consume or use that data. So what do I mean by that? Well, take a traditional large language model. Let’s say you have two arguments: one says the blue side of the city is the best, and the other says the red side is the best. If you only speak to one half of the city and get your data from that one half, then when you ask that AI model which side is best, it’s going to come back with that one side, because your data is inherently biased at that point. In those open, large language models that are being used, that’s a real concern, and you’ve got to be very careful around it. The Forcepoint approach uses something called small language models. We use models in the classification that are each aware of different types of data, and together they make that compelling decision about how something should be classified. What’s more, on top of that, the training that happens for the AI in our product is based on your data. Of course it understands what common data sets look like for organisations, but when it comes to your specific data, well, arguably you do want an AI which is biased, because it needs to be focused on what’s most important for you as an organisation.

Adam Myers

Yeah, yeah, definitely the blue side, coming from Manchester, just to put that out there! It’s very interesting you say that. I think we’re seeing a lot of that around data privacy as well. And going into our last topic: how can AI both help and hinder data privacy, particularly around risks like misuse and surveillance? Where are you seeing that from a Forcepoint perspective, and how has that developed?

Stuart Wilson

Yeah, I guess to take it from a theatrical perspective: there are TV series out there at the moment depicting how AI can manipulate video in real time, make changes, and so on, and we could all do that on our phones. We’ve just had this craze around creating action figures with ChatGPT as well. It just shows the extent of what can be done to manipulate. There are of course lots of other scenarios where deepfake videos and the like are being created, and the risk that poses to individuals: even if it’s obviously not accurate or real, the reputational damage is still done at that point. So look, AI can be a great tool for people’s benefit, but it can also be used on the not-so-great side of things, and ironically, to try and combat that, we’re going to be using AI technology to help. On the privacy side, think about organisations when things like ChatGPT came along. Maybe not what you mean by privacy, but organisations were uploading their data into these online language models. There were quite a few stories in the press where suddenly you’ve got public-facing chatbots trained on company data that hadn’t been sanitised or sanctioned, and internal secrets or internal information is being bled out of an organisation, just because somebody thought they were doing something useful.

Adam Myers

And I guess that’s the risk for a lot of organisations that have introduced AI: you might just be exposing sensitive company information into these sorts of models, and there’s a big risk there, isn’t there, from a compliance perspective, from a risk perspective, the bias. What can be done there?

Stuart Wilson

For me, there are three areas when you look at AI. First there’s shadow AI, and that’s a real big concern: every tool you’re using is introducing some form of AI, and every website out there has AI baked into it. Shadow AI, where users are just using these tools, is a traditional web security use case: make a decision on whether you want to allow or restrict those sites. If you allow them, that becomes sanctioned AI, if you like: you’ve made a decision to use it, but then you might want to start controlling the types of data that can go into that sanctioned AI platform, and that’s where you have a DLP-type use case. And then you get into adopted AI. That could be ChatGPT Enterprise, it could be Gemini, it could be Copilot, and that’s where you’re actually giving the AI access to your internal data. It’s closed, so it’s not necessarily going outside your organisation, but you’re giving it access to everything. If you don’t know what that data is, whether it’s in the right place, classified in the right way, with the right permissions, then in theory you’re opening that data up to everybody in your organisation. And again, there have been lots of stories of these tools surfacing HR and payroll data, pay information, to people who shouldn’t see it but have gained access to it. So that’s where DSPM and classification really come into the mix, because they help people adopt these brilliant technologies. They do have a purpose; I think some organisations need to understand how AI can actually improve their business and what the business value is to them, but there’s no doubt that there is benefit.
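The three categories above, shadow, sanctioned, and adopted AI, each map to a different class of control. A minimal sketch of that decision logic might look like this; the service names and the control descriptions are hypothetical and purely illustrative.

```python
# Hypothetical sketch of tiered AI governance: each AI service an organisation
# encounters falls into one of three buckets, and each bucket gets a different
# control. Service names and control wording are illustrative only.

SANCTIONED = {"approved-translation-ai"}  # approved for use, but DLP-inspected
ADOPTED = {"enterprise-copilot"}          # internal tool with access to company data

def control_for(service: str) -> str:
    if service in ADOPTED:
        # Closed to the outside world, but it sees internal data: the risk is
        # over-permissive access, so data posture (DSPM) is the control.
        return "check data permissions and classification (DSPM)"
    if service in SANCTIONED:
        # Approved tool: control what data is allowed to flow into it.
        return "inspect outbound content (DLP)"
    # Everything else is shadow AI: a web-security allow/deny decision.
    return "allow or restrict at the web gateway"

print(control_for("enterprise-copilot"))
print(control_for("approved-translation-ai"))
print(control_for("random-chatbot.example"))
```

The useful property of this framing is that the control escalates with trust: unknown tools are gated at the network edge, approved tools are content-inspected, and deeply integrated tools demand that the underlying data estate is classified and permissioned correctly first.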

Adam Myers

Yeah, we’re all trying to run at a fast pace; we’re trying to move quicker as humans to get from A to B, and maybe we’re not taking a step back to realise the risk that’s potentially posed there. You mentioned deepfakes; I think they’re getting more and more realistic, and they’re now quite difficult to spot. Are there any tips at all you can give people around deepfakes and how to spot them? I know there are some tells, like if you don’t see hands. Is there anything there for people who maybe haven’t been exposed to many?

Stuart Wilson

Yeah, crikey! Just look for realism, and for things that don’t quite look right: hands and fingers were the dead giveaways in some of the early generations of images. I mean, just be careful with your likeness, right? It’s all too easy for that to be out there and, unfortunately, for people to take advantage of it.

Adam Myers

Yeah, especially the voice side of things; that’s a really bad one, coming from a telephony background myself. I don’t know, it’s very difficult to spot these things now, especially over voice. Very interesting. So thank you so much for joining us, Stuart, that was really interesting. You’ll actually be at SecureTour, presenting a little bit around DRA, so is there anything you can expand on there?

Stuart Wilson

Yeah, absolutely. We’ll be covering AI in a bit more detail, and then DRA, which is a Data Risk Assessment, launching in a couple of weeks. You can access it via the Forcepoint website: just go through and register, and you’ll be taken into the automated DRA process. It’s a five-minute process to configure our technology to start looking at your data as a customer or prospect, and within minutes it starts building up that perspective on your posture and the risk to your organisation, based on the data you’ve given it access to. So it’s going to be really good; it’s a real insight into the DSPM technology and its capabilities, and I think it’ll be really interesting for people to see.

Adam Myers

Amazing. And I’ll actually be presenting the live hack around AI security as well, so stay tuned. We’ve done that quite a bit across the country, and we’ve got four locations: Belfast, Edinburgh, Manchester and Newmarket. So please join us, and check out our website; there are lots of dates available.

Thanks for watching this episode of Tales from the CyberLab. Please join us at SecureTour, and there’ll be a brand new episode next month.

But for now, Stay Secure