Video: Optimizing Your Insider Risk Program with Cyberhaven | Duration: 2480s | Summary: Optimizing Your Insider Risk Program with Cyberhaven | Chapters: Webinar Introduction (5.36s), Introducing Insider Risk (214.33s), Insider Risk Interface (326.62s), Risk Score Analysis (785.735s), Leveraging Insider Risk (1031.92s), Investigating Insider Risks (1347.3s), Risk Group Management (1736.705s), Q&A and Conclusion (1920.36s), Insider Risk Intelligence (1999.515s), IDIS Service Overview (2374.195s), Webinar Conclusion and Outlook (2430.275s)
Transcript for "Optimizing Your Insider Risk Program with Cyberhaven": Hey, everybody. Thank you for joining this webinar on optimizing your insider risk program with Cyberhaven. I see more folks joining in, so let's give everybody a couple more minutes so we have the audience to get this started. Super excited for this, because insider risk is a pretty big part of data security. In the meantime, let me go over some housekeeping rules. You can see the Q&A option in the webinar, on the left-hand sidebar. If you have any questions as we go through the content, please put them in the Q&A section. We have a team backstage looking at all your questions and making sure they get answered, and you will see those responses pop up as well. With that, let's give it a few more minutes; we'll kick off in the next three minutes or so. Looks like we have enough attendees, so let's kick this off. With the way data is consumed and distributed today, insider risk is a very big part of data security for all of us. Now, traditional insider risk is about a lot more than data: it's about behavior and a whole bunch of other signals. But focusing in on the data-related events for insider risk really helps add clarity to those signals. Any kind of movement of data, anything touching data, gives you the intent behind those insider risk signals. And that's what Cyberhaven is good at; that's what Cyberhaven can help you with. A lot of our customers use Cyberhaven as a noise filter, if you will, on their existing UEBA-based insider risk systems.
So with that, I would love to introduce Joshua Hillis, who will be walking you through how you can set up and navigate insider risk as it pertains to data security in Cyberhaven. Joshua is part of our analyst services team. He has been helping a lot of our largest customers, think Fortune 100s, set up and operationalize their insider risk programs. Joshua, all yours. Hey, everybody. Nice to meet you. Like Megan said, I'm Joshua Hillis, a DDR analyst here at Cyberhaven. I spend the majority of my time doing threat hunting and policy and dataset tuning as part of the ProServe team for my clients. Today we're here to talk about insider risk: what it is, how Cyberhaven addresses it as an issue, and how to use the existing features, not just for in-depth investigations into potentially risky users or risky data flows, but to build an insider risk program that automatically moves people toward enforcement postures whenever they need to be, and how to optimize your insider risk program further from there. At a high level, insider risk is the risk that people inside your organization, whether that's employees, contractors, executives, etcetera, will misuse or mishandle sensitive data, whether by accident or on purpose. Cyberhaven's Insider Risk turns your existing datasets and policies into a ranked, user-centric view that shows who's the riskiest, what data and behaviors are driving that risk, and where you should focus first as far as policy tuning, investigations, and more. So with all that being said, let me go ahead and share my screen. Here's the Insider Risk page, the default UI you're going to get. At the very top, you'll see a list of the insider risk groups that have been created. There are two types of insider risk groups: dynamic ones and manual ones.
So oftentimes for things like a watch list, where you have a user that maybe had a DLP incident in the past through negligence or otherwise, you can add them to this insider risk group here, and that will categorize them as such. We have other example groups here. If we look at this user, Steven, we can see that he's part of the sales department; specifically, he's a sales engineer. He's on a performance plan, he's associated with project ABC, and, of course, he's on the watch list. And on top of all that, he's a departing employee. Departing employees is a good example of one of those dynamic risk groups I mentioned. What we're looking for here, rather than a hardcoded list of users, is any instance where the employee termination date exists. This means that with your directory integration set up, if there's ever a user that has an employee termination date set, they will automatically be added to this risk group. So anything you do with this risk group will be automatically updated, and those users' risk scores will be adjusted accordingly. Beneath that, we have a long list of users sorted by risk score. We can see Steven's a clear front runner with a risk score of 55,000. As we go down, we have one at about 19,000, and Cole, Sitt, and Gritty at about 8,000. To understand what we're seeing here, we need to understand how the risk score is calculated and how it works as the foundation for the insider risk program. Simply put, each event that matches on a policy has a calculation run against it: it takes your dataset sensitivity value, the policy severity value, and the user risk multiplier, uses those to assign a risk score to the event, and that adds up over time.
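To make that arithmetic concrete, here's a minimal sketch of how a per-event score like this could be computed. The numeric weights, the field names, and the multiplicative combination are all illustrative assumptions; the webinar doesn't document Cyberhaven's actual scoring internals.

```python
# Hypothetical sketch of per-event risk scoring as described above:
# each event combines dataset sensitivity, policy severity, and the
# user's risk multiplier, and scores accumulate per user over time.
# All weights and the multiplicative formula are assumptions.

SENSITIVITY_WEIGHTS = {"unrestricted": 0, "low": 1, "medium": 5, "high": 25}
SEVERITY_WEIGHTS = {"informational": 0, "low": 1, "medium": 5,
                    "high": 25, "critical": 100}

def event_risk_score(sensitivity: str, severity: str,
                     user_multiplier: float = 1.0) -> float:
    """Score a single policy-matching event."""
    return SENSITIVITY_WEIGHTS[sensitivity] * SEVERITY_WEIGHTS[severity] * user_multiplier

def user_risk_score(events: list, user_multiplier: float = 1.0) -> float:
    """Accumulate (sensitivity, severity) event pairs into a user-level score."""
    return sum(event_risk_score(sens, sev, user_multiplier) for sens, sev in events)

# A departing employee (multiplier 4) moving high-sensitivity data
# against a critical policy scores four times higher than the same
# event from a default user (multiplier 1).
departing = event_risk_score("high", "critical", user_multiplier=4.0)
default = event_risk_score("high", "critical", user_multiplier=1.0)
```

Note how an unrestricted dataset or an informational policy zeroes the event out entirely, which is why the dataset and policy tuning discussed later matters so much.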
And so what that means is, if we come into Risk Overview real quick and look at it: if we have somebody interacting with corporate cloud storage, specifically SharePoint here, and it's going to IT admin utilities, that's a high-sensitivity dataset hitting a critical-severity policy violation, so the risk score associated with it is naturally going to be higher. Where insider risk comes into this is, let's say Steven was the one doing that activity. If we remember, in the departing employees group the termination date is set, and the risk multiplier down here is set to four, whereas the default is one. So his activity is deemed four times as risky, on top of the existing weight from the dataset sensitivity and the severity of the policy violation. What this does automatically is, whenever you're looking at these instances in Insider Risk, we know contextually that this user is on a watch list and is set to depart soon, and on top of that, we have a rough idea of what kind of data he should be able to access as part of a sales engineering team. So naturally, everything he does gets bumped further up, so you can see your highest-risk users and what got them there in the first place. If we expand this really quick, we get another part of the UI; this changes as you click on separate users. You can see a handful of things here, starting with their daily risk score trend. You can look for spikes in activity, especially for things like departing users. If you see sudden spikes of activity happening relatively recently, that could be cause for concern. Coming over from the risk summary, you can go to all user details and see the groups they belong to from the directory integration. You can see their title (sales) and manager email here.
You get their hostname, local user groups, and their local usernames as well. Then there's History: this is where you'll see if anybody has interacted with the risk score or cleared it for the user. So if I go over to this dropdown really quick, you can clear the risk score; that's the sort of thing that will pop up in History. Circling back real quick, though: if we scroll a little further down in the risk summary, beneath the daily risk score trend, we can see the risky data flows and get a very high-level idea of what's happening with this user. We see lots of source code going to IT admin utilities, and lots of source code going to unapproved source code repositories. If we click on this, it takes us straight to the Risk Overview page with the event data. If we look at the filters, it's filtering for the dataset source code, the policy unapproved source code repositories, and the user Steven, and here we can see all of that activity. You can also use this as another way to investigate a specific user. Let's say, for example, you log on and you don't remember seeing Steven in the list before. If you click on Actions, there's a handful of things you can do. If you click See Events, it takes you to Risk Overview much like the other one did; however, there will be no filters applied except for the user, so you'll be able to see all the activity that's contributing to their risk score. From here, we can also see the incidents associated with this. That applies the filter for the user specifically, and from there, we can see all the incidents being generated if we want a separate view from Risk Overview. From Actions, we can also add and remove them from manual risk groups.
So if I click this and wanted to add him to a list of risky users separate from flight risk or the watch list, you could do that and apply it pretty simply. And, of course, you can clear the risk score. If you use this one here, the user risk report, it takes you to the visual analytics page and shows this user's policy violations and dataset activity at a very high level. You can see the same risk score that was reflected in Insider Risk, with the same general trend lines. If we scroll down, we can see the number of files copy-pasted from or uploaded resulting in policy violations, the number of unique destinations data has been exfiltrated to, the number of datasets exfiltrated, and the number of exfiltration attempts blocked by Cyberhaven. Beneath that, you can see the user's policy violation severity by day. This generally mirrors the risk score pretty closely, but you get a very detailed breakdown of what's happening and driving that risk. And so if we look here, we can see they've got 52 low-severity policy violations and 51 that are critical, whereas if we look at their more recent activity, we see a lot of medium, informational, and high ones. Scrolling down a little further, we can see the policy violations by severity. This guy is our riskiest user in the tenant at the moment, so of course the majority of his activity is critical. Specifically, you can see every combination of policy severity and dataset sensitivity. So although this user isn't just interacting with high-severity policy violations, they're doing it with critical data pretty often. That's the sort of thing risk scores take into account to drive that evaluation.
Under user policy violations, you can see a breakdown of all the policy violations specifically, and over here, you get a more detailed table with the policy severity, the dataset involved, the sensitivity, and the count of unique events. Destinations are going to be different from your policies; these are the destinations that make up those policies. So looking at this, we're not necessarily seeing, say, external source code repository, but we're seeing Git and GitHub as destination locations. This helps you figure out what specifically they're doing, especially in broader policies like cloud storage, and sort through where the data is actually going. Then datasets used in policy violations helps you understand what kind of data they've been interacting with. Beneath this, you get a Sankey diagram of critical or high-sensitivity data flows. We can see that source code, of course, makes up the bulk of their critical or highly sensitive data flows, but beneath that, corporate cloud storage with SharePoint is a close second. You can hover over individual parts to see where these policy violations are going, and then there's a table at the bottom with all the details from those separate dashboards and components. So that was a lot, but I wanted to bring it up because understanding how the risk score is calculated helps you understand how to best leverage it in your environment. For example, we saw Steven interacting a lot with source code. Say he had 50 violations with source code going to unapproved source code repositories, and another user had 50 events, but they didn't involve source code.
Let's say they involved lower-severity engineering data and a lower-severity policy. The number of events isn't going to come into play as much, and Steven will still be regarded as the higher-risk user. So it's not irrespective of the quantity of events, but weighting each event by how sensitive the data is, how serious the rule is, and who the user is, is where most of the value is. Now, in order to get this to work for us, we need to make sure we're getting the building blocks right. Like I mentioned, the risk score is calculated as a function of the dataset sensitivity, the policy severity, and the user's risk multiplier. So you need to tune your inputs so that the scores reflect real business risk rather than just volume. What that means is, if you have any dataset that's sensitive, you need to make sure it's not marked as unrestricted, because if it's marked as unrestricted, the risk score for the event is very likely to be zero or very low. Similarly, if you have policies that have expected activity wrapped up in them and are more of a logging posture, you don't want those marked as anything but informational, because otherwise you'll be marking expected activity as risky, and it will start to pollute the risk score for those users while they're doing expected job functions. Policy severity and incidents are going to be big, and user risk groups are going to be one of the biggest parts of fine-tuning insider risk. Like we saw here, we can adjust the risk multiplier to four, which means every event a user in this risk group generates is deemed four times as risky, and so we start to see the riskiest users populate at the top.
Generally speaking, you can also filter users out of Insider Risk by leveraging the risk multiplier. If you notice a lot of service accounts in here, for example, you can add them to a risk group, drop the multiplier down to zero, and suddenly all the things they're doing, even if they're matching certain policies, won't be deemed risky. You'll see only live user activity up here. That being said, before we talk about how to leverage insider risk for policy enforcement and operations, it's important to know that insider risk functions as an overlay. Datasets define what the data is, policies define what behavior is and isn't allowed, and insider risk should define who is subject to which posture and when. With that in mind, I've got a couple of examples we can look at. One of the most common use cases I see is with departing employees specifically. This risk group gets populated whenever the termination date exists; that's the only thing we're looking for. So we can come to Risk Overview and think of a handful of things we want to stop a departing user from being able to do. The first one that jumps to mind for me is flows to USB devices. I want to make sure any departing user, or someone who's getting ready to leave soon, can't exfiltrate a bunch of data to a USB device and take it with them. Really quickly, we can go ahead and restrict this policy to departing employees. With the policy already set up with departing employees in mind and set to block, it's good to go. We can just leave it running in the background and have a separate monitor policy for employees that don't belong to this risk group, etcetera.
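The two tuning moves just described, dynamic membership keyed off a directory field and a zero multiplier that filters service accounts out of the ranking, can be sketched roughly like this. The field names, group names, and the way overlapping multipliers combine here are all hypothetical, not Cyberhaven's actual implementation.

```python
# Illustrative sketch of dynamic risk-group membership and an
# "effective multiplier". Field names and multiplication of
# overlapping group multipliers are assumptions for illustration.

def in_departing_employees(user: dict) -> bool:
    # Dynamic rule from the talk: membership is simply
    # "the employee termination date exists".
    return user.get("employee_termination_date") is not None

GROUP_MULTIPLIERS = {"departing_employees": 4.0, "service_accounts": 0.0}

def effective_multiplier(user: dict) -> float:
    m = 1.0  # default multiplier for users in no special group
    if in_departing_employees(user):
        m *= GROUP_MULTIPLIERS["departing_employees"]
    if user.get("is_service_account"):
        # A zero multiplier pushes automated accounts out of the ranking
        # even when their events match policies.
        m *= GROUP_MULTIPLIERS["service_accounts"]
    return m
```

The key point the sketch captures is that nobody has to maintain the departing-employees list by hand: the moment the directory field populates, the higher multiplier (and any policy scoped to the group) applies automatically.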
And what this means operationally is that once a user is set to depart, once that field is populated, you can automatically move them into a heightened security posture to control how they're able to interact with data and policies. Beyond USB devices, you can do this with other policies as well; anything you can scope with insider risk. Let's say, for example, you're particularly concerned about personal cloud storage usage with departing employees. That's a policy you can have set up, essentially waiting on anybody who's set to leave the company, without any manual configuration on your end. Beyond that, one thing I want to call out is that we have some risk groups in here that are department or admin groups, sales engineering groups, and so on. One thing you generally want to avoid is using this to create too many department-specific policies. Although you can do that, it's not an optimal way to approach things, like having unsanctioned cloud storage policies for your sales group versus your engineers, etcetera. You only want to use this to affect how things are being enforced and who's subject to that enforcement. Earlier, I mentioned moving USB flows over to a block policy. You could have a monitoring policy and a block policy, but those shouldn't be department-specific in most cases, just because you want to avoid forking policies constantly. There's a handful of other examples we can cover here, specifically for people on a watch list or people who have previously done risky things. You don't necessarily have to block that activity; you can use the same logic to switch them into a warning posture without having to manually configure much outside of the risk group.
You can also customize the response messages, and you can scope this to, let's say, our flight risk group. We don't want our flight risks doing too much in regard to unsanctioned chat applications, so we can quickly restrict them there. Another thing I've had success with in the past is policies like this one, logging downloads to the endpoint. This isn't inherently risky behavior, and as such, the policy is set to informational. However, if we have users that are a flight risk and we don't have content capture enabled for things like this already, one thing we can do is use this for increased telemetry on those specific users. There's going to be a lot of activity here, and you don't want too much of it in your S3 bucket or wherever content capture is going. But with this subset of users, because they're under increased scrutiny, it can be good to increase the telemetry available to you by enabling screenshots and content capture for the relevant users. This will help you drive and inform your investigations into any flight-risk users or particularly risky people. Looking at some other users for examples: if we go over to the events involved, we can start to get a high-level overview of what's going on with a particular user outside of Insider Risk and work toward enforcement for that user with that in mind. We know for a fact this user belongs to the sales engineering group. If we wanted to see what the sales engineering group was up to compared to the rest of the users in here, we can filter down by the insider risk group and start to investigate individual flows.
I'm noticing across the board with these users that, despite sales engineering maybe not needing direct access to source code for their job function, we're seeing a lot of things going over to IT admin utilities, and a lot of interaction with source code in general. Looking at this, we can start to get a high-level overview of what's going on with these users and inform policy decisions with that in mind, using the scoping we talked about earlier. If we're concerned about flows to IT admin utilities, we could switch this to a blocking posture and leave it that way for everybody, or, for the sake of this example, scope it to sales engineering and create a separate policy that only monitors your IT users. If we have an admin group, we can scope to that and make that policy informational, because for admins, that's just a logging event, whereas for everybody else, it's a pretty high-severity thing: why is somebody trying to get unauthorized access to IT admin utilities? With this sort of logic in mind, you can set up insider risk so your departing employees are immediately being enforced on, and you don't need to worry much about it. You can also set it up so your list here is populated with the riskiest users and riskiest activities, and you can look at continuous high-risk cohorts without polluting your Insider Risk page too much. Yes, admins logging into IT admin utilities have the capability to do a lot of harm, but it's also expected activity, and you don't want that pushing them too high up the list for doing expected job functions. With tuning like that done, you can look at only the anomalous risky activity that's consistently happening across the board.
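The overlay idea behind those examples, one behavior rule with the enforcement posture chosen by the user's risk group, can be sketched as a small decision function. The group names, posture labels, and priority order below are illustrative assumptions, not Cyberhaven configuration syntax.

```python
# Hypothetical sketch of "insider risk as an overlay": the policy
# defines the behavior, and the risk group the user belongs to
# decides which posture applies. Names and ordering are assumptions.

POSTURE_BY_GROUP = {
    "departing_employees": "block",   # enforce immediately once departing
    "watch_list": "warn",             # nudge previously risky users
    "it_admins": "informational",     # expected activity, just log it
}

def posture_for(user_groups: list, default: str = "monitor") -> str:
    # First matching group in priority order wins; everyone else
    # falls through to the default monitoring posture.
    for group in ("departing_employees", "watch_list", "it_admins"):
        if group in user_groups:
            return POSTURE_BY_GROUP[group]
    return default
```

Keeping the behavior rule single and letting group membership pick the posture is exactly the "avoid forking policies per department" advice from earlier: one rule, several postures, zero duplicated policy logic.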
As far as measuring and maturing the program as you go, what you want to look for operationally is things getting flagged as high-severity or highly risky activity that is actually expected. Of course, this user interacting with IT admin utilities is not supposed to be happening pretty much ever. But if we go to their events and filter on a policy like unsanctioned webmail: they're a sales engineer, so they're probably sending emails to different people as part of pitches or something like that. You can start to look through the centralized view here and get an idea of how to tune that out with the methods we talked about earlier. Like I mentioned, you don't want to fork policies by department too much. That doesn't mean it's a bad idea to have your different departments in Insider Risk, because if nothing else, that gives you extra user context and things to sort on and look through. Generally speaking, I would try to make it so insider risk groups can reasonably define scope and access. So if you have a bunch of different kinds of salespeople that have access to different kinds of data or demo environments, you'll want to break that up by levels of access, scoping it so it's not just based on title, if that makes sense. All in all, in short, Insider Risk lets you use the data and detections you already have to rank users by risk, investigate them in context, and run targeted playbooks for your most important insider risk scenarios, without having to set up too much in the way of policies to triage activity as it's happening.
As far as other use cases I've seen that work well for insider risk: finding remote users and putting them in their own group is a good group to have. Oftentimes you're going to see a lot more, or a lot of different kinds of, activity, and remote users shouldn't be interacting with USB devices as much, etcetera. Of course, a lot of it is going to be individualized to your organization, but with all that said, let's check the Q&A. That's just about everything I had prepared on operationalizing insider risk, so I'm happy to field any questions and start working through the Q&A. Can a certain sequence of events trigger users to be automatically added to a high-risk group without an analyst manually adding them? Unfortunately not. As far as ways to spin up a dynamic risk group, we can look at the fields available to us here, and there isn't anything that looks at an amalgamation of incidents piling up and puts users into a high-risk group; it's only going to hook up to your directory integrations and things like job title, region, etcetera. That said, thresholding is something we've been looking at for a while. Next question: once you select the departing employees group, the policy changes will apply only to those departing users, and no other users or groups will be affected, right? Yes. If I'm understanding your question correctly: if I come in here, scope this to departing employees, and save that, then anyone who isn't a departing employee isn't going to match on this policy anymore. It's looking specifically at that subset of users. Does the Insider Risk page show a change in the score? It seems like it would be really helpful to have an at-a-glance number that shows this person's score increased by 100k in the past time period.
Right now, it seems like I just have to remember whether someone was on the list before or not. Unfortunately not. The closest you get to that is the risk score trend; however, the trend by default only looks back over ninety days. Generally speaking, you'll see spikes in activity there. Next, the question was: can you share sample use cases for why you would have a higher or lower risk multiplier for some groups? I covered that earlier, but just to cover the top examples: if we had a lot of service accounts popping up in this list, generally speaking, that's automated behavior that isn't necessarily driving much in the way of insider risk. So if you have those service accounts in an insider risk group, you can drop the risk multiplier to zero, and that keeps them out of the top of the list inherently. We can see here that System is very low; you wouldn't want System popping up at the very top for doing expected stuff. Conversely, if somebody's on a watch list for having previously mishandled deal data, you could put them into a risk group with a higher multiplier, and they'll pop up here more often. One thing I've seen before that ended up working out quite well: in the incidents, people noticed that some users were giving very poor justifications in the pop-up dialog for policy violations, so they made a risk group for poor-explanation users. That gives you a subset of unruly users that don't really seem to care for the platform as much, and gives you something to focus on for investigations. Are there any features of insider risk that are limited or not available if admins use a manual group instead of a dynamic group? No.
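Circling back to the question about seeing score changes at a glance: if you can export each user's daily risk score trend, computing that delta yourself is straightforward. The data shape below (a list of per-day score totals per user) and the threshold value are hypothetical assumptions for illustration.

```python
# Sketch of the "score change at a glance" the question asked about,
# assuming you can export each user's daily risk score trend as a
# list of per-day score totals. Data shape and threshold are
# illustrative assumptions.

def score_delta(daily_scores: list, window: int = 7) -> float:
    """Risk score accumulated over the last `window` days."""
    return sum(daily_scores[-window:])

def flag_spikes(users: dict, window: int = 7,
                threshold: float = 10000.0) -> list:
    """Names of users whose recent score gain exceeds the threshold."""
    return [name for name, scores in users.items()
            if score_delta(scores, window) > threshold]
```

Run against yesterday's export versus today's, this gives you the "increased by 100k in the past time period" number the questioner wanted, without having to remember the list by eye.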
Once you have the risk groups set up, you can use them in all the ways I was talking about as far as scoping policies and adjusting risk scores. They function the same once they're created. The only thing about manual risk groups is that there's no way to automatically update them, whereas dynamic groups, because they're looking at directory fields, will automatically update with things like the termination date populating. Can user roles be restricted so that only certain team members have access to this section? Yes. Like most sections of the console, you can control who can access what through user roles. Cool. I saw one question in the chat at the very top, someone asking, is it just me? Doubt it. But that's just about everything I had prepped for today. If there aren't any other questions, I'd be more than happy to kick it back over to Megan. Thank you, Josh. Thank you so much for walking us through insider risk as it pertains to data security and what we can do to get that started inside Cyberhaven. I saw a lot of engagement and questions from our attendees; thank you for that. If you have further feedback or questions, please feel free to email your CSM, and we can make sure those get answered as well. Now I want to talk about something that's very exciting for us and what it adds to your insider risk program. Let me get started by sharing some slides; give me a second here. There we go. What we've realized as we walk through insider risk, especially as it pertains to data security, is that we actually do not have a standard insider risk framework like we do for some other aspects of cybersecurity.
If you think of EDR or other areas, you have frameworks like NIST, and that's something that's missing in insider risk. So if you look at our current Cyberhaven services offerings: all of you are familiar with, and some of you actively use, our analyst services, where our analysts work with you as an extension of your team, help make sure your Cyberhaven environment is up to date, help you with threat hunting, and all that very useful and critical stuff. What we wanted to do was add a programmatic layer and an intelligence layer on top of the analyst services we're providing. We call it the insider risk intelligence service. What this aligns toward is making sure your insider risk program is mature and going in the right direction, and also giving you a way to stay ahead of what's happening in the space. We all know, especially with respect to AI, the landscape is rapidly changing. New threat vectors are evolving in how, when, and where data is shared, and even by whom. So we wanted to offer that intelligence layer on top, and that's how Iris was born as a service offering. Now, what does Iris provide? First of all, we want to leverage the telemetry and insights we have from our own customer base. What are we seeing in terms of new threat vectors? What is the popular flavor emerging as an exfiltration vector for insider threat? What are some of the newer patterns we're seeing? We package that into what we call update packs that are provided to you if you subscribe to the service. So whether this is a today problem in your environment or not, you can be assured you're covered, because we're doing that research for you. We're taking all the top trends, codifying them into Cyberhaven policies, and making those available for you to install and fine-tune.
We're also going to be actively looking at insider threat patterns. You'll see a few terms here, like threat actor DNA. Essentially, we're looking at how we can classify people into different insider risk profiles; I'll touch on this in the next slide. As part of this service, we also want to provide quarterly maturity assessments, where we bring in consultants to assess your current insider risk program, produce a score, identify areas of improvement, and lay out what needs to be done over the next quarter or two to move the needle, so that the analysts you're working with can pick it up from there and help operationalize that assessment and feedback. And, of course, all of this is backed by our Cyberhaven Labs team, which is heavily involved in researching new and emerging threats around AI and data security. We're able to interface with that team, get the latest and greatest, and roll it into those update packs as well.

A good example: we all know CloudBot became a bit of a concern for most of us as it gained popularity. In terms of visibility and controls, how do you identify whether CloudBot is running, and what are the risk factors? All of that would come out of our Labs team and be codified into the update packs you'd have access to.

Now let me double-click into the threat actor DNA part. This is something unique that we'll be offering as part of the Iris service. If you look at insider risk and the patterns you can see, different kinds of threat actors emerge. Somebody who's a flight risk: what are their data footprints and signals? What are they doing, and do I see that in my environment? What about somebody who adopts every technology out there to make their job easier but opens up a lot of threat vectors for you? That's the maverick.
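One way to picture "threat actor DNA" classification is as scoring a user's behavioral signals against archetype profiles. The archetype names (flight risk, maverick) come from the talk, but the specific signals, weights, and scoring function below are entirely invented for illustration; this is a minimal sketch of the concept, not Cyberhaven's actual model.

```python
# Illustrative only: signals are assumed to be normalized to 0..1,
# and the weights are made-up values for demonstration.
ARCHETYPES = {
    "flight_risk": {
        "resume_site_visits": 0.4,
        "bulk_downloads": 0.4,
        "personal_cloud_uploads": 0.2,
    },
    "maverick": {
        "unsanctioned_apps": 0.5,
        "genai_paste_events": 0.3,
        "personal_cloud_uploads": 0.2,
    },
}

def dna_scores(signals: dict) -> dict:
    """Score a user's normalized signals against each archetype profile."""
    return {
        name: round(sum(weights.get(k, 0.0) * v for k, v in signals.items()), 3)
        for name, weights in ARCHETYPES.items()
    }

user_signals = {
    "resume_site_visits": 0.9,  # heavy job-board activity
    "bulk_downloads": 0.7,      # unusual download volume
    "unsanctioned_apps": 0.1,   # little shadow-IT usage
}
print(dna_scores(user_signals))  # flight_risk scores far higher than maverick
```

In this toy version the report is just the per-archetype score, and the highest score suggests which profile the user most resembles; a real system would obviously use far richer signals and modeling.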
Threat actor DNA is one of the things we'll offer as part of the service: we go through all the users in your Cyberhaven environment and provide a report on the likelihood that each one aligns to one of these DNA types. That becomes something actionable you can take in, ingest, and use to put better safety guardrails and policies in place on your end, or you can work with our human analyst service to do that for you.

With this, you'll see an "Interested in Iris?" button in the top right corner. If you want to learn more about what the service entails, go ahead and click it and fill out the form, and your CSM will be in touch with more details. If your insider risk program is nascent, or if you have an established program and want to make sure the data security piece fits well with it, this is a great option for you. And if you're worried about whether you're even covering what you need to cover, what the latest trends are, or how to stay ahead of risks that are developing so quickly, this service is the option for you. So if you're interested, please click "Interested in Iris," and we'll get in touch with you.

With that, thank you so much for attending this webinar on insider risk; I hope you found it useful. Our next Unlock webinar is going to be on data security posture management, which is very exciting for us because it's something new we released recently. It's gotten great feedback from our customers and a lot of traction, and we're very excited to show you what the product looks like today and some of the things you can do with it. Thank you so much, and I hope to see all of you next time. Take care.