Video: Cyberhaven Unlocked: Fast Tracking Your Success with Cyberhaven DSPM | Duration: 1:01:42 | Chapters: Welcome and Introduction (0:40), Data Fragmentation Problem (2:14), Understanding DSPM (6:53), DSPM Configuration Setup (11:02), Custom Label Creation (26:57), Future Roadmap (39:40), Closing & Next Steps (58:40)
Transcript for "Cyberhaven Unlocked: Fast Tracking Your Success with Cyberhaven DSPM":
Good morning, good afternoon, and good evening. Welcome to today's installment of the Cyberhaven Unlocked series. My name is Andrew Boyd. I'm a principal customer success manager. Joining me today are Andrew Gordon, senior technical account manager, and Amy Gurel, director of professional services. In this session, we'll be providing a practical, high-level overview of Cyberhaven DSPM and how it helps you discover, classify, and protect sensitive data across your environment. We'll walk through the core capabilities of the platform, and then we'll go a little deeper on one of the most important building blocks of an effective deployment: designing and managing data types. In this session, you'll learn how to think about data types in the context of your business, learn more about the out-of-the-box types to start with, and see how to create custom data types that align with your compliance, IP protection, and insider risk priorities. We'll also share some real-world best practices from customer deployments so that you can avoid common pitfalls and get to meaningful coverage faster. So please feel free to leave questions in the Q&A section of the chat. Helping us out today with answering those questions throughout the session will be Amit Agarwal from our product management team. And, of course, we'll follow up via email with any unanswered questions. So data has broken free. AI is changing the world, and you can't do AI without data. Data broke the file perimeter. Data moves in fragments without the file tags, metadata, or other signals that help identify patterns. Because DSPM was built to reason about files, and the most sensitive data risk is now in fragments that no longer live as files, a file-centric model simply can't see or control the real risk. So when the file perimeter breaks, sensitive data leaves its container. Data no longer lives neatly in tagged and labeled files. 
Instead, fragments and derivatives flow through your organization, copied into documents, spreadsheets, and AI systems, making it harder to control. The old DSPM assumptions fail. Classic DSPM and classification approaches assume you can scan storage, label files, and reason about risk from those labels. Once data is copied into email, Slack, screenshots, AI prompts, bots, etcetera, the fragments evade controls that depend on file tags, labels, and static locations. Most exfiltration is now invisible to file-based tools: 80% of data exfiltration involves fragments and snippets, not complete files. So tools that only understand files and file locations miss the majority of real incidents and can't answer core DSPM questions accurately. If you only see files at rest, you get an incomplete view of where sensitive data is, who has access to that data, how it has been used, and what the security posture of the data is across the environment, which is exactly what modern DSPM is supposed to provide. One additional challenge that folks are seeing today is that data is multiplying. Derivatives of that data are created, combined, and multiplied. As data multiplies, DSPM's job gets strictly harder on every axis. More copies and derivatives equals more attack surface. Sensitive data no longer lives in a few known stores. It's copied, transformed, summarized, and embedded into workflows across the cloud, SaaS, endpoints, and AI tools, so security teams are struggling to even discover all of it, let alone control it. Fragments lose labels and context. People and AI work in snippets. They copy-paste into slides, chat, prompts, etcetera. Those fragments of sensitive data are multiplying and flying under the radar because there's no metadata or labels on the snippets, which means that traditional file- and label-centric discovery and classification simply never see a huge portion of the real risk. Most risk now lives in these fragments. 
Our Cyberhaven Labs team data shows that greater than 80% of exfiltrated data consists of fragments, like parts of strategic plans or acquisition details, moving quietly through browsers, SaaS, and AI workflows without triggering file-based controls. And your posture goes stale almost immediately. In a world where data is fragmented, constantly moving, and increasingly entangled with generative and agentic AI workflows, any DSPM that relies on periodic scans or static inventories will miss newly created copies, shadow data, and derived content. The risk surface is changing faster than the scans. And finally, governance and access drift explode: as copies spread into new systems and tenants, access widens and permissions drift. Without continuous, context-aware DSPM, you can't reliably answer where is our sensitive data, who has access, and where are we exposed. So if you want to do data protection in this new era, you need three core things. A holistic view of the data, so data in motion, at rest, wherever it lives. You also need an autonomous solution that is contextual and predictive. It should mirror how you use data and understand the context, understand normal versus abnormal, have a deep understanding of context to the point of being predictive, and have autonomous capabilities to take action on that understanding. And then it also needs to be a frictionless experience for end users and admins. So before we dive into what Cyberhaven does, I want to take a step back and do a quick level set on DSPM. Because depending on who you've talked to, you may have heard a very different definition of what it actually is, and that's not a coincidence. DSPM, or data security posture management, is still a relatively young category. Gartner coined the term in 2022, and in the years since, the definition has been a moving target. At its most basic level, the high-level definition is pretty simple. 
It shows you what data you have and the risks around it, whether that's structured or unstructured, static data sitting in storage, or databases. But as the market matured, analysts started expanding what DSPM should actually mean. IDC's take pushes further. They defined it as a tool designed to give you visibility into sensitive data across cloud environments, monitor your security posture, and, critically, detect risks before they escalate into breaches or compliance violations. That's a more proactive framing. Then Gartner came back in 2025 and refined it again. Their updated definition goes even further. It's not just about finding data and flagging risk. It's about understanding where sensitive data lives, who has access to it, how it's being used, and what the overall security posture of that data looks like. Assess, implement, control, and continuously monitor. That's a meaningfully broader mandate than where we started. The takeaway here is that this space is still shaping itself. The goalposts keep moving because the problem keeps getting more complex: more cloud, more AI, more data sprawl. And that's exactly where Cyberhaven comes in. Because we didn't just build another DSPM tool that checks those boxes; we've taken this concept a step further by pairing deep data discovery and posture management with our core DLP and data lineage capabilities, AI security, as well as insider risk management. The result is something more complete than anything the category has produced so far: a single, unified platform that doesn't just tell you where your sensitive data is, but actively protects it across its entire life cycle. This is the Cyberhaven unified AI and data security platform, and the key word here is unified. Most organizations today are stitching together four or five different point solutions to try to answer the question, where is my sensitive data, and is it safe? 
They've got one tool for posture management, another for DLP, something else for insider risk, and now they're scrambling to figure out AI security on top of all of it. And the result is gaps, blind spots, and alert fatigue. Cyberhaven's approach is fundamentally different. We brought DSPM, DLP, insider risk management, and AI security together on a single platform, all powered by the same underlying AI engine. That means these capabilities aren't just sitting next to each other. They're actually sharing context. And that's where data lineage becomes a secret weapon. Because our AI engine doesn't just find sensitive data, it understands where that data came from, how it's moved, who's touched it, and where it's going. That depth of understanding is what makes our classifications accurate, our detections meaningful, and our investigations fast. The platform covers your entire environment (SaaS, PaaS, IaaS, on prem, the browser, endpoints) with both agentless and agent-based deployment options, and both policyless and policy-based controls. So whether you're just getting started or you have a mature security program, Cyberhaven fits where you are. The bottom line: this isn't four products bolted together. It's one platform that sees your data holistically, at rest, in motion, and in use, and gives your team the visibility, context, and control to actually act on it. Alright. So now we're gonna get to the fun part that you've all been waiting for. I'm gonna turn things over to Andrew Gordon, who is going to take us on a deep dive through the DSPM platform. Thank you for the intro, mister Boyd. I can go ahead and share my screen, and we will push forward. Alright. So to begin here, this is the main splash screen that you know and love as a customer, with our DDR visuals. And I wanted to start from here and clarify what we do in DDR and how the work that has been done is truly going to benefit from the value DSPM brings. 
DSPM is bringing a heat map to the puzzle here. So instead of reactively monitoring the traffic, watching the lineage, and understanding where our data is flowing, we now have another perspective from the data itself and where that information exists today. So not only getting proactive and blocking with some of these controls, but understanding where some of that egressed data is already living outside its area of control. But to begin, as far as the DSPM journey, we all start at Add Connectors under the Connectors pane in the product. Listed here, you will see all the active connectors that we have available today. Across the board: SaaS products, infrastructure as a service, on prem. We've got all different types of connectors, and we should continue to see this list bolster substantially as we move forward. As far as how these are connected, they're all a bit different in their permissions and makeup. We'll click into that in a moment. But the reality is, when setting up, for example, Amazon S3 or Azure Blobs, we need an IAM role with read access; for M365 services like Exchange, SharePoint, and OneDrive, we would need Azure AD credentials scoped to read only; and so on and so forth. But the good thing about each of these items, even though they may have different requirements to set up, is that the product will walk you right through the process. So we're gonna go ahead and poke into the SharePoint process. I'm gonna leverage an existing SharePoint connector we have and open the configuration, which will mirror what you experience in the UI as you go ahead and set up the initial authentication. So we begin at the scoping screen. Very important as we push forward with DSPM. The knee-jerk reaction from a lot of first-time users or new DSPM entrants is to essentially expand the scope to everything. And the problem with that is several-fold. It doesn't give you the ability to test and tune. 
It doesn't give you the ability to focus on priorities and prioritize your movements to address your findings. And it's possible you may be inundated with data right up front, which is not a position you want to be in. So today, we have three options that you can move through. All sites is obviously for the more mature state: we've tested and tuned, we're comfortable with what we're finding, we've focused on our high-risk areas, and now we're ready to expand that scope and visibility. Moving down the line, we have select manually. And in reality, what you would start with is some of your higher-priority sites, where you know sensitive data would exist. The reason for that is a bit counterintuitive, as you might think: why are we looking in places that we know contain this information? But as we test and tune upfront, we want a good dataset to understand how our internal labels and identifiers are being found in files, that they are accurate, and that we're properly approaching that information. So starting with known repositories that have a good, solid dataset of what we can experience in the environment helps allow that tuning procedure to occur. Within the manual selection, we have include or exclude. So, different flavors for different specific use cases: maybe excluding sites that you know have just public marketing information, things of that nature. We don't need to scan it. We don't need the noise. So you've got some granularity there. And then moving to the right, for more specific use cases, you can use regex to create a list or a more advanced detection of which sites you might want to approach. Moving down the line: once we have the scope included, we then move on to determining what type of data we want to look at, with two options specifically. The first option you see is forward scans. 
This is one differentiator we have in the market that does not exist today out in the DSPM ecosystem. Elsewhere today, these processes are run on a scheduled basis and sometimes can last ninety days plus before information is rescanned. That is not how our system works. As soon as that file hits the ground in the cloud repository, an API event is generated, and that information is transferred back to the system to queue it for discovery. So very powerful; it keeps a pulse on the regular incoming and changing data as that persists. Below that, we have historical scans. You'll see it's currently set to ninety days, and I do want to recommend this ninety-day mark or less as you initially start to test out the application and go through the tuning process upfront, to ensure we have accuracy and you are getting what you expect out of the system. So beginning with ninety days, that is the recommendation. And as you start to consume that information and action some of it if required, that's when we would come back into this last-modified-date toggle and start to increase that scope. But, again, be careful with this. This can inundate you with data. The process here is to start small, monitor, understand, tune, and then expand from there. And then we move down the line to advanced settings, where you have some additional granular options as far as file size and content, the max size of items you want to inspect, specific file types you do or do not want to scan, and a more manual exclusion if you've got specific paths or anything of that nature you want to prescribe, whether include or exclude, as well as the ability to tie in an evidence capture bucket so you can copy sensitive information as required into a repo for investigation. The last piece of the puzzle is essentially the review here. 
Understanding what scope you chose, the frequency, and, more so, the historical scan settings. How much data are we going to look at as far as the modified timestamp is concerned? Do we want to look at new incoming data as it is either adjusted or added to the system? And all of the file type exclusions, paths, things of that nature to help cultivate a smooth transition. Alright. So we will circle back with a slide on the connectors to show you what is coming down the pipeline. But we'll do that a little bit later on in the session, and we'll take Q&A from there. So now we've gone through the setup of the initial connectors. And, again, to recap: we want to start small, create the authentication paths, start with a minimal scope, and test and tune from there to ensure you're getting the accuracy you require. And as we get to the later part of the session, I'll talk a little bit more about why that is so important with tuning when it comes down to custom data types or labels and some of the policies that you will create. Alright. So we have our connectors set up. Once that information is in and you hit that save button, it will queue up that object for scanning. So after that, there's nothing left to do but sit back and wait for the results to come in, and then file through those and understand if any tuning or reconfiguration might be needed to better the experience on that side. So moving down the line, a big piece of the puzzle that we have here is our classification engine, and this will be a totally new concept to our existing customers. Essentially, we are leveraging what we call Cyberhaven labels, and this is an intelligent layer that makes everything else in DSPM work. Labels are organized into six categories, each answering a different question about your data. So beginning up here, we have the data types, and this answers what kind of content this is. 
So we've got 44 out-of-box labels here covering everything from financial documents and employee records to CAD drawings and code. And the great part about these AI-driven data types is you have the ability to test and tune them to your own advantage. So, for example, if we see some of these out-of-box labels not performing as we would expect, we can go ahead and add logic into the algorithm and test it on the fly to confirm that our adjustments were fruitful. So very powerful and definitely a differentiator in the market space: being able not only to have this AI backbone and classification, but also to affect its behavior and detection without having to request an update or anything similar in that nature. Then moving down the line, we have what is called data provenance. This answers where did this data originate and who owns it, with categories like internal, internal individual, external private, and external public. This is critical for understanding third-party data risk. Did this file come from an employee, a customer, an external vendor? These are all important pieces of information. And what we do is leverage these in our policies, not only to identify whether it's a sensitive piece of information and what the purpose of this document is, but also who essentially owns it. Where did this information come from? That's what data provenance helps to answer today. And moving down to data patterns, answering the question: does this file contain specific structured sensitive data elements? This is more of your standard understanding, using regex, pattern matching, and a lot of the algorithms that are available in DDR today and were available in the past. So we're still leveraging some of that pattern matching, and you'll see a lot of these align with the content matching rules in DDR today. 
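To make the data patterns idea concrete, here is a minimal, illustrative sketch of how a structured pattern like a credit card detector typically works: a regex finds candidate numbers and a Luhn checksum filters out false positives. This is not Cyberhaven's actual rule syntax (those patterns are configured inside the product); the regex and function names below are hypothetical.

```python
import re

# Illustrative only: a simplified credit-card pattern of the kind a
# "data pattern" label might encode. 13-16 digits, optionally separated
# by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(candidate: str) -> bool:
    """Validate a candidate card number with the Luhn checksum."""
    digits = [int(ch) for ch in candidate if ch.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return substrings that look like card numbers and pass Luhn."""
    return [m.group(0).strip() for m in CARD_RE.finditer(text)
            if luhn_ok(m.group(0))]
```

The checksum step is why structured patterns can stay quiet on random 16-digit strings while still firing on real card numbers.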
But, again, it's an additional data point, an additional layer; in combination with these other labels, we really start to understand where this information is coming from, what it is designed for, and what information might be within. Moving over to data sensitivity: this is essentially rolling up the types and the patterns and allowing you to evaluate a risk score, and these can be leveraged to align with your existing classification scheme and your organization's understanding of risk. So you'll see here, I hit the edit button on our critical sensitivity label. We have out-of-box settings here to give you an example of what good could look like, but you are fully able to edit and construct this to match your business's use case. So definitely fully customizable. And this is a piece of the deployment that you're going to want to get done upfront: taking a look internally at your classifications, policies, and things of that nature, reassessing the evaluations here and the defaults that are set, and right-sizing it to what your organization deems as critical, high, and medium. A big piece of the puzzle we want to get out of the way upfront. And then moving back to the top here, we have what is called data protection scope, and this is where the compliance frameworks come in. You'll see PCI, HIPAA, GDPR. And, in a similar nature, we have the ability to edit these, and we have a default of what good could look like. But as your organization moves forward, this is all open and subject to change as you see fit. Some of these might be a little more straightforward; for others, maybe you're creating custom data types where you want to include some additional identifiers that you've created in the system that you'll be detecting moving forward. So, again, fully customizable. 
We have that out-of-box structure, so it doesn't need to be a manual process by any means. But, again, you have the ability to essentially toggle and adjust this to right-size it. Alright. So as far as our AI labeling, the big piece of the puzzle that I definitely want to show, and walk through a demonstration of how it will be leveraged, is the creation of an AI label. Data types is going to be the big one that we are focusing on here. So we'll go ahead and pop this open. And up top, you'll get the AI definition, and all of these out-of-box labels will have this information: what is intended, in human language, and what does not apply. Right? This is important as you go to test and tune. If you need to add specific inclusions and exclusions, you have the ability to set grounding as to what to expect from it and add to it as you see fit. We're going to walk through an actual example here in just a moment to describe this process down below, as well as the upload, so I want to go ahead and set the stage for that. What I've done here is recreate a scenario that any organization is likely to go through upfront. Right? There's a lot of great information here as far as labels across the spectrum, from different perspectives, to help us identify and properly cultivate risk profiles. But the reality is we might have specific proprietary information. We might have different needs that are not included in the list today. So we'll go ahead and walk through the process of creating a label just to show you how simple this is to do. For my use case, what I determined would be helpful is security questionnaires, which have a lot of sensitive information. That information is transmitted to partners, to vendors, and to customers in some cases. And, inherently, there can be a lot of risk associated. 
So one thing that I'd like to do in the DSPM system for my organization is create a label that is going to identify that information. And to begin, the first thing that I recommend is taking a live sample, and I'll quickly show you what I started with here. I created some dummy content, and I took this dummy content and fed it into an AI engine. So an AI engine of your choice, hopefully an approved one. Essentially, I fed some information in and said, I want you to create a summary of this information for detection. Keep it broad. I want this to be wide-ranging and to capture any type of audit report that we are leveraging. And that is it. I created a three-to-six-sentence description. And at that point, I moved forward and leveraged the system we have here. So I'll go ahead and paste that in. What we have here on the left-hand pane is internal to the system, but this is AI to help reformat the summary of information, and it essentially speaks the same language as our application. The reason for that is our application today is currently using CEL. So you have the ability to do this in a more manual fashion if you so choose. But what this will do is take a real-world, human-language description and port it into more programmatic input that our system can more easily work with. And I'm going to go ahead and first click 'label should'. I'll add the description and have the AI produce it, and you'll see a chat window here, and we should see that information begin to populate on the right-hand side, and you will start to get an understanding of how that will propagate. So if you look on the right-hand side, this is the naming of this definition line item, and then we have an AI prompt that is created. And this is essentially what we are running off of to formulate additional testing. You can add as many line items as you need. 
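To visualize what those definition line items amount to, here is a hypothetical sketch in Python of a custom label with named conditions and their AI prompts. The field names (`name`, `prompt`, `should_match`) are illustrative assumptions, not the product's real schema; in the platform the conditions are managed as CEL-backed definitions.

```python
from dataclasses import dataclass, field

# Hypothetical representation of a custom label's definition line items.
# Field names are illustrative, not Cyberhaven's real schema.

@dataclass
class LabelCondition:
    name: str                   # short name shown for the line item
    prompt: str                 # generic, open-ended AI prompt
    should_match: bool = True   # False models a "label should not" exclusion

@dataclass
class CustomLabel:
    title: str
    conditions: list = field(default_factory=list)

    def add_condition(self, name: str, prompt: str,
                      should_match: bool = True) -> None:
        self.conditions.append(LabelCondition(name, prompt, should_match))

label = CustomLabel("Security questionnaire")
label.add_condition(
    "Completed questionnaire",
    "Document contains answered questions about security controls, "
    "audits, or compliance attestations.",
)
label.add_condition(
    "Blank template",
    "Document is an empty questionnaire template with no responses filled in.",
    should_match=False,
)
```

Note how each prompt stays generic and open-ended, mirroring the best practice described above: specifics from the sample get abstracted away so the condition generalizes.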
So as you find reason to create some additional conditions or exclusions, you have the ability to further tailor, tune, and test the information. Going ahead, I'll save this label once I give it a name. Let's just say security questionnaire, and we can go ahead and save. And it looks like it did not like the length of our description here, so let me go ahead and revise and have this reproduce. Alright. There we go. A bit of a hiccup with that first pass, but this is what you should expect moving forward, and this is actually going to be best practice as you create these: you'll see the prompts broken up into more individual conditions. And the thought process here is, looking at our statements across the board, this is the profile of what we're looking for. And you can see in the description, I had some details there. Right? It had some specific platform-related details, sizing details, but the descriptions crafted here are deliberately not so. These are generic, these are open-ended, and these are going to be feeding the definitions. Alright. So we've created that. Now we want to test. You can do it from that screen if need be, but it's always best to save your work and iterate from there. I'm going to add the questionnaire, the same one that we looked at earlier, pulled up in the browser, and I'll go ahead and upload it. So what we've done here is quickly craft a label to identify any security questionnaires that come across the organization. And you'll see here on the right-hand side, it did detect it, but we also have other pieces of information that were identified as possibly related. Right? So not just the label in question that we're looking for, but some additional context that may be helpful for future iterations. 
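The upload-and-test loop just described can be mimicked with a toy harness. In the product, the engine evaluates AI prompts against the uploaded file's content; in this sketch each label is reduced to a simple keyword predicate, an assumption made purely to illustrate how one sample can match the target label plus other, possibly related ones.

```python
# Toy stand-in for the test-upload loop. The real engine evaluates AI
# prompts; here each label is just a keyword predicate (illustrative only).

def classify(text: str, labels: dict) -> list[str]:
    """Return the names of all labels whose predicate matches the text."""
    return [name for name, predicate in labels.items() if predicate(text)]

labels = {
    "Security questionnaire": lambda t: "questionnaire" in t.lower(),
    "Audit report": lambda t: "audit" in t.lower(),
    "Financial documents": lambda t: "invoice" in t.lower(),
}

sample = "Vendor security questionnaire: completed audit responses attached"
matches = classify(sample, labels)
# The sample hits the target label and one related label, but not the third.
```

That multi-label result is exactly the extra context mentioned above: the additional matches can hint at refinements for the next iteration of the definition.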
So let's say we've created this, we've tested it, we've thrown a few samples at it, and we're comfortable with what it's finding. The next step is to deploy. We've deployed this for a scan, and, oh look, there's a false positive. How do we go about reworking our definition here to account for that? The scenario that I walk through here is essentially: what if, in our organization, we have a blank questionnaire? That's not critical. That's not risky. Why does that information need to be identified as such? So how do we exclude that behavior moving forward? Essentially, what we're going to do is tune that noise out and ensure we alleviate that from detection. So I'm throwing the template in and quickly analyzing just to show you that it's able to assess what's there, and it will detect it, and should detect it. And then we want to tune that noise out and test again once we have the go-ahead. So, false positive registered, and I'd expect that to continue to be a false positive in the environment. Let's go ahead and tune that noise out. So this is the output from our initial human-readable definition that we wrote. We'll go ahead and hit 'label should not'. And a little verbiage here; make sure it's clean. And, again, I'm going to push this definition. For this definition, I did the very same thing. I took this empty questionnaire file, loaded it into chat, into Claude, and had it create a definition of what information should be contained. That is all I did. The very same type of prompt generated that, and I put it into our left-hand side to convert it to speak Cyberhaven AI language. So you'll see here we've got the name of the definition and the prompt itself. And we'll go ahead and save. And let's test that again and see if our efforts were fruitful. So we've gone ahead. We found a false positive. 
We've gone back into the definition using that same natural-language insert, and we've created an exclusion to ensure we're not detecting these empty questionnaires moving forward. So this is done to highlight how quick and easy it is to create a label. Take some content, throw it into an AI generator of your choice, get a three-to-six-sentence broad summary of what that content is, and implement that in the system here. It will create rule sets, and you can test it live on the fly to understand if your efforts are fruitful and how you want to go about detecting that information. One last piece here I want to quickly show: for any of these types of data, we also have the ability to capture OCR. So, as simple as just taking a screenshot, I wanted to clarify that we still have that capability, and we very much can capture that information as well. So I'll take that in, and I'll take it right back to our test field, and I'll drop it in and validate that we are still seeing partial matches against the content. So, a very powerful engine to create your own flavor of whatever identifier or label you are looking to construct, test live, iterate from that, throw a few curve balls, ensure you tighten up the detection, and from that point, move forward. And moving down the line, all you have to do is circle back and add an exclusion. But, again, we're not having to write code. We're not having to write regex. We're not having to write CEL. We're essentially using natural language and some input, which you can easily craft yourself as well, and feeding that intelligence into the application in a quick and seamless manner. Alright. There we go. Partial match. I had half of a page displayed in a screenshot of my screen, and we're still actively detecting that. Alright. So moving down the line: now we have set up our connectors. 
We've maybe turned the scanning on for some individual data sources with a minimal scope, and we've gone through the labels to understand what is out there and what is available. I've even created a few of our own proprietary labels that we find important to the organization. The next piece of the puzzle here is going to be creating a policy leveraging that information in DSPM. A policy in Cyberhaven DSPM answers the question: is this specific type of data in a location where it shouldn't be? Full stop. Each policy monitors for a condition and generates an issue when that condition is violated. Look at the policies here: PCI data outside of sanctioned locations. We'll go ahead and pop that open. So starting up at the top, we have a number of options as far as what's available to poke further here: an enable/disable button, a link to the data explorer, which we'll take a look at in a few moments, as well as a delete button. But at a high level, what you're seeing is the specific makeup of this policy, and all of this is available via tooltip if you go ahead and click on it. This policy is very simple. In particular, we're looking for the type of financial documents, with a pattern of credit card or bank, and the approved scope for PCI. So this is another piece of the puzzle. As you start to evolve, you'll start to carve out areas where this information, and specifically sensitive information, can exist and cannot exist. And that's what this policy is looking for. Anything that is found to meet this definition creates an issue in the system, and that is the actionable source for the team to begin to react to. In addition to setting up numerous definitions, you can also configure actions. And what this allows you to do is create different responses for different types of findings. 
A finding, or more precisely a detection, in S3 might very well be treated differently than a detection in SharePoint. What are the steps to resolving it: encrypting, removing, notifying compliance, and so on? For each of these policies, you can set up a procedure, so that as items are found, you essentially have a triage description of how to approach and resolve them. As for best practice here, it's counterintuitive, but start with fewer policies, not more. We don't want to overwhelm ourselves up front with too much information and too many policies to tune. The instinct is definitely to hit the go-forward button and scan for everything, but that can be overwhelming and keep you from getting immediate value and building from there. The problem is that overaggressive policies generate noise, create friction with legitimate business processes, and erode trust in the tool. It's just too much to consume at the forefront, and if something is misconfigured, you're lending yourself to even more of that pressure. So start simple. My recommendation is to start with roughly four to five policies that make sense for your organization. What are the crown jewels your organization is looking to secure, and where is that information? Tailor your policies to capture just that, start simple, and build from there. Now that we've created that label to find the security questionnaire, I want to quickly show you how to create a policy to implement it. First, I'm going to go back to Labels to do one thing specifically that will help in our policies: under Objects, the approved scope. This is essentially approving specific areas, the locations where this data can exist. Where can my HIPAA data live? What locations can this information exist in?
So the first thing we're going to do is create this target bucket, and we'll label it SecOps. Then I'm going to add a rule. Again, you can daisy-chain as many rules as you want; you have the UI, or if you're more comfortable with CEL, you can use that as well. We're going to choose "zone is any of internal" and "location is any of S3". What I'm telling the system, or rather preparing, is that my SecOps data should not be outside of S3. This is a secure repository that only our group has access to; this is where our security information should live. So I'm cultivating an approved zone, and I'll clarify it here, where this information can exist. I've now set up that bucket to be leveraged by policies. We'll filter back to Policies and create a quick policy to capture this information. First, give it a quick name. You also have the option to add a description and a framework; I definitely recommend you do that where possible. We'll add our NIST framework, since security questionnaires definitely fall in line with it. Then we'll create our first definition to capture what an unapproved location is, specifically. Moving down the line, instead of "any data" we're going to specify the particular data type we crafted: "type is any of", and we'll apply that. Obviously, there are a lot of out-of-the-box choices; take advantage of those, they're definitely helpful. But this is just to paint the picture of how you can quickly craft a policy using an approved scope and a label we've created, and start to identify anything that is outside our protected zone. Essentially, this definition is looking for our security questionnaire label, and it is looking for that information to be outside of our S3 environment.
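The daisy-chained scope rules above (AND semantics across "field is any of" conditions) can be sketched as follows. This is an illustration of the logic only, not actual CEL or Cyberhaven syntax:

```python
# Illustrative sketch of an approved scope: a chain of
# "field is any of [...]" rules combined with AND semantics.

def make_rule(field, allowed):
    """Rule: object[field] is any of `allowed`."""
    allowed = set(allowed)
    return lambda obj: obj.get(field) in allowed

def in_approved_scope(obj, rules):
    """All daisy-chained rules must match (AND semantics)."""
    return all(rule(obj) for rule in rules)

# SecOps data must be in the internal zone AND live in S3.
secops_scope = [
    make_rule("zone", ["internal"]),
    make_rule("location", ["s3"]),
]

obj_ok = {"zone": "internal", "location": "s3"}
obj_bad = {"zone": "internal", "location": "sharepoint"}
assert in_approved_scope(obj_ok, secops_scope)
assert not in_approved_scope(obj_bad, secops_scope)  # outside scope -> candidate issue
```

An object failing the scope check is exactly what the policy definition then flags: the security questionnaire label found anywhere outside S3.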
So it's a very simple, straightforward policy: one definition, and you can layer more on as you find use cases. You can obviously tune other identifiers in there in the same fashion, and if you need an OR statement between different conditions, you can create an additional definition, as well as provide a list of triage actions in response. Quickly moving on, that is essentially how to create a policy around our brand-new label, looking for that data anywhere outside the approved location. Moving over to Data Issues: once this policy is active, it will begin monitoring any real-time information that comes in, and if that condition is breached, an issue will be created here. This is your action list, the organizational response to the finding. Taking a look here, we've got a number of different data stores and IDs assigned to the events, plus the policy violated. Keep in mind, you have the ability to filter any of these items at the column level, and you'll see filters all over the product in many fashions to help you capture that as well. For each event, we have a severity level, the policy attached, and the first and last time the policy fired. That's a big piece for understanding: is this a recurring issue, is this something outstanding, and so on. We also see how many findings are associated with it and which technician we want assigned; I'll assign this to me, and it is indeed in progress. There are any notes I need to add, a high-level understanding of what that policy contains definition-wise, a historical record of the series of events after creation, the ability to share a link, and a direct link into the policy itself. In the next tab, you'll have the findings: the 47 individual files that have been identified as being in breach of this policy.
You have the ability to scroll across the screen for a quick touch base, but the reality is you're going to want to hit the link to open in Data and move over to that pane, where you get a more individualized view. You have a graph up top to give you different viewpoints into the data as needed, filters that can be crafted and joined with AND and OR logic, and the ability to save these views. So you come in, edit the filters, create some custom filters, and save; it saves to your profile so you have a hot sheet of views you can leverage to dig into the data. Let's take a look at one of the records to understand what's available for each record we see here. You have an automated AI summary: not only are we using that technology to apply labels from many different contextual levels, but also to summarize all of your standard metadata related to the file itself and any of the data labels actually detected against the item in question. Some of this is test data, so we're seeing a lot of data patterns here, but you have that sensitivity level and that zone (internal, external, public), as well as object risk, where we can adjust the risk association, and essentially where that information is located. Over to the right, we have data copies. This is a big piece in dealing with raw data: if we've got risky information exposed, there may also be other copies being detected out there in the ether. What this does is take the hash of the file, look across the environment, and show you any object IDs that contain that hash. So if there is one infraction, you'll understand where additional copies might lie, and whether another policy needs to be created in response.
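The data-copies lookup described above (hash the file, then surface every object sharing that hash) can be sketched in a few lines. The storage paths and helper names are hypothetical, used only to illustrate the grouping:

```python
# Illustrative sketch of "data copies": group objects by content hash so
# one infraction reveals every other location holding the same bytes.

import hashlib
from collections import defaultdict

def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def find_copies(objects):
    """Map content hash -> object IDs, keeping only hashes with 2+ copies."""
    by_hash = defaultdict(list)
    for object_id, data in objects.items():
        by_hash[content_hash(data)].append(object_id)
    return {h: ids for h, ids in by_hash.items() if len(ids) > 1}

objects = {
    "s3://secops/report.docx": b"quarterly security questionnaire",
    "sharepoint://shared/report-copy.docx": b"quarterly security questionnaire",
    "gdrive://misc/notes.txt": b"unrelated notes",
}
copies = find_copies(objects)  # the two identical files group together
```

Here the SharePoint copy surfaces alongside the S3 original, which is exactly the signal you'd use to decide whether a follow-up policy or cleanup effort is needed.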
Then we have the content report, which gives you the count of each of the labels identified in the document itself. The best practice for working through these issues is to recognize that we're at the intersection of two factors: data sensitivity and severity of exposure. A misconfigured location containing generic marketing collateral is very different from the same misconfiguration on a location containing Social Security numbers. Let sensitivity plus exposure severity be your triage compass: the combination of those two factors should point to where your efforts are focused, especially initially. As you craft policies and start to triage, you start to understand the environment. The reality with DSPM in general is that it provides a heat map of all the sensitive information you have out in your environment, and then gives you the ability to focus on specific areas and enact visibility and alerting, so you understand whether you have data moving across boundaries where it isn't permitted. One tieback I really want to circle back on and clarify: this heat map that DSPM brings not only helps you understand what's out there and gives you visibility and alerting on the movement and management of that data, it also empowers your DDR policies substantially. We're not reliant on the telemetry of moving data anymore; we now have an understanding of where it's going, and we have proof. There may be some cleanup efforts, some internal discussions, or some business processes that need to change to address information being left outside its approved location. But nonetheless, this gives you another perspective on the data that can help empower and optimize your existing DDR policies and controls today. The last piece I'll stop on here is our Explorer window.
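The triage compass described above, sensitivity combined with exposure severity, can be sketched as a simple scoring function. The numeric scales and category names here are our own assumptions for illustration, not Cyberhaven's scoring model:

```python
# Illustrative sketch: triage priority as sensitivity x exposure severity.
# The 1-4 scales and category names below are assumed for the example.

SENSITIVITY = {"marketing_collateral": 1, "internal_docs": 2,
               "financial_documents": 3, "ssn": 4}
EXPOSURE = {"internal": 1, "external_shared": 3, "public": 4}

def triage_priority(data_type: str, exposure: str) -> int:
    """Higher score = triage sooner."""
    return SENSITIVITY.get(data_type, 2) * EXPOSURE.get(exposure, 1)

issues = [
    ("marketing_collateral", "public"),
    ("ssn", "public"),
    ("financial_documents", "internal"),
]
ranked = sorted(issues, key=lambda i: triage_priority(*i), reverse=True)
# Social Security numbers in a public location rank first; marketing
# collateral in the same misconfigured spot ranks far lower.
```

This mirrors the guidance in the session: the same misconfiguration earns very different urgency depending on what data it exposes.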
It's very similar to the overview you know and likely love, used for threat hunting, but very dynamic in helping you start to understand where your risk lives. Immediately after you start to scan, you can come in here and understand, from a data type or data store perspective, what is really out there in your environment. All of this you can drill into to understand what's important, and each of these items lets you drill into the data view using the filters for whatever we're looking at at any point in time. It really allows you to get down to the nitty-gritty of what's important to your organization and go straight to the relevant information in response. Again, looking at all that information and pivoting over to the data view, you'll automatically be filtered, and you can start to create saved views: what's important to you, how you're going to start cultivating those policies, and essentially steps one, two, and three for setting yourselves up for success. The final piece is the dashboards, which will continue to be optimized, and we'll see additional resources following this direction as well. You have that visibility, that executive-level dashboard: how many data stores, how many critical objects, how much data has been identified, and issue tracking in relation to your policies. It answers the big questions of how much data do I have, where does it exist, and what sensitivity and risk are associated with that information, encapsulating all the efforts we're making with label creation and policies, filtering into the dashboard for a higher-level view and understanding from leadership. I know we're running out of time, so I'll pivot back and start working toward wrap-up. I'll go ahead and stop sharing, talk a little bit about what's coming down the pipe, and go from there.
Here we have our existing coverage. This is what the connectors look like, in addition to what we see in the immediate future: we've got Box, Dropbox, and Jira coming. The reality is we're going to continue to see five to six connectors coming out of our team roughly every month or quarter as we move forward, continuing to add to this catalog. If there are items of interest, please feed that back through your CSM; otherwise, we're open to adjusting those roadmaps and adjusting the specific timelines for those features. As far as feature parity with some of the other DSPM systems and where we're looking to go, we're currently working through the process of a customer-hosted deployment cloud. So if you've got regulatory requirements and need to house that data locally, we're building a process for that to be available. There's also sampling of files, which refers to almost a clustering behavior, so we're able to scan more and do so a bit more intelligently and efficiently, essentially resulting in greater visibility on a more frequent basis. And again, coverage: on-prem connectors, Box, email body support. We're looking to expand into these areas immediately, and currently are. Next on the line, we have remediation and identity. That will be the next big piece of the puzzle, where our labels will be leveraged even further, pairing identity with the additional contextual labels we have today. Then Google label support, MIP, and eventually the ability to apply these labels via findings and policies within DSPM. With that, I'll hand it over to Mister Boyd. Alright. A lot of you have probably already heard about the DSPM jump start package, but in case you haven't: to show appreciation to our customers, and to any new customers who purchase Cyberhaven's DLP solution before May 2026, we're offering a historic DSPM scan of either Google Drive or OneDrive for free.
It's designed to empower your teams to understand your data with our AI-powered, easy-to-use, modern DSPM solution. It gives you a ninety-day look back at any changes to your Google Drive or OneDrive data, just to give you a hands-on look at DSPM, and we can always expand that program into a full POV if desired. If you're interested in participating in the jump start program, please just reach out to your customer success manager or your account executive. And finally, we'd like to invite you to join us for our upcoming spring product launch on May 5; we'll drop the link here in the chat. We're going to show you how our new agentic AI security solution gives you and your team the visibility, observability, and controls to govern agents across the enterprise, and how Cyberhaven MCP puts our intelligence directly into the AI tools your team already uses. We'll also discuss expanded DLP coverage for devices that have historically been out of reach, as well as a significant upgrade to how security teams gather evidence for insider risk management. I've dropped that registration link into the chat for you. It looks like we are at time, and I didn't see anything come up in the Q&A, but if anything was unclear about today's presentation, please reach out to your customer success manager or account executive. And again, if you're interested in the DSPM jump start program, please reach out as well. That should wrap things up for us. We really appreciate your presence today. Thank you so much for your time.