Video: Product Launch: Uniting DSPM and DLP to Secure Data in the AI Era | Duration: 1:01:09 | Summary: Product Launch: Uniting DSPM and DLP to Secure Data in the AI Era | Chapters: Welcome to Cyberhaven (0:09), AI Democratizing Intelligence (4:14), Data Security Fundamentals (6:27), Customer Success Services (9:46), DSPM and DLP Integration (12:31), Comprehensive Data Discovery (23:36), AI Classification Context (27:23), Streamlined Data Security (30:11), Unified Data Security Platform (34:22), Data Exposure Analysis (37:36), AI Security Insights (39:22), Internal Data Risks (40:54), Data Loss Prevention (42:45), New Data Security Standard (44:24), Unified Platform Approach (45:04), Addressing Fragmented Data (47:14), Data Classification Accuracy (49:56), Addressing Alert Fatigue (52:19), Custom Classifier Capabilities (55:48), Rapid Setup Process (58:07), Conclusion and Followup (1:00:14)
Transcript for "Product Launch: Uniting DSPM and DLP to Secure Data in the AI Era": Welcome to the future of data security. We stand at the beginning of the intelligence age, a revolutionary era driven by AI, which has democratized knowledge but magnified the risk to your most valuable asset: your proprietary data. Your data has truly broken free, creating fragments that traditional security tools can't track or protect. Today, we'll show you how Cyberhaven is fundamentally changing that. We'll start with the realities of this new era, detailing why securing proprietary knowledge is now the ultimate differentiator. Next, we'll share some exciting news about our Data Security Posture Management solution and how it goes beyond the standalone DSPMs that came before. Then you'll hear about our new customer experience offerings and how they partner with you to give you a strategic advantage. Finally, we'll take you through a comprehensive demo of our unified AI and data security platform, demonstrating how you can practically discover and assess risk, minimize exposure, enable AI adoption securely, reduce insider risk, and stop data loss. So get ready to see how you can move from trying to secure data to knowing it is protected. At Cyberhaven, we believe in the power of context. Before we discuss our new products and services, let's frame this moment. Everyone is talking about AI for productivity, but it's so much more than that. We are witnessing a once-in-a-lifetime change in how we interact with the world, and we're just at the beginning. Think about it this way. The industrial age democratized industry with energy and engines. The information age democratized information with data and computing power. Now the new intelligence age democratizes intelligence, enabling everyone to apply the knowledge of infinite minds. This is the AI era, and change is coming from all directions, from the top down and from the bottom up. Executives are being pressured to adopt AI. Employees are already using new AI tools every single day. With so much changing so fast, how do you gain a competitive edge when everybody has access to the same foundational intelligence? The differentiator isn't the model. It's the data. Success is now defined by the proprietary knowledge that lives inside your company. That means data has never been more valuable or more important to protect. However, if you think about all the places your data lives and goes, and all the people who access your data, it's gotten a lot more complicated over time. Protecting data used to be much easier. We had a defined perimeter around employees and files. The approach was simple: classify, scan, label, and enforce. This was the old world of pure-play tools, from first-generation DLP to DSPM solutions. Then, ten to fifteen years ago, everything got more complicated. Cloud, SaaS, collaboration tools: they democratized data. People know this broke the network perimeter, but it also broke the file perimeter. Users don't share files. They share fragments. And you cannot easily label those fragments. Think about the last time you made a presentation. You copied a snippet from one system into a document, someone else copied that document and edited it, and fragments of sensitive data multiplied, flying under the radar because they have no metadata, no labels. In fact, our research at Cyberhaven Labs found that over 80% of exfiltrated data is fragmented: strategic plans, acquisition names, bits of customer information, not complete files.
Today, AI democratizes intelligence, but it dials up the risk by 1000x. AI creates exponentially more fragments and derivatives of data, with new vectors for data movement. This is exciting because it removes constraints. However, it also magnifies the human risk. One person can now act like an entire department. Plus, AI agents introduce new human-like risks. Just as cloud computing democratized access to resources, AI has democratized access to knowledge. The result of all of this is that data has truly broken free. How has the industry tried to secure data before? Think about this. Data access relies on systems and people. With systems, the units are data and resources like the cloud, the network, and devices. Most of the investment has gone into protecting those systems: protect the cloud, protect the network, protect the device. On the people side, the unit is knowledge. To truly secure knowledge, however, you need context. AI agents are the newest variable. They affect both systems and people. They move data at machine speed, making the security loop from policy to action to feedback tighter and faster than ever before. As you can see, your data is moving through a lot of different places, and there are a lot of tools trying to secure it. One should ask, how well is that working? We can all see the issue. It's been a very siloed approach. Data slips through the cracks and out the door all the time. Against this backdrop, we chose a fundamentally different approach. Our goal is to enable you to protect your data wherever it lives and wherever it goes. There are three keys to protecting data. First, you need holistic visibility and control. You need security that spans the entire life cycle: where data lives, where it goes. Our customers can trace the journey as data moves between cloud applications, endpoints, and on-prem data repositories. We provide end-to-end visibility of data in motion and a comprehensive view of data at rest in a single pane. Second, it is important to have a deep understanding of the data itself. On top of that lineage, it is important to use the best of AI classification to understand what data means in context, who interacted with it, who has access to it, and who owns it. Finally, security must be easy to operate. It has to be quick to deploy, simple to tune, automated, and should not disrupt users. Otherwise, the solution becomes fancy shelfware. And this is where a lot of traditional DLP and DSPM solutions have struggled: you are context switching, or you have disruptive endpoint agents, or you have to update the same policy in multiple places, not to mention the frustration from false positives and false negatives. Holistic visibility with a deep understanding of data makes security solutions easy to operate. It is with those tenets in mind that we set out to reimagine data security for the AI era. So far, we have helped many of the world's largest and most innovative companies stop data loss, reduce insider risk, and secure AI adoption, and now we are helping them discover their sensitive data and reduce its exposure end to end. Today, I am excited to announce Cyberhaven's latest advancement: the general availability of our data security posture management solution. As you will hear shortly, Cyberhaven DSPM provides holistic coverage, an AI-driven understanding of data, and an effortless experience. But that's just the start.
Unlike other DSPM products that are standalone tools or unified in name only, we built our DSPM as a native, truly integrated part of our AI and data security platform, unifying DSPM with our market-leading AI security, insider risk management, and data loss prevention solutions. No longer do security teams have to switch between separate tools, nor do they have to settle for seeing only part of the picture. Data easily moves from storage to applications to endpoints, and now, with Cyberhaven, we provide end-to-end security for your data. We are very proud to bring that market-leading solution to our customers. As the vice president of customer experience, I lead a team of customer success, onboarding, and professional services experts who are relentlessly focused on your success. We ensure that you're maximizing the value of Cyberhaven across a number of use cases, whether that's insider risk, data loss prevention, AI security, or data security posture management. Our professional services offerings help you build a mature, effective data security program. Customers start with deployment and onboarding, where we take them through setting up their initial data security use cases. This includes working sessions and training to ensure that you're production ready and familiar with all the features of the Cyberhaven platform. Today, I'm excited to announce new turnkey services that handle that initial setup for you. Whether you're just stepping into data security or you need to lift and shift from a legacy data security tool, our team of experts has you covered. Our turnkey services ensure you're migrated and running in record time so you can focus on running an increasingly effective program. But we don't stop there. For complex environments, our technical account managers become your trusted advisors, bringing deep expertise to ensure smooth rollouts of policies, features, and upgrades. Our analyst services provide you with regular threat hunting and business reports for your environments, complete with recommendations for existing and new policies. As insider threats become more and more advanced and exfiltration vectors grow exponentially with AI adoption, our customers wanted to be able to stay ahead of the trends and ensure their data security program is covering all these emerging vectors. We listened. And today, we are introducing the Insider Risk Intelligence Service. Backed by our Cyberhaven Labs research team, we're continuously hunting for new insider threats, both from public sources and from the trends in our own customer base. Armed with this data and research, we will help our customers establish an insider risk framework within their organizations, providing ongoing updates on these emerging threats and trends and ensuring their environments are protected against them. With Cyberhaven, stop guessing if you're covered. Know that you are. Hi, I'm Aman Sarohi, the security officer at Cyberhaven. Security practitioners and CISOs alike have one common challenge: how do we assess our organization's risk? This is important to us because we live in a world of unstructured data, and this data is our responsibility to protect. We invited security practitioners to get hands-on experience with the unified data security platform that we've built. Let's see what they have to say. Data security is integral to basically all company security, in that you're protecting IP, you're protecting the company's value.
And if anyone can take it, then your data doesn't have any value, because all your work belongs to somebody else at that point. DSPM fits our situation in that one of the most important things it does is allow you to have a map of where data is and who has access to it. If you can understand that, then you have a much easier time creating those walls and segmentations, as well as just understanding your environment. Because there's always a new SaaS app, there's always a new service, and people want to put data into it. And if you don't have a good understanding of where your data is going, then you can't protect it properly. A lot of existing DSPM tools are very conventional, previous generation. Well, now it's about understanding the actions and not necessarily just the data itself. With Cyberhaven, I can integrate it on the back end with things like Exchange, Slack, SharePoint, and so on, and then it can go out and give me actionable information. It's like: your data is here, this is the type of data you have, and these are the people that can access it. And then, instead of spending time configuring things, I can say, okay, I know what the most important things are and where to take action on them. You need context to make a decision. If you don't know why a particular action happened, you can't make a quick decision. When you have data lineage, you can see the file was downloaded from here, went through all these modifications, and then got exfiltrated here. Now you have the full chain of actions, so you can very quickly come to a decision on what actions to take. Is this malicious or is this benign? That data lineage is key to that. Understanding the origin of your data is critical to understanding where it's going. And I say that in the sense that you'd want to know that internal data is going out, or maybe somebody is pulling public data in that they shouldn't be. So having both origin and destination is critical to the picture of DLP. With DSPM, the frequency of the scan is, at least to me, critical. People are always inside the files, and a person can change permissions on a folder. If that's exposed for twenty-four hours, a huge amount of damage can happen. So being able to see that with a frequency of every 30 minutes or every 15 minutes is critical to me. Cyberhaven's approach of combining DSPM, insider security, and DLP is kind of natural in that all those things are essentially related. You need to know what's happening in the cloud, and at the same time, you need to know what's happening on the endpoint. If you only know what's happening on the endpoint, then you have no idea how your data is being exfiltrated in the cloud, because you're just watching the endpoint. So marrying those two sources of information is critical to having correct DLP. I don't necessarily want to say DSPM is its own thing. At least for me, DSPM really sits under the whole idea of DLP. You need to understand where your data is, who has access to it, and where it can go, on the endpoint and in the cloud.
For DLP and DSPM to be effective, they have to work together, because they're looking at two different domains of storage, if you will. And if they don't talk to each other, or there isn't a platform that combines both those sources of information, then you can always have a gap. The first aha moment for me with DSPM was seeing the network map, where it showed you the data type and where it was stored, and then being able to match that to a file on the endpoint. I was like, wow, that's actually pretty impressive: the file hash here is exactly the same as the file hash there, and I can trace where it started on the endpoint and where it ended up in the cloud. And I realized at that point, this is a pretty powerful tool, and it's going to become critical in being able to manage and protect our data. DSPM will be able to help us understand where exactly our data is, where exactly our sensitive data is, and whether we are able to rightsize our controls to make sure the controls are being applied. If we think about the industry, there are a lot of players that are only doing DLP. There are players that are only doing DSPM. And then almost everybody claims they have some kind of AI today. I'm confident in saying that Cyberhaven is probably the only one that really brings these two core product features together to make the whole solution very compelling and very, very powerful. And as a DLP customer, I've always relied on Cyberhaven's different approach to how it solves DLP. I've benefited incredibly from how you've been able to leverage AI to make DLP detections better throughout the years. And AI is a game changer for DSPM. We're going from 10x to 100x using AI in all of these areas, not just the technology space, but also the business units, whether it's legal, marketing, sales. Everybody's using AI. And guess what? AI needs data. Provenance was somewhat of a new concept in the DLP world, and when you brought it to the market, that was a differentiator. Provenance was something that we could use in DLP that no other vendor provided. And when we feed this context into tools like DLP and DSPM, it is going to be a game changer in the sense of answering all of these questions that truly matter to the business, so that we can take actions to mitigate the true risk and actually increase the potential of these DLP and DSPM tools in the real world. If a DSPM solution only looks at this data once a quarter, once a week, once a month, it is going to provide stale answers that we can't really use in real time. Especially in this day and age of AI, old data is almost as good as having no data. So currency is very, very important. As soon as I walked into Cyberhaven, the first thing I saw was how it visualizes all of the data it has collected, using a bubble chart representation. And that struck me immediately, because that's how I think through things in my head. That was clearly an aha moment for me, because that is a very complex picture to portray, and Cyberhaven did it intuitively. It was incredibly powerful in the sense that I could start applying or using the tool within the first few minutes. I would use two words for that: white glove service. Cyberhaven is always eager to help customers like us.
Cyberhaven has been critical in our ability to build a mature cybersecurity program and to bring the visibility to the other team members that is needed. The other portion of this is my opportunity to help guide and train as we are building posture as a whole for the company. And culture and posture don't just happen overnight. It's repetition, and it's the ability to continually work with those users along the way. As I use the solution to help guide and train them along the way, that's how we're going to build that posture. I think it's going to be a magnificent opportunity to use these other signals and these other elements to help drive that as a whole. It really gives me comfort when I'm relaying to the executives and to the other business units where our data really is and where our risks really are. One of the many things I love about Cyberhaven as a whole is the ability to track lineage and bring that whole history. It's been lifesaving on many occasions. The provenance that Cyberhaven provides is critical to building your data program, for AI or for any cybersecurity measure. If I'm waiting for a quarterly scan to find out that everything got leaked, well, that's great, but now it's way past my opportunity, or even my commitment from a contractual or regulatory perspective, to reply, make a claim, or notify the appropriate people. As we consider bringing in AI, AI is definitely real time. It's really life changing from a security perspective. Try it out. You won't be disappointed. So, Cassie, if you talk to security teams today, one of the most common frustrations you hear is that discovery sounds simple in theory but breaks down in practice. A big reason for that seems to be coverage. So where do most DSPM tools fall short? The biggest issue is that most tools simply don't look broadly enough. Our coverage, however, extends across endpoints, cloud, and on-prem environments. Traditional DSPM tools focus heavily on, and center around, public cloud platforms. They will scan AWS S3 buckets, Azure Blob Storage, and even Google Cloud Platform, and they will call that comprehensive. But most sensitive data doesn't start in the cloud. It starts on employee devices. Documents are created on laptops. Spreadsheets are created on desktops. Source code is written in IDEs on developer workstations. That's where the data originates. If you're only scanning cloud repositories, you're blind to the source of much of your data. We provide visibility across cloud data stores, on-prem databases and file shares, and endpoints, all in one platform. So that gap becomes more obvious once data starts moving, right? Exactly. A sensitive document might be created on a laptop, uploaded to OneDrive, exported to S3, and shared through Slack. If you only scan part of that journey, you don't really understand where your data risk points are. Our coverage also includes SaaS tools many DSPMs ignore, such as Outlook, GitHub, Gmail, Slack, Teams, and many more: not just file storage, but the messages and conversations where data actually moves between people. And it seems like another place tools struggle is how and when to scan. Right. We give customers flexible scanning options that adapt to how their organization actually works. Traditional DSPMs tend to run on rigid schedules.
They might scan annually or quarterly, which means they are always looking backward, but risks are created every day. Our scanning architecture supports both forward scanning and historical scanning. Forward scanning lets us identify new or modified data immediately, as it happens. Historical scanning lets teams go back and audit existing content based on evolving risk and compliance needs. Configuration is contextual to each app: OneDrive is configured by users, Slack by channels, SharePoint by sites. That prevents blind spots caused by structural mismatches. You can also apply granular controls like file size limits, file types, and path-level inclusions and exclusions. The goal is not to scan everything blindly, but to scan what matters most to the customer while reducing noise. And that leads to another blind spot, which is redundant data. ROT data, that is, redundant, obsolete, and trivial data, is something most DSPMs miss entirely. They may scan multiple cloud repositories but don't realize they are looking at the same file again and again. You think you are seeing 500 sensitive files, when it's really 50 files copied 10 times. We identify exact copies across environments. Instead of fragmented snapshots, customers get a true inventory of unique data and where duplicates exist. Traditional tools say you have X sensitive files. We say you have X unique files with Y redundant copies across Z locations.
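To make the duplicate-inventory idea concrete, here is a minimal sketch of exact-copy detection via content hashing. The structures and names are illustrative assumptions, not Cyberhaven's actual implementation.

```python
# Minimal sketch: exact-duplicate inventory via content hashing.
# Hypothetical structures -- not Cyberhaven's actual implementation.
from collections import defaultdict
from hashlib import sha256

# (location, file content) pairs as a scanner might surface them
scanned_files = [
    ("endpoint:alice-laptop/Q3-forecast.xlsx", b"rev,cost,margin"),
    ("onedrive:/finance/Q3-forecast.xlsx",     b"rev,cost,margin"),
    ("s3://exports/q3.xlsx",                   b"rev,cost,margin"),
    ("sharepoint:/hr/handbook.pdf",            b"employee handbook"),
]

def build_inventory(files):
    """Group files by content hash: one entry per unique file."""
    by_hash = defaultdict(list)
    for location, content in files:
        by_hash[sha256(content).hexdigest()].append(location)
    return by_hash

inventory = build_inventory(scanned_files)
unique = len(inventory)
redundant = sum(len(locs) - 1 for locs in inventory.values())
print(f"{unique} unique files, {redundant} redundant copies")
for digest, locs in inventory.items():
    if len(locs) > 1:
        print(f"  duplicate set {digest[:8]}: {locs}")
```

The same grouping produces the "X unique files with Y redundant copies across Z locations" framing described above.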
So, Chris, AI classification is everywhere in data security right now. Every vendor claims they have something special, but in practice the results vary a lot. What do you think most tools get wrong about AI? Well, not all AI classification is created equal. The difference comes down to the context and understanding the AI has as it makes its decision. Most tools work from snapshots. That is, they look at data at a single point in time, applying pattern matching and AI-based evaluation to assign a label. But they only evaluate data in isolation. We operate fundamentally differently. Our data lineage captures the rich and complex history of how data has organically moved and evolved throughout an organization, and we use that lineage to build a complete history of every piece of data. Every time something changes, whether a file is modified, moved, copied, or shared, we have that history to give the AI the full context it needs to best understand and classify the data. So it sounds like classification improves as the system learns more about the data. That's right. When we classify data today, we're not just looking at the current content in front of us. We know where it came from, how it evolved, and who interacted with it. Take, for example, a spreadsheet called Untitled One that contains dates and financial numbers. In isolation, this will look like unknown financial data. But our unique insights captured in lineage help us better classify the data. This data could have been copied and pasted from a CSV file that was previously exported out of a corporate Salesforce instance. Or it could have been copied and pasted from a PDF that was downloaded from an employee's Gusto account. This full story of the data is crucial to saying whether this is sales projection data or an employee's payroll data. And so that context also helps separate the real risk from the noise, right? Absolutely. One of the core classification signals we look for is provenance, which is the ownership context of the data. In that previous example, knowing whether the financial data and dates in that spreadsheet were corporate sales information or an employee's personal pay stub information is critical to separating real risk from noise and reducing false positives. The same goes for source code in an S3 bucket. It may look risky, but if it originated from a public GitHub repo, maybe it's not a concern. Whereas if it's corporate source code from an internal repository, it could be a real risk. And so teams don't need deep technical expertise to use this capability? No. Traditionally, custom classification required complex rules and constant tuning that took manpower, skill, and time that organizations don't have. But we've removed that barrier. With our AI classification, teams can simply define what they care about in natural language. Customers describe what they want to discover and protect, and our AI does the rest.
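As a rough illustration of the natural-language classifier idea, here is a minimal sketch with the LLM call stubbed out. The names and prompt shape are assumptions, not Cyberhaven's API.

```python
# Minimal sketch: defining a custom classifier as a natural-language prompt.
# The LLM call is stubbed; names are illustrative, not Cyberhaven's API.
from dataclasses import dataclass

@dataclass
class Classifier:
    name: str
    prompt: str  # what the team cares about, in plain language

drug_trial = Classifier(
    name="clinical_trial_data",
    prompt=("Flag content describing our unreleased drug trials: "
            "protocol IDs, patient cohorts, efficacy or adverse-event data."),
)

def classify(classifier: Classifier, text: str) -> bool:
    """Ask an LLM whether `text` matches the classifier's description.
    Stubbed here with a keyword check standing in for the model call."""
    # In a real system this would be an LLM request along the lines of:
    #   "Given this definition: <classifier.prompt>
    #    Does the following content match? Answer yes or no. <text>"
    return any(k in text.lower() for k in ("protocol", "cohort", "efficacy"))

print(classify(drug_trial, "Cohort B efficacy results for protocol TX-114"))  # True
```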
So, Harry, even when teams have coverage and intelligence, it seems like many still struggle day to day, and a big reason for that is the user experience. Tell me, why does this matter so much? Well, data security teams are responsible for protecting all their data through its entire life cycle. Think about it: they have to manage security posture, support compliance, prevent data loss, and mitigate insider risk, among many other tasks, so they are extremely busy. The problem is that traditional data security vendors force these teams to jump between different tools just to understand their data landscape and get actionable insights, that is, if they have the visibility in the first place. In other words, these tools add another layer of complexity to the security team's life and become a constant hassle. And more tools doesn't necessarily mean better understanding, right? Exactly. In the Cyberhaven platform, we bring everything into a single pane of glass, not a basket of multiple windows. You don't need to jump between tools just to understand where data lives, and more importantly, you don't have to manually stitch together information from different sources. Our goal is very simple: reduce the time a security team spends on tools and reduce the number of tools they have to learn. Think of it this way: less time jumping between tools means more time protecting your data. But seeing everything in one place is still not enough if teams don't understand what they're looking at, right? Right. The challenge isn't collecting and showing data points. Anybody can do that. What matters is connecting the details, giving them context, and extracting insights that a security team can actually act on. For example, with the Cyberhaven platform, a security team can see what the data is, why its security matters, where it resides and comes from, and who is accessing it. So instead of just saying, hey, here's a file containing financial data, our system will show you that this is a corporate-owned finance slide deck, stored in a personal location, that can be accessed by an untrusted party. Every detail by itself is just information, but connected together, it becomes a true insight. And another thing is investigation time. That's another area teams often complain about, right? Right. Without data lineage, teams have to manually reconstruct how data moved. This can take days, sometimes even longer, and is often prone to mistakes. With data lineage, the system connects the dots automatically. So let's say a file appears on an endpoint, gets uploaded to Google Drive, exported to a cloud bucket, then downloaded and copied into a spreadsheet. We track and show the entire journey. Now data security teams don't need to play detective all the time. They can see the true full story instantly. And that's really the outcome that teams want, right? You're right. Cyberhaven DSPM moves security teams beyond just assessing a sea of data points. Now they can instantly understand what's happening and actually take action, with less hassle and more certainty. To be honest, that's exactly what every security team wants: an effortless experience. Most security programs were built for a world where data lived in a few well-defined places: file servers, a handful of databases, and maybe a couple of cloud repositories. But today, data is everywhere, and it's constantly moving. It gets copied, pasted, and transformed. It flows across SaaS apps, collaboration tools, IaaS and PaaS environments, on-prem systems, and now into and out of generative AI, usually with very little visibility. And that makes some very basic questions surprisingly hard to answer. Where is our sensitive data? How did it get there? Who's actually using it? And where is it going? What I'm going to show you is how Cyberhaven addresses these challenges with a unified data security platform built entirely in house, designed around a concept that we've used for years: Cyberhaven acts as a flight recorder for your data, continuously capturing activity and stitching together lineage across files and fragments of data. Everything starts with discovery, but discovery has to mean more than scanning snapshots of your data at rest a few times a year. With Cyberhaven, we continuously discover and classify your data at rest, in motion, and in use, in near real time, across your endpoints, cloud and SaaS applications, IaaS and PaaS environments, and the on-prem systems where data still lives for many enterprises. We of course support traditional content inspection techniques, things like regular expression patterns, dictionaries, and exact data matching, but we take classification a few steps further. Cyberhaven uses AI-based content understanding to extract the actual meaning of your data. Instead of asking, did this match a pattern, we ask, what is this data? And while Cyberhaven offers a robust and growing library of predefined classifiers, we know data security is not one size fits all. InfoSec teams can't rely on black-box classification models. They require the flexibility to tune these classifiers, as well as to define their own unique classifications tailored to their specific business and data security challenges. We expose our AI classification logic to our customers, and we allow them to define new classifiers using natural language prompts, which can be deployed within minutes, not messy regex or complex rule sets. And while AI classification is a recent game-changing development, leveraged by data security vendors since the rise of LLMs, our true differentiator comes from lineage, enabled by metadata that's unique to Cyberhaven. Because we track where data originates and every subsequent movement and transformation, down to fragments of data, we're uniquely positioned to understand provenance: who owns the data, where it came from, and why it exists. That allows us to differentiate between a financial document that originated in your data warehouse or BI platform, was edited by the CFO, and contains internal intellectual property meant only for a select audience, versus an employee simply downloading their W-2 from TurboTax. That level of context becomes the foundation for everything that follows.
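Here is a minimal sketch of how provenance drawn from lineage might flip the label on otherwise identical content. The event shapes and rules are hypothetical assumptions, not Cyberhaven's model.

```python
# Minimal sketch: letting provenance from lineage drive the sensitivity label.
# Event names and rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LineageEvent:
    action: str   # e.g. "export", "download", "copy_paste"
    source: str   # where the data came from

def sensitivity(content_type: str, lineage: list[LineageEvent]) -> str:
    """Same content, different label, depending on where it originated."""
    origin = lineage[0].source if lineage else "unknown"
    corporate = origin.startswith(("warehouse:", "salesforce:", "git:internal"))
    if content_type == "financial" and corporate:
        return "confidential-corporate"
    if content_type == "financial":
        return "personal-low-risk"   # e.g. an employee's own W-2
    return "unclassified"

cfo_doc = [LineageEvent("export", "warehouse:finance-bi")]
w2_doc  = [LineageEvent("download", "web:turbotax.com")]
print(sensitivity("financial", cfo_doc))  # confidential-corporate
print(sensitivity("financial", w2_doc))   # personal-low-risk
```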
Once you understand what data you have, the next question is, how exposed is it? In the real world, most data loss doesn't involve someone stealing an entire file. It happens when small pieces of sensitive data leak into places they don't belong: copied into chat tools, pasted into tickets, uploaded to shared drives, or duplicated across systems. In Cyberhaven Explorer, you can see all of your sensitive data in one place and pivot across it using multiple dimensions. First, data type, driven by AI-based content understanding: feeding the content to an LLM to understand what it actually represents. Second is provenance, which is also AI driven: our understanding of ownership and origin based on the full set of contextual metadata. Who created the data, which systems it came from, how it was transformed, and how it moved. Alongside that, we still apply traditional techniques, patterns, exact matches, proximity, where they make sense. All of these signals roll up into a higher-level sensitivity label, which reflects both content and ownership. From Explorer, you can drill into the data catalog, where filters carry over. For each item, you see the applied labels, key metadata like creator and location, and concise AI-generated insights explaining what the data contains and why it matters, so analysts don't need to open the file to assess risk. We also surface data duplication, showing where the same content exists across endpoints, cloud, and collaboration tools. That makes it much easier to identify unnecessary exposure and clean it up. Now let's talk about AI, because this is where many security teams are under pressure. Employees are using generative AI tools every day, often with corporate data, whether there's an official policy or not. Visibility is usually the missing piece. Cyberhaven provides that visibility in two complementary ways. First, we show you which generative AI services are being used, by whom, and what data is flowing to and from them. Second, we layer in context using Cyberhaven's proprietary RiskIQ register. RiskIQ is our continuously maintained profile of known generative AI services. We assess factors like authentication models, data handling and retention practices, regulatory posture, and overall security risk. So when you look at AI usage inside your organization, you're not just seeing activity. You're seeing risk-aware insight into which AI services are in use and how risky they are in the context of your data. When you drill into a specific event, you see the full lineage: where the data originated, whether it was internal or personal, how it was transformed, and whether it went to a sanctioned or an unsanctioned service. We also use AI to distill insights into something immediately actionable: what the user did, what the data contained, and why it matters. The goal isn't to shut AI down, it's to let teams use it productively while keeping sensitive data out of high-risk tools and accounts.
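As a rough sketch of the idea, here is how a maintained service-risk profile could combine with data provenance to produce a risk-aware finding. The fields are assumptions, not the actual RiskIQ schema.

```python
# Minimal sketch: scoring a gen-AI upload by combining data provenance
# with a service-risk profile. Field names are assumptions, not the
# actual RiskIQ schema.
service_risk = {  # tiny stand-in for a maintained register of AI services
    "chatgpt-enterprise": {"retains_data": False, "sanctioned": True},
    "random-ai-tool.io":  {"retains_data": True,  "sanctioned": False},
}

def assess(event: dict) -> str:
    """Turn raw activity into a risk-aware insight."""
    profile = service_risk.get(
        event["service"], {"retains_data": True, "sanctioned": False}
    )
    corporate = event["provenance"] == "corporate"
    if corporate and not profile["sanctioned"]:
        return "high: corporate data to unsanctioned AI service"
    if corporate and profile["retains_data"]:
        return "medium: sanctioned service, but it retains data"
    return "low"

print(assess({"service": "random-ai-tool.io", "provenance": "corporate"}))
print(assess({"service": "chatgpt-enterprise", "provenance": "personal"}))
```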
Not all risk comes from outside attackers. A significant portion of data loss happens internally, sometimes accidental, sometimes not. Common scenarios include an employee preparing to leave who starts downloading sensitive files, data renamed or obfuscated to bypass controls, or information quietly moving into unsanctioned storage, applications, or workflows. This is where Cyberhaven endpoint visibility becomes critical. On endpoints, Cyberhaven continuously observes how data is accessed, modified, and moved, not just on disk, but as it flows through the user's real workflows. That includes file operations like uploads, downloads, copies, renames, compression, extraction, and transformations across local disk, attached network storage, and physical media like USB and printers, as well as movement into web browsers, messaging and collaboration tools, over AirDrop and Bluetooth to external devices, through the command line and developer applications like Git and VS Code, and through emerging channels like MCP and agent-to-agent workflows. Because Cyberhaven acts like a flight recorder for your data, all of this activity is captured with full context. When something happens, you don't just get an alert. You see who did what, what data was involved, where it came from, how it was modified, and where it went. And when deeper investigation is required, Cyberhaven provides optional forensic evidence capture, including copies of files, clipboard content, and even screen recordings around the moment the risky activity occurred. That leads to faster triage and clearer decisions. Is this normal business activity, or is this something we need to act on? Once you're ready to take action, Cyberhaven helps you prevent data loss without guesswork. Before enabling a policy, you can test it against historical data and events and ask: if this policy had been active over the last thirty days, ninety days, or more, what would it have impacted? You can review the exact events it would have triggered, tune it, and validate efficacy before enforcing anything in production.
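A minimal sketch of that backtesting idea: dry-running a draft policy against recorded events before enforcement. The structures are illustrative assumptions, not the product's actual policy engine.

```python
# Minimal sketch: backtesting a draft policy against historical events
# before enforcing it. Structures are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    data_label: str
    destination: str

history = [  # e.g. the last 30-90 days of recorded activity
    Event("alice", "confidential-corporate", "personal-gmail"),
    Event("bob",   "public",                 "personal-gmail"),
    Event("carol", "confidential-corporate", "corporate-sharepoint"),
]

def would_trigger(e: Event) -> bool:
    """Draft policy: confidential data must not leave corporate destinations."""
    return (e.data_label == "confidential-corporate"
            and not e.destination.startswith("corporate-"))

hits = [e for e in history if would_trigger(e)]
print(f"Policy would have fired {len(hits)} time(s):")
for e in hits:
    print(f"  {e.user} -> {e.destination}")  # review, tune, then enforce
```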
When a policy is active, you can choose to monitor, educate, warn, block with override, or block. If something is blocked, the user sees a clear, configurable message, not a confusing error, and they can provide feedback. On the back end, you get full forensic detail. And importantly, we use AI to reduce noise. For example, a policy might flag source code going into an AI tool as severe, but our AI can recognize that it's a trivial snippet, not proprietary intellectual property, and deprioritize it. That's how teams dramatically reduce false positives and focus on what actually matters. Cyberhaven also integrates with your existing workflows via APIs, SIEM, orchestration tools, ticketing systems, and more, so prevention naturally fits into how teams already operate. And that's Cyberhaven: modern data security built for how data actually moves today, with visibility across your endpoints, cloud, and on-prem; AI-driven content understanding and ownership through lineage; risk-aware insights into generative AI usage; and a direct path from discovery to prevention. The intelligence age is here, demanding a complete shift in how you secure your proprietary data. With the general availability of our unified platform and its new DSPM solution, Cyberhaven fundamentally solves the fragmentation problem, delivering the holistic coverage and AI-driven context you need. Stop stitching together old tools. Stop guessing if you're covered. Start knowing you are protected. This is the new data security standard, and it's available now. We're getting a lot of questions. Thank you, Chris, but we won't be able to get to all of them. If we are unable to cover some of the questions you asked, do reach out to your AEs and CX teams, but we'll cover as many as we can in the time we have together. Thanks for taking the time to attend the webinar. One of the questions that has come up a couple of times in the Q&A is that a lot of vendors claim a unified platform. So what makes Cyberhaven's approach unique or different compared to the competition? That's a very valid and important question. A single pane of glass: if I had a penny for every time I heard that in my last twenty years. But I want to emphasize, we went back to basics to build the platform from the ground up. Internally, we call it our next-generation platform, but what it means is a unified engine, whether it is labeling, whether it is policy. We went back to basics as simple as creating the data model: understanding the characteristics of data at rest and the characteristics of data in motion. If you have a unified architecture, you can address all those use cases, whether the data is on the endpoint, in on-prem data repositories, in the cloud, or moving from endpoint to cloud or cloud to endpoint, so that customers have a consistent, intuitive experience independent of whether they are looking at DSPM and DLP, insider risk management, or AI security, or multiple facets of these together. So, one place to see all your data together, and the ability to slice and dice the critical components or data points that are interesting to you. That is what allows us to really differentiate ourselves from the competition. It's not just a unified single pane of glass. It is a unified data model. It is a unified labeling system. It is a unified policy that works consistently across all these use cases. Chris, you want to take the next question? Yeah. Next question: someone mentions that 80% of exfiltrated data is fragmented rather than complete files. How can we use Cyberhaven to address this problem? So, just painting the picture of what the issue actually is: traditional security tools tend to focus on files. Both DSPM and traditional DLP tools scan data at rest, files at rest, scan files as they traverse the network, and scan files as they meet some specific criteria, maybe defined in a policy on exfiltration, at that point in time. Consider scenarios like somebody taking a fragment of data from a Word document or a PowerPoint presentation or a BI platform and copying and pasting some of it into your internal project management solution, like Notion or Jira. Everybody in the organization has access to Notion. One of your users who maybe shouldn't have access to a particular workspace grabs that sensitive proprietary information in the form of that snippet and copy-pastes it somewhere else. Maybe that's a file. Maybe it's directly into an LLM in a web browser. Maybe they paste it into a message and send it over Slack or Teams to another person in the organization. At the end of the day, the reality is that most knowledge and data and information traversing modern organizations is not full files anymore. And if you're relying on that, it's really challenging. So what we're doing is monitoring all data in use, all data in motion, all data at rest, all of the time.
And as soon as somebody copies and pastes that fragment of data from the source to its destination, we understand all of the contextual metadata. Who did it? What's their role? What was the source of the data itself? Was it in the top-secret folder in your corporate SharePoint? We inspect the contents of that clipboard, for example, and we can do, like we mentioned, your traditional content inspection as well as AI categorization. And we're able to stitch it together as it traverses. We better understand the data because we have all of that lineage, all of that context, and that allows us to identify fragments of data and catalog those, as opposed to files as a whole, and stop those fragments from being moved to an unsanctioned destination or used in a manner that's inappropriate, or just monitor them; it's configurable.
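As a rough sketch of what capturing a copy-paste fragment with its contextual metadata might look like, assuming hypothetical names rather than Cyberhaven's event model:

```python
# Minimal sketch: recording a copy/paste of a data fragment with its
# contextual metadata so lineage survives even without a file.
# Names are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FragmentEvent:
    user: str
    source: str        # where the snippet was copied from
    destination: str   # where it was pasted
    content_hash: str  # fingerprint of the clipboard content
    parents: list = field(default_factory=list)  # prior lineage nodes
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# A snippet copied from a sensitive deck and pasted into a project tool:
event = FragmentEvent(
    user="dave",
    source="sharepoint:/strategy/top-secret/plan.pptx",
    destination="notion:project-x",
    content_hash="9f2c1ab4",
    parents=["sharepoint:/strategy/top-secret/plan.pptx"],
)
# Downstream, classification and policy run on the fragment plus this
# context, not on a whole file.
print(f"{event.user}: {event.source} -> {event.destination} at {event.at:%H:%M}")
```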
Next one for you, Prashant. Alright, I'll take it. Thanks, Chris. A couple of questions; I'll combine them together. Current DLP solutions very often misclassify data. Would Cyberhaven help improve that? How can you ensure accuracy? And some of the questions relate to roadmap capabilities, so I'll touch on all of those together. We analyze the content, and more importantly, we understand the context, not just patterns. This is what Dave and Chris went into in some detail. We understand provenance, so where that data came from, whether it is a corporate account or a public repository. A document full of Social Security numbers and bank account numbers could be a user's tax return. On top of that, we use AI with semantic understanding, so we know what the data really represents. To take a step back: we started with lineage on DLP, so we have a deep understanding of data and the end-to-end journey of that data as it moves. With DSPM, we now have end-to-end visibility as data moves across endpoints, from endpoint to cloud or on-prem data repositories, or from cloud to endpoint. This gives us much deeper context: AI semantics, ownership, and the content of the document. That is what allows us to cut down false positives and false negatives, because at the end of the day, that's what our customers want: to be looking at real things, prioritized according to risk. We'll go into some more questions, but I want to take an opportunity to emphasize that this is the GA launch of our product capability. We have a dedicated team that is continuously adding a lot more capabilities. In the interest of time, we won't be able to go through each and every one of those, but we definitely encourage you to talk to your AEs or CX team. The PMs, and Chris, would love to have that conversation about what we support today, what your use cases are, and what our roadmap capabilities look like. We'd love to get into those deeper discussions. I'll let you take the next one, Chris. Yeah, there's quite a few. How do you address alert fatigue? There's a few ways. Typically, alert fatigue is caused by something improperly configured or something that's ineffective at classifying data accurately. I've been doing DLP and data security in production enterprise environments for almost fifteen years now, and almost always there's an approach that is ineffective at classifying the data itself. So you have a policy that says PII can't go to webmail. Well, somebody uploads their W-2; it contains a Social Security number, and that trips the DLP alert. We've talked about context a lot. That context, all of the metadata, where did that file come from, which user created it, in addition to feeding that context into our AI analysis engine, lets us reduce false positives. There was a scenario we mentioned earlier in the webinar where a user tripped the policy for putting source code into a personal ChatGPT account. Understanding the context of that data, whether the source code originated in an internal proprietary GitHub repo versus a public GitHub repo, and layering in additional context about the actual snippet: is this actual intellectual property that aligns with product designs, or is it some snippet that matches source code based on a regex pattern but is not sensitive to the company? We feed in all of that context, and our AI engine assigns a severity to any policy violation. If something trips a policy, AI can increase the severity if we believe the policy was set lower than it should have been. And on the flip side, it will also set things you don't need to consider important to a lower, informational priority. So when analysts are triaging incidents, one of the first things they do is filter out all of the incidents marked informational, and what you'll find is that almost 70% of incidents to review go away. You can then triage the ones that matched the policy and that AI deemed important. Additionally, we surface incidents that are not defined in policy at all. We baseline data usage within your environment, down to the data level, the user level, the department level, as well as the organization as a whole, and we understand when data is being used in a way that is anomalous. When we detect an anomaly, we send it for additional assessment: what was the content, what was the context of the event? We feed all of the known context we have about the event to our AI analysis agent, and it will surface an incident that was not matched by a policy. So it goes both ways.
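A minimal sketch of that re-scoring idea, with simple rules standing in for the AI assessment; the names and rules are illustrative assumptions, not the actual engine:

```python
# Minimal sketch: re-scoring policy hits with extra context so analysts can
# filter out informational noise. The rules below stand in for an AI
# assessment and are illustrative assumptions.
incidents = [
    {"rule": "source-code-to-ai", "origin": "git:internal/product",
     "looks_trivial": False},
    {"rule": "source-code-to-ai", "origin": "git:public/github.com/foo",
     "looks_trivial": True},
    {"rule": "pii-to-webmail",    "origin": "web:irs.gov",  # user's own W-2
     "looks_trivial": True},
]

def rescore(incident: dict) -> str:
    """Raise or lower severity based on provenance and content context."""
    internal = incident["origin"].startswith("git:internal")
    if internal and not incident["looks_trivial"]:
        return "severe"
    return "informational"   # safe to filter out during triage

for i in incidents:
    i["severity"] = rescore(i)
queue = [i for i in incidents if i["severity"] != "informational"]
print(f"{len(incidents)} raw hits -> {len(queue)} to triage")
```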
A few more questions. Go ahead, Prashant. You're on mute. I will take the next two questions. Thanks, Chris. Two questions. One is: can I build my own custom classifiers? Absolutely, and that's a great question. You get a lot of out-of-the-box classification capabilities with the DSPM solution, and you can tune them to your needs. These are natural language prompts, so you define the prompt yourself. You don't need to train these models with a lot of your data; instead, you say, here is my prompt, put some files in, and see whether the prompt is working according to your needs. Over time, you may find some inclusions or exclusions are needed, and those are very easy to operate. What I want to emphasize is that no regex expertise is needed for this. It's very easy to operate, and we would love to have you see the demo, see the product in action, and run a POC. The second question is about licensing and how to buy. The DLP and DSPM products are not dependent on each other. Customers can start their journey with DLP or DSPM. If you have both products together, you get more visibility as data moves from endpoint to cloud and cloud to endpoint. We are able to correlate the activity that we see from the endpoint, which means we can uncover activity in your SaaS apps coming from unmanaged devices, and we give you end-to-end lineage as data traverses. So those additional use cases come to fruition with both working together, but they don't need each other. Another question was: does the solution require inline network insertion, and how does it work in hybrid infrastructure scenarios? We definitely support the hybrid infrastructures that customers might have. At GA, we are launching with SaaS, IaaS, and PaaS support, and on-prem capabilities as well. We designed it to support your end-to-end data visibility needs for both data at rest and data in motion. We hook into all of your cloud apps, irrespective of whether they're SaaS, IaaS, or PaaS, via APIs, and that's how we are able to provide you visibility and mitigate the risk. Chris, you want to take the next one? Yep. And I think we've got about a minute left, so this will probably be the last one. What is setup like? How fast is it to get up and running? Incredibly quick. You select the connectors that you would like to onboard from a roster of IaaS, PaaS, and SaaS connectors. So you might select, say, Slack and Google Drive; you might have Snowflake, or integrate Exchange. All of the connectors have historical data-at-rest scanning as well as go-forward scanning. Typically you prioritize your high-priority data stores, and you have configurability around: do you want to scan the whole thing, or do you want to scan particular file paths or file types or file sizes using include/exclude logic? You log in with an account that has the appropriate permissions for the given service, and you're done. You click save, all of the AI classifiers and AI policies are prebuilt out of the box, and the scan starts immediately. It will traverse all of the data that already exists at rest. And as new data is created or accessed or modified within those target services, we scan it within minutes on a go-forward basis, so that will also surface within the environment. We do have customers, right; we've been offering endpoint DLP for years. This is a flywheel, and the insertion point, starting with DSPM versus DLP, is a discussion that depends on your needs and your use cases. But many of our customers love our approach of data-in-use discovery on the endpoint, because your users are going to lead you to the sources of sensitive data they're accessing. If you don't know which data source to prioritize, they will show you the ones they're interacting with every single day, and you can ensure that you layer in classification and control around the data that's most at risk, which is the data that's in use and in motion by your users, as opposed to data at rest within your data stores.
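As a rough sketch of what scoping a connector scan with include/exclude logic might look like; the keys are illustrative assumptions, not the product's actual configuration schema:

```python
# Minimal sketch: scoping a connector's scan with include/exclude logic,
# file-type and size limits, plus historical and go-forward modes.
# Keys are illustrative assumptions, not the product's config schema.
scan_config = {
    "connector": "google-drive",
    "modes": ["historical", "forward"],       # back-scan + scan-as-created
    "include_paths": ["/Finance/", "/Legal/"],
    "exclude_paths": ["/Finance/Archive/"],
    "file_types": [".docx", ".xlsx", ".pdf"],
    "max_file_mb": 250,
}

def in_scope(path: str, size_mb: float) -> bool:
    """Decide whether a file falls inside the configured scan scope."""
    ok_type = any(path.endswith(t) for t in scan_config["file_types"])
    included = any(path.startswith(p) for p in scan_config["include_paths"])
    excluded = any(path.startswith(p) for p in scan_config["exclude_paths"])
    return ok_type and included and not excluded \
        and size_mb <= scan_config["max_file_mb"]

print(in_scope("/Finance/Q3.xlsx", 12))          # True
print(in_scope("/Finance/Archive/old.xlsx", 12)) # False: excluded path
```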
I think we're up on time. Prashant, any more questions on your side, or is that it? I think that is it, Chris. Really appreciate everyone's time. We do have some unanswered questions, and we will get back to you on those. Please stand by; in a day or so, the CX teams will be reaching out to you to make sure that all your answers are available from us. I really appreciate everyone's time and interaction. Thank you, everybody. Reach out if you'd like to see a demo or learn more. Appreciate your time.