Video: Cyberhaven Unlocked: Linea AI for Analysts | Duration: 2552s | Summary: Cyberhaven Unlocked: Linea AI for Analysts | Chapters: Introducing CyberHaven Unlocked (9.04s), Incident Analysis Demo (201.57s), AI-Powered Incident Detection (727.34s), Context-Aware Classification (1286.955s), Linea Implementation Strategy (1562.6599s), Unmatched Traffic Detection (2424.86s), Closing Announcements (2459.17s)
Transcript for "Cyberhaven Unlocked: Linea AI for Analysts":
Hey. Welcome, everyone. By way of introduction, my name is Amy Gurell. I'm the director of professional services here at Cyberhaven, and I want to thank you for joining our first webinar in our new series, Cyberhaven Unlocked. Today, we're going to show you how Cyberhaven Linea AI helps security teams move from manual review and policy tuning into a world where AI agents handle the heavy lifting: finding the unknown risks, explaining what happened, and helping you respond faster. I'm going to review a little bit about Linea AI, and then I'll pass it off to one of our senior data analysts, Nick Dobbs, to walk through the console with some live examples. At a high level, Linea AI delivers three outcomes for every customer. It helps analysts resolve incidents faster by summarizing what happened and why. It detects unknown risks that policies might miss, using our data lineage intelligence to see how information normally flows. And it prevents future incidents, giving you context to refine policies so that you're not chasing the same alerts twice. And when we say AI agent, what we really mean is a set of capabilities that act on your behalf. They never sleep, they never forget, and they're tuned to how your organization actually moves data. Under the hood, these results come from three interlocking components. The Cyberhaven Large Lineage Model, or LLIM, is unique to each customer; it learns your data flows and flags behaviors that don't fit. The policy engine uses your rules and datasets to define what matters. And the LLM analysis layer reviews both policy incidents and anomalies to assess severity and produce a clear natural-language summary. The key takeaway here is that Linea AI operates within your tenant. No data leaves your environment, and nothing is used to train external models. We describe Linea as having two AI agents working side by side.
The detection agent reviews every event and applies what we call semantic risk detection, reasoning about the meaning of user actions to find real threats and cut false positives. The analyst agent automatically investigates confirmed incidents, analyzing context, screenshots, and behavior history to tell you what happened, why, and what to do next. Together, they automate the manual work of a tier-one analyst. So those are the three pieces working together: detect the unknown, analyze the known, and learn from every case to make policies smarter. Now, to take us into the product for examples of how those agents actually work day to day, and most importantly, how to use Linea as an analyst to investigate, tune, and hunt in real time, I want to introduce Nick Dobbs, one of our senior analysts here at Cyberhaven. And just a note: if you have any questions as Nick goes through these, please put them in the Q&A, and we'll have time for them at the end. Thanks, Amy. Let me transition us into the console. One moment. Alright. What you're looking at right now is one of our Linea demo tenants, an environment that we use internally to test new features. From here, we'll start walking through several incident examples: one where Linea triages a normal policy incident, one where it finds something completely outside of policy, and one where it helps make a noisy policy smarter. We'll also cover some best practices, architecture theory, and a couple of upcoming features toward the end. My overall goal here is to give you a clear picture of how Linea's agents support you and each other throughout your data protection journey. Right now, we are looking at the risks overview page. This is where I like to land in any new tenant to get my bearings. This view gives you a clean mapping of datasets to the policies that are monitoring them, along with the volume of raw events under each.
Think of it as a pulse check, a quick way to see which areas of your environment are busy and where your attention might be needed. You'll notice that some policies, which I've highlighted here for ease of viewing, have a little shield-and-star icon. That means "let Linea decide" is enabled. For these policies, Linea's analyst agent will automatically review each event behind the scenes and only promote the ones it deems high or critical risk to the incidents tab. It's basically policy refinement in real time. If you've ever had one of these monitoring policies flood your queue with routine matches, like in this case every download to an endpoint, this is what keeps those from turning into a wall of meaningless alerts. Everything the AI suppresses still exists here as a raw event, so you never lose visibility. You just don't have to triage it unless you choose to. Again, this page gives you a quick sense of where Linea is active across monitoring policies and which high-volume flows might require some deeper attention. Alright. We're now going to go into a couple of these incidents and look at what the analyst agent actually produces for you every time a policy is matched. What we're looking at here is a policy incident that matched our "flows to unsanctioned cloud storage" rule from the customer data dataset, which triggered our Linea analyst to build out the assessment you're seeing on your screen. Before we dive in, it's worth calling out that what you're seeing is included in the advanced tier of Cyberhaven's license. Once you have it, every single policy violation in your environment automatically gets this same level of AI triage. Up top, we have the AI-assessed severity: critical. Next, we have the general summary, which is your "what happened here" statement. It always calls out both the provenance and destination of the data.
We will revisit how those tags are generated when we get into data labels. It's pretty cool stuff, though. In this case, our provenance is corporate employee records, and the destination is personal cloud storage and documents. Together, these tags anchor Linea's severity judgment. In this example, you can see that the user Derek Udernoff uploaded a ZIP archive containing sensitive client data to a personal Gmail account. That's what pushed this to that critical level of severity. Next, we can see a data summary. This is where Linea explains how it reached its conclusion about the data's provenance and risk. Because this policy includes content inspection, Linea is able to combine the lineage context you see here with the actual content of the file, plus some event metadata, to reach this conclusion. And, again, since we have content inspection, we're able to view the full content, which you can see on your screen. This is important because it allows you to sanity-check the AI's reasoning, i.e., was this really PII, or an edge-case false positive based on some strange regex match? Underneath all that, we have the destination summary, which shows you where the file was headed, be it a corporate system, personal cloud, or, in this case, a personal Gmail account. You'll also see whether the transfer succeeded or was blocked, depending on your policy configuration. One quick clarification here: Linea itself is not performing enforcement. It can't block or warn on its own, and that is intentional. Linea's job is to analyze and prioritize, not surprise your end users. We leave the blocking and pop-ups to the policies that you manually create. Under all this is usage insights. This is where Linea exposes the underlying large lineage model data that it used as part of this assessment. It generally gives you a one-sentence snapshot of historical behavior.
Like in this example: in the past ninety days, the company regularly moved data to drive.google.com, but not this user in particular. And we can click into that. In general, what you get here is transparency into the AI's reasoning, and it includes information about the user, the department that the user is assigned to, and how the overall company uses this destination. Again, this is informed by directory mappings, and we'll get to that in the architecture section. The next and final piece is the escalation suggestions. You get three options here for every event: ask the user to explain the incident, ask the manager to review it, or notify HR. Clicking any of these opens a pop-up where Linea can draft a context-appropriate message for you, pulling from the same summary that you're observing. So if we do that real quick, give it a moment to generate. Cool. You can take this summary and simply copy-paste it into the email or ticketing system that you're using to contact the end user. So when you think about this from the analyst perspective: the summary tells you the story, the provenance and destination fill in the ownership and direction, the data and destination summaries show the what and the where, usage insights explain whether it's normal activity, and the escalation actions allow you to remediate quickly. Now that we've seen how to validate Linea's reasoning, let's use that context in practice. This is a filtered view of policy incidents where the human policy severity is set to critical or high, but Linea has updated that severity to medium or lower after its assessment. You can also pretty easily see this in reverse, where you have lower human severity and a higher Linea-assessed severity. I would also like to note that what we're looking at is a demo environment.
In production environments, we typically see a much higher proportion of severity downgrades by Linea, so keep that in mind as you're looking at these examples. A general rule of thumb: when Linea is consistently downgrading a policy, it's a cue to narrow the scope of that policy, and when it upgrades the severity, it's probably a good idea to investigate and perhaps tighten up your controls a little bit. That's one way you can start using Linea as a feedback loop and not just a filter. Next, we're going to look at what happens when Linea Detect gets involved and finds something without any policy involved at all. That's where it really starts to shine. This detection came straight from a nonconforming event given the context of the large lineage model, which is unique to each tenant. These live under Linea AI incidents on the incidents page. Just a quick note on this: Linea Detect and "let Linea decide" represent the next tier of the Cyberhaven license and are thus part of the enterprise package, so they're additive on top of the Linea analyst. Now, you might notice that each one of these incidents has a dataset and a policy with a little three-star icon. This is Linea's way of showing you that it did its own classification on this event, and it's showing you what dataset and what policy it would recommend to cover the gap it identified. Again, it's not enforcing rules. It's showing you where your blind spots are. Something else to note on the subject of the LLIM: from the moment your environment goes live, Linea starts training your tenant's model with whatever lineage data it has available. So for new tenants, the model starts learning right away, and you can expect to hit peak accuracy once you have about a hundred eighty days of data history.
Of course, it will surface meaningful insights before that point, but we like to leave that as a safeguard. For existing tenants, if you have any historical data at all, it begins training immediately on the full history of your environment. From there, it retrains monthly, adding each new month's worth of data on top of its existing model. So in effect, it's an additive enhancement. The LLIM learns what is normal for your organization, which users, apps, and destinations interact, and then flags movements that break those patterns. When it finds something nonconforming, however, it doesn't just alert you. It feeds that event to the same Linea analyst agent that we observed earlier. That analyst agent will read the full lineage, classify the provenance and destination, and generate a severity score and a written summary. Then, only if the severity the analyst has generated crosses the high or critical threshold does the event get promoted into your incident queue. This is the reason that detect incidents can be lower in volume but are almost always worth taking a look at. This one's a great example. In this particular incident, we have the user Allison Felice uploading a text file named encrypted_key.txt to a personal Google Drive account. Right away, the AI summary calls out the essentials. We see provenance tagged as corporate credentials, and we see the destination tagged as personal cloud storage and documents. The AI severity here is again critical, and when we open the data summary, it becomes clear why. You can see here that we have a straight-up private key in plain text. The destination summary confirms that the upload went to a personal Gmail account, while the usage insights note that although the company as a whole often does send data to Google Drive, this user in particular never has. And as always, you've got the same escalation options below.
Ask the user for an explanation, escalate to the manager, or notify HR. It's this deviation, a factor of content plus context, that gives Linea high confidence that this was an event of real risk and not just noise. And here's what I like to tell analysts about how to think about detect incidents. The regular policy incidents we just looked at confirm what you already know; they validate your written rules. Detect incidents show you what you didn't consider, movements between those rules, and that's great for two main use cases. The first is threat hunting: detect gives you pre-filtered leads that already carry good narrative context. The second is policy discovery, which is the process of turning repeat patterns surfaced by detect into new datasets or policy rules. And I like to say that even if you only check Linea AI incidents once a month, start with anything marked high or critical on AI severity. From there, if you find something legitimate, that's a great opportunity to tighten up a policy or to define a new dataset, or perhaps both. Over time, this will make your model much more efficient, because as you increase the coverage of those policies and datasets and make them specific, Linea Detect will have less surface area to cover and therefore give you more accurate insights into your blind spots. Okay. The third and final type of incident we're going to look at is AI-plus-policy incidents. These happen when a normal policy match, set to monitor mode, also gets an AI severity assessment through "let Linea decide." These are interesting because Linea is not finding anything new here. It's instead reevaluating something a rule already caught, but through the lens of full lineage and context.
I like to think of it like getting a second opinion from a really sharp analyst, one who's already seen the same pattern a hundred different times across your environment. When you enable "let Linea decide" on a monitoring policy, the AI doesn't just look at the file name or content that triggered it, although it will fall back to that if it lacks other context. Instead, it looks at the entire data flow, who touched it, what it was about (provenance), where it went (destination), and then asks: is this really worth surfacing? So in practice, it's rescoring existing policies based on context. This one is a great example of that. This example shows the user Tajvia Willis downloading a file called entity risk intel from Google Drive down to a local file system. The policy that triggered here is "downloads to endpoint," which fires anytime someone pulls data from any external location down to a workstation, i.e., quite noisy. But if you look at the AI summary up top, Linea assigned a high AI severity, while the base policy itself is set to informational. That delta matters. It's Linea telling you: hey, this rule caught something real. In this case, our provenance tag is corporate security reports, and the destination is an unmanaged endpoint, meaning it's outside of company control. This combination is why Linea bumped the severity: the rule itself is broad, but the context of this specific match was determined to be risky. If we look at the usage insights, we can see that this user has never moved data to this unmanaged endpoint, which looks like a mobile device from the IP address there. If we consider the inverse, had this been a managed endpoint, it's highly likely that Linea would have recognized it as expected and quietly suppressed it. This is what we mean by policy refinement in real time.
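To make that rescoring idea concrete, here is a minimal sketch of the promotion logic just described: an AI-assessed severity either surfaces an event to the incident queue or leaves it behind as a raw audit record, and the gap between human and AI severity becomes a tuning signal. The class names, severity scale, and threshold here are illustrative assumptions, not Cyberhaven's actual implementation.

```python
from dataclasses import dataclass
from enum import IntEnum

class Severity(IntEnum):
    """Illustrative severity ladder; only the ordering matters."""
    INFORMATIONAL = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4

@dataclass
class Assessment:
    event_id: str
    policy_severity: Severity  # severity the human-written rule carries
    ai_severity: Severity      # severity the analyst agent assigned

def should_surface(a: Assessment, threshold: Severity = Severity.HIGH) -> bool:
    # Promote to the incident queue only when the AI-assessed severity
    # meets the threshold; everything else stays behind as a raw event.
    return a.ai_severity >= threshold

def rescoring_signal(a: Assessment) -> str:
    # Feedback-loop cue: consistent downgrades suggest narrowing the rule,
    # upgrades suggest investigating and tightening controls.
    if a.ai_severity < a.policy_severity:
        return "downgrade"
    if a.ai_severity > a.policy_severity:
        return "upgrade"
    return "agree"
```

The same threshold check covers both cases Nick walks through: a Detect anomaly only reaches the queue at high or critical, and a "let Linea decide" monitor match is either suppressed or promoted on the rescored severity.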
The real power of "let Linea decide" is that it empowers you to take a high-volume audit rule and turn it into a clean, SIEM-ready incident feed, while every raw match remains fully auditable within the Cyberhaven tenant if you need to review it. In this demo tenant, for the downloads-to-endpoint policy alone, we saw roughly a 73% reduction in surfaced incidents, which is very much in line with the results we get in real customer environments. That's what makes it such a powerful tool, right? It gives you the capability to keep your compliance trail intact, strip out noise before it hits your detection stack, and make sure that whatever does land in your downstream SIEM or SOAR is already triaged and ranked by risk, not just raw logs. So, in short, "let Linea decide" quietly does the hard work of connecting policy detection, AI context, and downstream automation into one clean flow. One other thing I want to show you here is that you have the ability to look at all of these AI-plus-policy incidents. You can see 39 of these for the downloads-to-endpoint policy against 144 actual hits, so only about 27% of raw matches were surfaced, which is where that roughly 73% reduction comes from. Pretty good results. Now I want to talk about data labels, because this is where all of the context actually originates. This is a new feature for Linea, and it forms the backbone for everything in Linea and even the broader DSPM platform we're building towards. It's what lets us move beyond static keyword-based classification into something that is truly context-aware, where the platform actually understands what your data is, where it came from, and, more importantly, who it belongs to. There are a few different modes of classification that live under this umbrella, but they all share the same goal: make data labeling fast, flexible, and human-readable. So let's start with data types.
These are the classic "what kind of document is this" labels, things you can see here like legal contracts, customer feedback, code. But instead of having to build massive training sets or write complex regex rules, you can define them in plain English. In the case of code, for example, you can simply define it as software code, scripts, algorithms, code repositories, and note that matches must be actual code or a photo of code. The model understands that naturally because it's backed by a large language model with broad world knowledge. It knows what code looks like, how it's structured, and what distinguishes it from something like a security report. And you can get quite granular with this. You can specify particular languages, potentially even certain kinds of functions, things like that. Again, the real power here is the semantic classification. And the best part is that once you define a label, you can test it right here in the console. So if we open this test button and upload some sample data, we can let that run for a moment. What's cool about this is that it's designed to be quick and iterative, and you don't need to be a data scientist. Alright, in this case it got classified as other data, which is interesting, but it's a good way to validate whether or not you've written good labels. The next thing we want to talk about is data provenance. This is where Cyberhaven really starts differentiating itself, because we're not just looking at what a document is, but where it came from and who it's about. It's what helps the AI reason about ownership and intent. A simple example: every tax season, you've got employees who send themselves their own W-2s, usually from a work machine to their personal email. Traditional tools will simply see a financial document going to a personal destination and panic, flagging it as data exfiltration.
But with provenance, Cyberhaven can distinguish that the W-2 belongs to that same employee, that the content of the W-2 is about that user, and that it's being handled by the rightful owner, and therefore it lets it go. However, if that same person were to send someone else's W-2, then the provenance won't match, and that's when it gets flagged as corporate data leaving the environment. That's the magic of context. It brings real-world reasoning into your classification, and we're just getting started. The next piece we're going to roll out, it's not in this build but it's coming soon, is automatic taxonomy, which will group data into thematic clusters, things like projects, case files, or initiatives, automatically. Instead of asking analysts to hunt through hundreds of files to figure out what's important, Cyberhaven will simply surface it for you. You can think of it as data discovery meets project awareness. Tying all of this together will be the also-upcoming AI assistant. It'll be the interface layer that lets you create, test, and explore labels using natural language. The intent is to enable you to ask questions like, "show me all of the data labeled customer contracts that's left our environment this week," or even, "create a label for training datasets related to ML models." The goal is to turn complex data governance into a simple conversational process so everyone, not just power users, can benefit from the full platform. So, again, when you think about data labels, I want you to think of them as the foundation that powers everything else within Linea and the broader Cyberhaven ecosystem, the connective tissue, really, between content, context, and classification. They're what make context-aware AI possible, and the key to scaling risk detection without drowning in endless manual rules. Alright. Now let's switch gears a little bit.
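Before switching gears, one quick aside: the W-2 provenance reasoning above boils down to a subject-versus-actor comparison, which can be sketched in a few lines. The field names and the allow/flag outcomes here are hypothetical simplifications of what the platform infers from lineage, not its real data model.

```python
from dataclasses import dataclass

@dataclass
class DocumentContext:
    subject: str            # who the document is about (e.g. the W-2's employee)
    actor: str              # who is moving the document
    destination_owner: str  # whose personal account it is headed to

def provenance_verdict(doc: DocumentContext) -> str:
    # A user sending their own document to their own personal account is
    # treated as rightful-owner handling; any mismatch is flagged as
    # corporate data leaving the environment.
    if doc.subject == doc.actor == doc.destination_owner:
        return "allow"
    return "flag"
```

In this toy model, Alice emailing her own W-2 to her own Gmail passes, while Alice sending Bob's W-2 anywhere personal gets flagged, which is exactly the distinction keyword-only tools miss.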
I want to talk more broadly about some architecture best practices, because we get this question a lot: can I simply flip on Linea everywhere? The short answer is yes, you can. But I would argue that the smarter move is some kind of phased rollout. Here's what I would recommend. The first step is to ensure that you're feeding Linea clean data. The analysis it does is only as good as the signals it can observe, and that means keeping both lineage and directory data healthy. On the lineage side, you need to make sure that your endpoint agents, browser extensions, and cloud connectors are all up to date. If any of them go dark, you might start seeing broken event chains, and that will limit the accuracy with which Linea can reconstruct events. The other half of the equation is directory context. We use your identity provider, typically Microsoft Entra or the equivalent in your environment, to understand user relationships and org structures. That context is what powers the activity insights and usage insights that you see within incidents, things like we saw earlier: this department has never sent data to this domain. Without that directory link, those insights will still appear, but they'll be far less informative. You might be lacking peer-group comparisons, for one. So think of it as feeding both halves of Linea's brain. The lineage data shows what happened; directory data explains who it was and how unusual it was for that identity. When both are healthy, Linea's summaries are sharper, and its findings are easier to act upon. The next step is fairly typical, but we recommend that you start with policies that protect crown-jewel data. Those are typically going to be warn or block policies that you already know carry real business impact.
From there, Linea's automated analyst will step in with those existing policies to evaluate them, summarize them, and assign severity. You'll see fast payoff there and speed up triage. At the same time, we recommend enabling "let Linea decide" on your broader monitor policies. It's a quick way to see how much noise Linea can actually filter out for you, and it'll clean up your queue downstream. And while all that's happening, the detect agent runs in parallel, quietly learning your normal data flows and surfacing nonconforming movements that aren't tied to policy. So it watches your back even as you're fine-tuning. Thirdly, you need to consider downstream integrations. When you connect Cyberhaven to something like Splunk, Chronicle, or Cortex, any of these SIEM or SOAR platforms, you'll want to account for how incidents flow out of the platform. That's going to be over here under integrations. A couple of things here: you come into settings and look at the integrations, let's say Tines in this case, and we'll select incidents, because that's what you're going to want to export. You'll need to include AI summaries, so we check that box, and also subscribe to incident updates. With these settings enabled, you'll get an initial log as soon as the policy triggers and creates an incident, and a second one a few minutes later after Linea finishes its analysis and adds the summary. That way, you can ensure that you're getting fully enriched events downstream, but you do need to account for the fact that you'll get two events per incident. Additionally, starting with platform version 25.09.01, Cyberhaven introduced a bidirectional incident API that closes the loop between Cyberhaven and external tools. You can see that right here: come into administration and look at the API specification. This will depend on your Cyberhaven tenant version.
If you don't see it, you would need to upgrade your tenant. What it does is allow your SOAR, SIEM, or ticketing system to change incident status directly within the Cyberhaven console using a simple PATCH call. You can do things like update the status or close reason, add notes, assign ownership, and a few other things. This is how you keep Cyberhaven and your external systems in sync: eliminate duplicate queues and remove stale open incidents hanging out in the console. When it's all connected, it makes your workflow feel much more seamless, and it has the downstream effect of ensuring that your analysts spend less time bouncing between tools and more time remediating. Step four is to use exclusions intentionally, and we'll get to that right now. This is the Linea exclusions page. This is where you decide what Linea should ignore outright, and it's really the governance side of the house. This exclusion engine is the same policy engine that you build datasets and policies with, so for those of you familiar with it, you can get quite granular. Most teams in the wild use it to exclude trusted partners or subsidiaries, or to honor privacy or compliance carve-outs where AI review simply isn't appropriate. Think of it as your tuning layer. If you're seeing some noise from Linea, you can come in here and exclude whatever it's looking at that you know to be a false positive. One final thing on this: exclusions apply globally to Linea Detect and the automated analyst. So if you exclude something here, neither agent will analyze it, even if it does generate a valid policy match. Do be intentional, because you're defining the edges of what the AI system can see. Alright.
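For reference, a call to the bidirectional incident API mentioned a moment ago might be assembled like this. The route, field names, and auth scheme below are placeholder assumptions for illustration only; the authoritative request shapes live in the API specification page inside your tenant.

```python
import json
from typing import Optional

def build_incident_patch(base_url: str, incident_id: str, api_token: str,
                         status: str,
                         close_reason: Optional[str] = None,
                         note: Optional[str] = None) -> dict:
    """Assemble the pieces of a PATCH request for the incident API.
    The route and field names below are hypothetical placeholders;
    consult your tenant's API specification for the real ones."""
    body: dict = {"status": status}
    if close_reason is not None:
        body["close_reason"] = close_reason
    if note is not None:
        body["notes"] = [note]
    return {
        "method": "PATCH",
        "url": f"{base_url}/incidents/{incident_id}",  # hypothetical route
        "headers": {
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps(body),
    }
```

A SOAR playbook would hand this payload to its HTTP client after an analyst closes the ticket, so the matching incident in the Cyberhaven console closes too instead of sitting stale.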
Before we wrap up the hands-on part, I want to zoom out even further and close with how to think about Linea day to day, because we see it as valuable in two very different ways. Tactically, think about Linea as a time multiplier for your team. It's there to speed up triage and investigation, highlight unknown unknowns, and filter out noise from broad audit or monitor policies with "let Linea decide." On the strategic layer, Linea becomes your feedback loop. It shows you where your policies and datasets might need tuning when you compare human-defined policy severity to AI-assigned severity. It helps you discover new datasets and policies and close coverage gaps based on what the tech is surfacing. It gives your SIEM and SOAR richer context, so alerts upstream already come with a story, and it helps you maintain a healthy governance model: clear policies, intentional exclusions, and clean lineage. Put simply, Linea isn't there to guess. It's there to maintain a living map of how your data behaves and help you separate what's truly unusual from business as usual. And that's the quick tour of Linea in action. So I guess, final closing thoughts: keep in mind that Linea is improving in accuracy month over month, and you get better outcomes as you tighten up what it can and can't see and how your datasets and policies are configured. If you haven't already, I recommend that you talk to your CSM about enabling Linea on a trial basis within your tenant. Now, we'd like to open things up for some questions. Thanks, Nick. There are a few questions in the chat, so I'm just going to read you the first one here. Does Linea automatically generate incidents for block policies? It depends, but yes. It will generate incidents for block policies if you have the Linea analyst, so any policy match at that point. Okay. Two quick questions here.
One: you mentioned keeping agents up to date. Will it work if you're running on n-1 or n-2? Yes, it should. I believe the endpoint agent version itself won't affect whether or not Linea is working, but it will potentially affect the quality of the event data that Linea is basing its analysis on. So the more up to date your endpoint sensors, and really all of your sensors across the board, the more reliably we can expect Linea to perform. Great. And the second part of that question: you mentioned an AI chat interface. Can this be used to help create and implement protection policies? That's a good question. I'll have to go back and talk with product about that, but I believe that's the intention. I just don't want to pigeonhole them. Okay, we have someone in product that might be able to help us too, so we'll circle back on that question. And then: how do we define what is considered internal data? Gotcha. That's where the provenance tags come in. You should be able to find pieces of internal data and run tests on them in the data labels tab within your tenant, and determine whether or not those are going to be flagged as personal. Linea does a pretty good job of it out of the box, so it really shouldn't be a huge factor as long as it has the ability to inspect the content. Great. How are you measuring improvement over time? I think that's a great question, and I would like to go back to product and get some very specific details on that. But in short, we do have some usage metrics and stats that are tracked on the back end. That is a little bit above my pay grade, though. If we enable "let Linea decide," can we lose visibility on our lower-severity events? No. You'll still retain full auditability on all events that "let Linea decide" is observing. The main difference is, well, okay.
Let me back up a little bit. We see a lot in production environments where customers will enable a monitoring policy to always generate incidents, and they do this because they want to export all of those audit events into a downstream SIEM or SOAR platform and retain the audit record there. With Let Linea Decide, think about it a little bit differently. Right? Instead of simply sending every auditable event of that class, you're now only sending the ones that a human might actually want to look at. You're still retaining them within your console, and you can go into the risks overview page, navigate to that policy, and see those events there just fine. Right? So you're not losing data by enabling Let Linea Decide. You're really just paring down that total volume into something much more manageable for your on-the-ground analyst team. Great. There's one question in here: are the best practices documented, or will this training be available to watch later? Yes. We are recording, and we will provide that link for you after the call. There's another question in here. Looks like some of you might be missing your data labels tab. So we will find out who your CSM is and have them reach out to you to make sure we can figure that out for you. And then there's another question: Linea classification, I assume it can scan content and then try to classify the type of document. Yep, it can do that. It has a number of capabilities that I didn't mention explicitly, including things like optical character recognition. So it'll actually do a very good job of pulling content out of even just screenshots. Right? I saw a couple of examples within this demo tenant where a very small part of a window that was screenshotted contained a one-time passcode, and Linea was able to pick that out and call it out within the summary.
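The audit-versus-forward model Nick describes for Let Linea Decide can be sketched in a few lines. To be clear, this is an illustrative sketch only: the field name `ai_severity`, the threshold, and the function are hypothetical and not Cyberhaven's actual schema or API.

```python
# Hypothetical sketch of the "Let Linea Decide" forwarding model:
# every event stays available for audit in the console, but only the
# events the AI layer scored as worth a human's attention are forwarded
# to the downstream SIEM/SOAR. Field names and threshold are illustrative.

def split_for_forwarding(events, min_severity=3):
    """Return (full audit record, subset to forward downstream)."""
    audit_log = list(events)  # nothing is dropped on the audit side
    to_siem = [e for e in events if e.get("ai_severity", 0) >= min_severity]
    return audit_log, to_siem

events = [
    {"id": 1, "ai_severity": 1},  # routine traffic: retained, not forwarded
    {"id": 2, "ai_severity": 4},  # flagged: forwarded
    {"id": 3, "ai_severity": 5},  # flagged: forwarded
]
audit, forwarded = split_for_forwarding(events)
print(len(audit), len(forwarded))  # prints: 3 2
```

The point of the design, as described in the session, is that the full event stream is still queryable in the console; only the forwarded volume shrinks.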
So, yes, it is looking at content wherever possible, but it does depend, in the case of a policy match, on whether or not you have content inspection enabled. So there are some caveats there. But in general, if you can see the content, Linea can also see it, and it will base its assessment off of that, or at least factor it in. And I think this might kind of answer that question as well. But one more question: can we pass in multiple files to train Linea for a new data type? For example, to recognize the format of an Excel spreadsheet for a new label. That's a great question. So you can totally run tests on sample files through the data labels tab. You would wanna make sure that you, of course, define the label that you want beforehand, and then test it there. If it's not matching, then refine the label description that you've given it. This is a feature that I think will see enhancement as time moves forward. Again, this is kind of the early days of data labels, so we encourage you to submit feedback wherever possible and help us help you. Okay. And we might have covered this a little bit, but the last question here that hasn't been asked yet: can Linea be configured to generate incidents solely on unmatched traffic? Yes. So that is going to be "when you detect." Right? And "when you detect" is going to look explicitly at events that are not matched by a policy, and only at those kinds of events. So, yeah, that is baked in, but it does require the enterprise-grade license. Great. So those are our questions so far in the chat. If you have additional questions, please let us know. Just to close things out here, a couple quick announcements. As a reminder, we're launching our DSPM beta program, so please mark your calendars and attend our DSPM early access program announcement, which will be on November 4.
And if you're interested in hearing more about that program, I believe we're gonna be sharing a link where you can click to sign up, or you can reach out to your CSM to get added as well. There are limited spots available, so act quickly if you're interested. Additionally, we're gonna have another Cyberhaven Unlocked on November 19 to discuss security for AI. In just the first twenty-four hours of ChatGPT Atlas's release, over 500 endpoints across 57 of our customers saw traffic to it. And with the AI landscape evolving at lightning speed, it's a really important topic right now. So please join us to hear Cyberhaven's approach to security for AI. And finally, you'll receive a survey after the call where you can suggest additional topics you'd like to hear more about in our Unlocked series. So please provide your feedback for us so we can make this series as valuable as possible for you. Thank you again for joining. And if you wanna see your first thirty days with Linea mapped out, your customer success manager can get that rolling for you. Thank you guys so much. Have a great rest of your day, everyone. Yeah. Have a great day, guys.