Video: Cyberhaven Unlocked: Using MCP & AI Skills to Optimize Your Cyberhaven Environment | Duration: 3560s | Summary: Cyberhaven Unlocked: Using MCP & AI Skills to Optimize Your Cyberhaven Environment
Transcript for "Cyberhaven Unlocked: Using MCP & AI Skills to Optimize Your Cyberhaven Environment":
Alright. Hello, everyone. Good morning. My name is Dave Stewart. I'm on the product team here at Cyberhaven. I'm really excited to give an overview and a demo today of some of the great work we've been doing to release our analyst plug-in, which comes with an MCP server that makes it really easy to interact with Cyberhaven data as part of workflows in the tools you're most likely already working in right now, either Claude Code or Codex. We'll go ahead and get started, and my colleague Nick will be joining me in a second to add some additional use cases to highlight. First, I want to start with a high-level overview of what this plug-in is and what it contains, then walk through a live demo of some of the interesting workflows you can do with it. I'll be taking questions; we have other folks online to help answer, and we'll leave time at the end for Q&A as well. We really want to make sure you get the most out of this and get excited about using the plug-in yourself as soon as the session's over. So with that, let me share my slides. Alright, I think this is coming through. Just a couple of slides at a high level, and then I think it'll be most useful to see this in practice, so I'll show you a demo in Claude Code. This is our analyst plug-in, which enables AI-powered security operations either within the Claude Code environment or the Codex environment. It works equally well in both. I use both, with a slight preference for Claude Code, but if you're a Codex user, an OpenAI user, you get all the same capabilities there as well. We have a number of skills that we've been developing over the past few weeks and months that are baked into this plug-in.
I'll give you some examples of how those out-of-the-box skills work and what they can do, along with the out-of-the-box agents we ship with. But I'll also show that the real power here is the flexibility these platforms offer, so you're not limited to what we ship out of the box. You can ask your own questions, launch your own investigations, and have it help create custom, curated workflows for yourself. So, a few high-level slides first. Why are we doing this? Why introduce this kind of use case at the conversational layer? Certainly all of you are probably familiar with working in the Cyberhaven console; there are a lot of capabilities in there already. But often you want a really quick path to an answer, and you may already be doing workflows within agentic tools like Claude Code or Codex. So how could you do your Cyberhaven workflows directly within those environments, just by asking questions and following your own line of thinking, instead of navigating menus within our console? This could be workflows like: I'm coming in on the first day of the week, looking at the incidents over the last week; how do I prioritize and review those quickly? Or how do I get an executive summary, do a posture check, or do trend analysis over a longer period of time without necessarily needing to build a custom report? Using agentic tools like Claude Code and Codex with the plug-in and the MCP server it ships with, we hope to enable and empower you to do some of these things yourself, which is pretty exciting.
And really, it's about having this conversation in these different platforms. Again, I'll show you the out-of-the-box experiences we have and some of the workflows we've already built in, to help you triage your queue, do posture checks, tune your datasets and policies, and handle a whole host of other workflows within the platform. As I mentioned, we ship dozens of workflows already. You can think of a skill as a natural-language workflow. One of the things that's so powerful about this platform is that the 52 skills we ship with are all just markdown files. They're all written in natural language; they're the instructions for Claude Code or Codex to follow the workflow that was designed and defined within that skill. And this is something you can review yourself: you can actually read the markdown, or have a conversation with Claude Code or Codex and ask, "What exactly is this doing?" and it can describe that for you. You'll see the skills cover almost the whole suite of use cases and capabilities you'd want in our platform: incident analysis, threat and insider risk for users, compliance around DSPM, policy tuning, reporting, and so on. A number of the skills we've shipped already can likely cover the initial workflows you want to do. And again, the big power here is that this is just the starting line. These 52 skills cover very common use cases that we experience ourselves in analyzing data with Cyberhaven.
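Because each skill is just a markdown file with the workflow written in natural language, an agent platform can discover and describe them with very little machinery. Here's a minimal Python sketch of that idea; the directory layout, frontmatter keys, and the skill file shown are hypothetical illustrations, not the plug-in's actual structure:

```python
# Sketch: enumerating plugin "skills" stored as plain markdown files with
# a simple YAML-style frontmatter header. Hypothetical layout, not the
# actual Cyberhaven plug-in format.
from pathlib import Path

def list_skills(skills_dir: str) -> list[dict]:
    """Return name/description pairs parsed from each skill's frontmatter."""
    skills = []
    for md in sorted(Path(skills_dir).glob("*.md")):
        meta = {"name": md.stem, "description": ""}
        lines = md.read_text().splitlines()
        if lines and lines[0].strip() == "---":      # frontmatter opens the file
            for line in lines[1:]:
                if line.strip() == "---":            # end of frontmatter
                    break
                key, _, value = line.partition(":")
                if key.strip() in meta:
                    meta[key.strip()] = value.strip()
        skills.append(meta)
    return skills

# Demo with a throwaway directory and one invented skill file:
demo = Path("demo_skills")
demo.mkdir(exist_ok=True)
(demo / "duplicate-cleanup.md").write_text(
    "---\nname: duplicate-cleanup\ndescription: Find and close duplicate incidents\n---\n"
    "## Workflow\n1. Query incidents over the chosen time window...\n"
)
print(list_skills("demo_skills"))
```

Since the body of each file is ordinary prose, the same files double as documentation: the agent can read them to answer "what does this skill do?" directly.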
We'd expect a lot of overlap with the use cases you might have, but you're not limited to just that. I'll show you an example today where, after you run these out-of-the-box experiences, you can continue the conversation and ask whatever question you want, and these agentic tools, Claude Code or Codex, can use the capabilities we're delivering in the plug-in. That includes an MCP server, which gives you access to all the data from the Cyberhaven public APIs, as well as the skills themselves as reference material, letting Claude or Codex see the common, curated best-practice patterns we built into those skills so it can create something entirely new and bespoke to your workflow. So again, we're really excited about what we'll be delivering as the out-of-the-box experience, but that's just the starting line; the potential here is limitless, and we're really excited to partner with you and hear about some of the interesting use cases you develop yourselves. But I think it's better to just dive right in and show some examples. I'll do my best to work around the time some of these agentic tools take to give answers. I have some results already queued up, but I'll still try to give you an interactive demo. I'll start with the Claude Code session. I already have some results from before, so I'm just asking it to reset my demo; I'll show you in a second what that reset means. Again, whether you're a Claude Code user or a Codex user, the plug-in works equally well.
There are slightly different install instructions for the different environments, and we'll make those available in our support portal so you can see how to do that. I'm already in a conversation here with some earlier results I'll come back to, but let me show you how to get started. We've developed a number of skills, and you can see there's a Cyberhaven start skill here that you can invoke as a skill, or I can just say, "Let's get started." One of the first things it should do, and the reason we reset my demo, is ask me what type of user I am. With these 52 out-of-the-box skills, there's a large number of workflows that may be better suited for SOC analysts or for other types of users. So we built this little onboarding experience to get you started. It asks what describes my role, and you can see options to pick, or I can type my own answer and just have a conversation. I'm going to pick SOC analyst to get started. Based on that, it'll go through and suggest potential next steps that might make sense. It saves my role; that's why I had to do that reset earlier, because it already knew I was a SOC analyst. So today I have a couple of different options: clean up duplicate incidents, review today's incidents, work the open backlog, investigate a user, and so on, or just describe in natural language whatever I want to do. Let's say I want to clean up duplicate incidents. It's going to start that workflow and should ask me more questions.
What time frame do I want to look at? Let's look at the last seven days. I believe there's another question from the skill; different skills may ask you different questions, but again, it all takes place in a simple conversation. I think this is actually launching it now, so while it's running, some quick background. The plug-in right now ships with these 52 out-of-the-box skills, 20 or so agents, and an MCP server. That MCP server runs locally on your endpoint to connect to the Cyberhaven APIs. In the release being built in June, which may come out in early July, that MCP server will just be part of your console. It'll be mcp.customername.cyberhaven.io; the server will be remote, and you'll connect to it and authenticate the way you would with any other connector in these tools. It'll navigate you over on the web, you'll authenticate, it'll bounce you right back to Claude Code, and you'll be off and running. But if you want to get started right now and don't want to wait for that release, you absolutely can. The plug-in zip file ships with the MCP server; it runs locally for each user and connects and authenticates through our API. It does require you to create an API key, and that's part of the onboarding experience. I've already done it, but it will launch you right in: it navigates you to the API creation page, asks you to create a key, and then there's a pop-up window, either in the Mac credential store or keychain, or the Windows equivalent, for you to copy, paste, and store that key.
So there's a little bit of setup if you want to get up and running right now with the local MCP server; otherwise, in that late June or July release, we'll have the MCP server running remotely. So it's doing the query here, looking at the last seven days. This is the duplicate incident skill, and you'll see it's doing duplicate detection across three methods. Again, I could have read the markdown file, or I could have asked Claude, "How are we determining duplicates?" and it would have answered for me. The three methods are: first, the classic one, the same file name going to the same destination by the same user; second, a hash match, where it's the same hash but the file has been renamed and is going to the same destination, a potential duplicate; and third, a destination burst, where we saw the same file going to multiple different destinations by the same user. It'll show you the options and a summary: who the users are, how many duplicate incidents we saw going to the same destination by either the classic method or the burst method, and which ones it recommends keeping. Because one of the things this workflow can do, and you can see the next steps it's asking about (I'll just hit cancel), is close out these duplicate incidents for us. It could say: I'm only going to keep one canonical incident for each group and use the API to close out the others, including a description of why each was closed, with a link back, using the UID, to the canonical incident.
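The three detection methods just described boil down to grouping incidents by different keys. Here's an illustrative Python approximation with made-up field names (`file_name`, `sha256`, `destination`, `user`, `ts`); this is a sketch of the idea, not the skill's actual logic:

```python
# Sketch of the three duplicate-grouping heuristics: classic
# (name + destination + user), hash match (renamed file, same content and
# destination), and destination burst (same file, same user, many destinations).
from collections import defaultdict

def find_duplicates(incidents):
    """Return {method: [groups]} where each group is a list of incidents,
    sorted so the earliest (the suggested canonical keeper) comes first."""
    classic, by_hash, burst = defaultdict(list), defaultdict(list), defaultdict(list)
    for inc in incidents:
        classic[(inc["file_name"], inc["destination"], inc["user"])].append(inc)
        by_hash[(inc["sha256"], inc["destination"])].append(inc)
        burst[(inc["sha256"], inc["user"])].append(inc)

    def dedupe(buckets, extra=lambda g: True):
        return [sorted(g, key=lambda i: i["ts"])
                for g in buckets.values() if len(g) > 1 and extra(g)]

    return {
        "classic": dedupe(classic),
        # Hash match: same content and destination, but under different names.
        "hash": dedupe(by_hash, lambda g: len({i["file_name"] for i in g}) > 1),
        # Destination burst: same file and user, multiple destinations.
        "burst": dedupe(burst, lambda g: len({i["destination"] for i in g}) > 1),
    }
```

In the real workflow, the earliest incident in each group would be kept as canonical and the rest closed via the API with a note linking back to it.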
Hopefully that helps you manage your workflow a little better and gives you an example of some of the automations and workflows it can do. I'm going to say I'm done for now and go back and show some previous results, so we don't have to wait for this to run. One of the first things it asked me before was about reviewing my recent queue of incidents. Again, I picked SOC analyst in the first demo yesterday, and it said, let's review your open incident backlog. I picked the last seven days, and sure enough, it ran the review of all the incidents, about 100 incidents across 17 users. It gave me a high-level summary, but it also produced a report, which I can show you if I share the right screen. Here we go. Hopefully you're seeing the open incident backlog here, where it creates a natural-language report for you: an executive summary, some statistics, and an overview of those 100 incidents over the last seven days. Say I'm just signing in for my first day today and want to review that quickly: it gives me indications of what it believes are the highest possible risks and what should be investigated first, with additional details about its recommended priority order for triaging these incidents, from users that should be investigated immediately down to lower-priority ones to consider. It also gives us the same thing you saw with that duplicate workflow: bulk-close candidate options, where it might find duplicates.
It may find things it recommends we close even before doing a manual review. When that workflow ran, as I'll show you here, it produced the report; you can see the link, which you can open in your browser, and it gives you a quick summary in the conversation. One of the things it asked about as a next step, with that nice little UI from before (this is just the result of it), is what I want to do next. I can take its recommendation to do the bulk close, I can do the duplicate cleanup workflow, or, one of the options, I can investigate the user it thinks is the highest-priority risk, Tavia. This is the demo environment with some of our sales engineers, and it's recommending we investigate that user. So I said, yes, let's investigate that user in depth. Now it's doing this user investigation workflow over the last seven days: find all their incidents, look at their insider risk signals as well, and give us a headline finding. In this demo environment, our sales engineers do a lot of activities that would demonstrate high risk. So it's saying: hey, this looks really suspicious. The tradecraft they're using, renaming files and exfiltrating them to personal destinations, channel-shopping between sending data to ChatGPT, then Copilot, then Gemini, all within a short period of time. All of these things, in a normal context, would clearly represent a high or critical risk worth investigating.
So this skill, this workflow, highlights that for us, again creates a report we can view in the browser, and then gives us the ability to continue the conversation: what would I want to do next? At this point I've started on that SOC workflow, triaged the past seven days of incidents, found a user that was risky, and investigated that user. I could have kept going down that path, continuing the conversation for that specific workflow, but I want to highlight what else you can do with the plug-in. As I mentioned, there are 50-plus skills in there that we've built, and you can see a number of different ones: the SOC triage use case, the user investigation use case, enforcement and policy, posture, DSPM, operational health, reporting, and briefings, say you want to create an executive briefing or a compliance posture briefing, and so on. And it's still keeping everything in the context of this conversation, saying: of all these things, here are potentially some of the next steps you may want to take for Tavia. Again, these skills are natural language, so when you have the zip file you'll see the markdown files if you want to open them up and read them. But you can also just ask in the conversation directly. For example, I saw deployment health listed as an interesting skill; what does it do exactly? I can just ask that question, and it gives me an answer describing what the deployment health skill does.
From its analysis of that natural-language markdown file, it gives us the answer here: the workflow the skill follows, the outputs, what you get back, and the report, produced as HTML so you can view it in your browser with all the different details it includes. And it keeps the conversation going; it's asking if I want to run it. But this is where I think it gets really exciting: this next phase, where I can take these out-of-the-box things and go further. Again, we created these skills because they're common; we see them in our own use, and we're hearing them a lot from a variety of different customers. So we built this out-of-the-box experience to make it easy to get started with what is likely 75 to 80 percent of the common things you all want to do. But every customer is going to have unique use cases and unique questions to ask. The power of this plug-in is that you're not limited to just the out-of-the-box skills. As an example, I said: actually, one thing I'm really interested in is users who are violating policies on the weekends. So I asked it: over the last thirty days, which users in my environment are the riskiest based on their Sunday policy violations? And being a chatbot, of course, it agreed that what I said was a great idea and ran off to do this workflow: over the last thirty days, who's doing activities on Sundays? And it notes correctly that there's no out-of-the-box skill for this just yet.
But having access to the MCP server and the tools we make available to query data through our public APIs, plus the context of the other skills in there, gives it the guidance it needs to create this customized, bespoke workflow for us. So it pulls the last thirty days, finds the Sundays, and pulls each Sunday for the analysis. In this case, we only had two users doing things on Sundays. It said Cole here is our riskiest user because of the work he was doing on a Sunday: sending corporate financials to his personal Gmail. He's in the flight risk and departing employees insider risk groups. Again, this is all staged; it's our demo environment. But on the surface these look like very suspicious activities: sending out financial documents on a Sunday to a personal Gmail, from someone who is a flight-risk, departing employee. So sure enough, it says: yes, here's the critical evidence; this is what I think we should do. The power, again, is that I was able to ask a question we didn't have an out-of-the-box skill for, and it used the tools and the context of all the out-of-the-box skills to create a nicely curated, bespoke experience for us. Here you can see all the different out-of-the-box skills we have; they all have descriptions, and you can invoke any one of them at any time with a slash command. Or, if you just say what you want to do, it will identify whether there's an existing out-of-the-box skill and run it for you, or, as in this case, just do the workflow itself.
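The ad-hoc Sunday analysis the agent built amounts to a filter plus a count. Here's a rough Python sketch; the incident fields and the idea of ranking purely by violation count are assumptions for illustration (in practice the agent pulls this data through the MCP server's API tools):

```python
# Sketch: "riskiest users by Sunday policy violations over the last N days."
# Incident shape ({"user": ..., "ts": datetime}) is invented for the demo.
from collections import Counter
from datetime import datetime, timedelta, timezone

def sunday_violators(incidents, days=30, now=None):
    """Rank users by how many policy violations they triggered on Sundays
    within the lookback window, most violations first."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    counts = Counter(
        inc["user"]
        for inc in incidents
        if inc["ts"] >= cutoff and inc["ts"].weekday() == 6  # weekday() 6 == Sunday
    )
    return counts.most_common()
```

The interesting part in the live demo isn't this ten-line filter; it's that the agent composed it on the fly from the API tools and the patterns in the existing skills, without a dedicated skill existing for the question.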
We have a whole number of these out of the box, and I'm really excited for folks who are interested in getting started to use them. But going further than that, we'd love to see how you innovate, create, and improve on your own; maybe take our skills and extend them further. This plug-in and this platform give you all the capabilities to do that. The next thing I want to do is pass it over to my colleague Nick, because I've been showing you workflows that are entirely within the Cyberhaven realm: even the custom workflow was still only using Cyberhaven data. But the power of this plug-in, the power of operating inside this framework, is that it can access additional tools in your security suite through other MCP connectors. You don't have to be limited to a Cyberhaven-only workflow. Nick will show a demo of how this plug-in and these skills can power a broader workflow across multiple different tools. So, over to you, Nick. Alright, awesome, and thank you for the introduction, Dave. A quick good morning to everybody; hope you're having a great one so far. Some quick context on what you're about to see, and then we'll get right into the demo; I want most of our time focused on practical stuff. We'll start with a super common scenario in pretty much every SOC on the planet. Let's imagine you're a tier-two analyst at the start of your day. A new high-severity ticket slides into the queue: someone from finance just reported seeing a weird Falcon pop-up on their endpoint. They're not sure if they clicked anything. You have their asset name, which is Nicholas Daub DCO6.
You have about 20 open tickets behind this one, and your manager is going to ask what happened on the endpoint in about seven minutes. The typical workflow is: open Falcon, search the host, look for any detections, then pivot over to Cyberhaven, where you'd search the same metadata and look at incidents. Then you mentally correlate all the timestamps and incident identity data, copy fields between tools, and write up what you find. If you're good, that could be ten or fifteen minutes. If you're new, or you find much of anything interesting to investigate, it could take you up to forty. What we're going to do here is type one very straightforward question into a chat window and let our LLM do the cross-tool correlation. The endpoint we're targeting is real, the incidents are all real, and what you're about to watch happen is, well, also real. The only thing that was staged is the activity itself; obviously, we had to set some things up to make it interesting. So let's get to it. You should see my Claude Code session up here on the share. We get the prompt here: there's been a phishing report from finance; investigate Nicholas Daub DCO6 and tell me what happened over the last seventy-two hours. We'll let that run. As it's running, pay attention to the tool calls occurring on the screen. What you're seeing, once it renders (there we go), is Claude doing two things in parallel. It's calling the Cyberhaven analyst plug-in to fetch incidents grouped by user for this host, and it's also calling the CrowdStrike MCP to search Falcon detections. Both are using the same filters, and both are over the same time window. A quick note while it's rendering: as Dave alluded to, the LLM is not making low-level decisions about which raw APIs to hit. Those are all tools that the various plug-ins and MCP servers expose, with pre-structured schemas for input and output.
The LLM is simply choosing when to call them and how to combine the results. That distinction does matter, and we'll come back to it. In the meantime, let's wait for this to render; it can take up to about a minute to work itself through. Alright, there we go. The top of the report is a pretty standard ticket header: investigation ID, what triggered it, time window, asset, user context, telemetry sources, and findings. We'll let the rest continue to load below and start with F001, the contextual incident, first. This is a high-severity, status-new, contained incident. In this case, Falcon caught a phishing-delivered HTA file, which is an HTML application, that tried to spawn an obfuscated PowerShell payload via mshta.exe. We have our MITRE technique here, T1059.001, and we see that the activity was blocked by CrowdStrike in this case, which is what we'd like to see. So we know the attack got nowhere on the endpoint. The process-tree information is all here in the body: Explorer spawned mshta into PowerShell, and the PowerShell tried to run a hidden command, a download cradle for a malware payload potentially being staged elsewhere. Now this is where it starts to get more interesting, and this is the thing that wouldn't exist in this analysis without using multiple tools. Falcon's detection record tells us what actually executed, but it doesn't tell you what was fetched. The detection fires on the spawn of PowerShell from mshta, but doesn't give you the URL the HTA was downloaded from. Cyberhaven's incident gives you details primarily on the delivery chain.
So we can see here that policy M5090, the threat vectors policy, has the full lineage chain associated with it. At approximately 20:43:54 UTC, Chrome downloaded a Q2 invoice HTA from a specific Google Drive URL; that Drive object ID is right here. We can also see that the page title metadata Cyberhaven captured was "Google Drive: infected file", the page title for that particular download page. The file landed in our Mac's shared downloads folder, which Cyberhaven classifies as an external network share. Five seconds later, the user opened it, and one second after that, Falcon blocked the PowerShell spawn. So what we're seeing is two tools, with two different timestamps, looking at the same incident. And again, the real insight is that you can see what's actually happening from the threat perspective on the endpoint through your EDR, while also getting the semantic context from Cyberhaven, which is tracking the value of the data that might be at risk. Alright. So we talked about the fact that we get the actual Drive download link. If you're familiar with Falcon, you might be asking yourself: doesn't Falcon already see that data? Why don't we just pull it from there? The answer is that while Falcon has visibility into network connections at the EDR layer, where you can get things like resolved IPs and byte counts received and sent, to get them for this particular incident you'd likely need to pivot into Falcon Insight or Next-Gen SIEM and query network events for that agent ID in the right time frame. They're not natively part of the alert, and even then, they're mostly network-layer artifacts: not the specific Drive ID, not the application-level URL.
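Conceptually, the cross-tool join the agent performed is a correlation of two event streams on host plus time proximity. Here's a toy Python sketch of that idea; the event fields, the example values, and the 60-second window are assumptions for illustration, not how either product's API actually returns data:

```python
# Sketch: stitching an EDR detection ("what executed") together with a DLP
# lineage event ("where the file came from") for the same host, the way the
# agent assembled the F001 timeline.
from datetime import datetime

def correlate(edr_detections, dlp_events, window_s=60):
    """Pair each EDR detection with DLP events on the same host whose
    timestamps fall within window_s seconds of the detection."""
    pairs = []
    for det in edr_detections:
        for ev in dlp_events:
            delta = (det["ts"] - ev["ts"]).total_seconds()
            if det["host"] == ev["host"] and abs(delta) <= window_s:
                pairs.append({
                    "what_executed": det["process"],         # EDR view
                    "where_it_came_from": ev["source_url"],  # DLP lineage view
                    "delta_s": delta,                        # seconds between events
                })
    return pairs
```

Neither stream alone answers the analyst's question; the pairing is what turns "PowerShell spawn blocked" plus "file downloaded from a Drive URL" into one coherent story.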
Cyberhaven captures that content because we instrument at the browser, at the page layer, not the network. "And Nick, sorry to interrupt you real quick. Can you increase the font size of Claude Code? I think it's just Command-plus to zoom in. Sorry, I didn't see the chat there; a lot of people were saying the screen's too small to read." "Is that better?" "Maybe one or two more." "Yeah, that helps. Thank you." Alright, cool. I'll try to keep this moving as we scroll through it. I think we were down about here. Okay, so where were we? Again, security is a game of defense in depth, and the real purpose here is to show you the kind of value you get by combining the outputs of multiple tools that are looking at different slices of the same picture from a security perspective. One last note on F001: at the bottom there should be recommended actions. Here we go. One of the recommendations is to hunt for that Google Drive object ID across the fleet; we also have some good recommendations on user outreach, and then IOCs at the bottom of the report that we can plug into our various toolsets. Alright, let's take on F002 now. This is a lower-severity incident, completely unrelated to the phishing activity, but on the same endpoint earlier in the day, at approximately 06:53 UTC, about fourteen hours beforehand. This is an instance of the same user, ndobs, copying 645 bytes of Rust source code from a file called oauth.rs. In this case it's dummy token-management logic from a repo called Acme payments platform, pasted into unauthenticated ChatGPT. Here we go: we can see that the Cyberhaven Linea AI summary correctly identifies the content as corporate source code.
Four content inspection policies in our tenant fired simultaneously on the snippet, most notably the IP and confidentiality policy. Falcon, of course, is silent on this front, and it's correctly silent, because it's not malware. It's not a threat technique. There's no anomalous process to find, no IOCs, no attack chain. It's simply data leaving the trust boundary in the most modern way a developer can make it leave nowadays: pasted right into a public LLM. Alright. So again, I think this is a good example of why you might want this cross-tool comparison, because if all you saw was that the HTA traffic was blocked, you might assume there's nothing further to investigate. But having the business context tied to the actual endpoint security picture gives you a much better perspective on what level of risk is truly occurring within your environment at any given time. Alright, let's find... there we go, looking at the assessment review. Okay. So I do want to spend a quick minute on the AI triage layer here, because it's nuanced and I think we should talk through it. We've done some previous content around Linea specifically that you may recall. Linea has multiple analysis surfaces, and that matters because these surfaces are often looking at different slices or time frames of your user activity. We can see that in this particular incident, the Linea analyst agent downgraded the incident from low severity to informational. The policy defaults to low by design. Now, the grading here is weighted on things like payload size, single event versus burst, working hours, and the statistical backing of the large lineage model itself. That said, that happens on a per-incident basis.
Linea can also produce a user activity context analysis, which is a longer Linea output that synthesizes the user's behavior across the prior week to surface patterns, and that view tells a different story from what this particular point-in-time incident does. So let me flip over to that really quick. If we look at the same endpoint, we should see the Linea summary here. Let me make this a little bigger. Is that good? Alright. So again, we can see that this incident caught the pasting of that Rust source code into ChatGPT, and we see the difference here, the downgrade from low to informational. Now, if we look at the summary of the user's activity over the last week, we see that Linea actually did pick up on a couple of important trends. One is that I had previously downloaded a bunch of developer tools and set up my dev environment on this test host earlier in the month. Last week I started putting content together for this session, which came across as a suspicious pattern, because I'm working with files like "confidential stuff.txt". Then I did the actual paste to generate the incident. Most critically, in this bottom overall evaluation, Linea does actually state that the recent patterns of behavior are concerning when considered together, and it identifies that the severity should probably be moderate on this incident due to the explicit use of a non-corporate service for what is potentially sensitive work. So something to keep in mind: there are multiple surfaces for evaluation, and it's often beneficial to check all of them, particularly if you disagree with one of them. Alright, let's flip back here. Okay.
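To make the two-surfaces point concrete, here is a tiny hypothetical sketch of reconciling a per-incident grade with the weekly user-context grade. The severity labels and the "escalate to the most severe surface" rule are illustrative assumptions, not Linea's actual weighting logic:

```python
# Illustrative severity ladder -- label names are assumptions.
SEVERITY_ORDER = ["informational", "low", "moderate", "high", "critical"]

def reconcile(per_incident, user_context):
    """Surface whichever evaluation reports the higher severity,
    so a downgraded incident isn't lost when weekly context disagrees."""
    return max(per_incident, user_context, key=SEVERITY_ORDER.index)

# Per-incident analysis downgraded to informational, but the weekly
# context flagged the pattern as moderate: the analyst should see moderate.
print(reconcile("informational", "moderate"))
```

The real value of checking both surfaces is exactly this disagreement: the point-in-time view and the week-of-behavior view can legitimately differ, and the human decides which wins.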
So yeah, getting back to the human-in-the-loop part: the analyst's job in all of this is to ingest all this information from the consolidated tool context and reconcile it. Two takeaways here, obviously. One is that these tools are a lot richer than their headline severity scores, so we do want to be ingesting the full outputs. The other is that the LLM in this report, by using these tools, is doing what you would expect from your average senior analyst, i.e., pulling in all the contextual signals, weighing them against the individual events, and producing a synthesized recommendation. And a quick note: we do have the IOCs down here, ready to plug and play into your various tools, create Falcon rules, Cyberhaven rules, and I believe there are some recommendations on how you might do that. For example, we have an item here to evaluate the policy that caught some of this activity and upgrade it to warn or blocking mode. Okay. I think the next thing we need to cover is some of the plumbing that underpins all of this, because a lot of you are probably sitting here thinking: okay, is this just a cool trick, or how easy is this to deploy? And the answer is: pretty easy. I'll start with Cyberhaven. Dave covered that in great detail; that's what we're running here to perform the Cyberhaven side of the analysis, and he already covered its various aspects and skills in greater depth than I was planning to, so I won't rehash all that. Just know that for this particular workflow, the main skills used were the manage server skill, which confirms that our connection to the tenant is active and has a couple of other helper functions, and then the fetch incidents by user skill.
That was the primary method by which we pulled all of this content from the Cyberhaven tenant. The second piece is, of course, the CrowdStrike Falcon MCP. This is the open-source Falcon MCP package. What we used specifically here, among other things, is check connectivity, just to make sure we're live and have a heartbeat from the platform, and then Falcon search detections, where we passed the device name and the timestamp to target our search. The Falcon MCP also includes things like searching incidents, searching behaviors, and searching vulnerabilities; which tools get used just depends on the kind of question you ask. And that's because the LLM is the brain: it recognized the investigation pattern, made the query decisions for both tools with matching filters, waited for those results, and then started reasoning and correlating. Alright, a couple of last details here, just to get some caveats out of the way. The first is that everything you saw as part of generating this report is runnable read-only. No mutations. Our plugin in particular will, I believe, distinguish between fetch operations and management operations, and management operations typically require explicit authorization. So you can run an investigation like this with read-only credentials for both tools. Second, the LLM isn't going to see anything that the tools themselves don't return: no raw API tokens, no credentials, no console session cookies. The plugin is the interface, and the LLM only sees the tool output. That's important to consider for compliance, auditing, that kind of thing. Third, as I mentioned earlier, both plugins are relatively easy to get.
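The fetch-versus-management split just mentioned can be sketched as a simple dispatch gate: read tools always run, while mutating tools are refused unless the session carries explicit write authorization. The tool names below are made up for illustration; the plugin's actual enforcement lives in the skills and the API key's role:

```python
# Hypothetical tool categories -- names are illustrative, not the
# plugin's real tool inventory.
READ_ONLY_TOOLS = {"fetch_incidents_by_user", "check_connectivity"}
MANAGEMENT_TOOLS = {"close_incident", "update_policy"}

def dispatch(tool, write_authorized=False):
    """Run read tools freely; gate management tools behind explicit
    authorization, mirroring the read-only-credentials caveat."""
    if tool in READ_ONLY_TOOLS:
        return "executed"
    if tool in MANAGEMENT_TOOLS:
        if write_authorized:
            return "executed"
        return "refused: requires explicit authorization"
    raise ValueError(f"unknown tool: {tool}")

print(dispatch("fetch_incidents_by_user"))
print(dispatch("close_incident"))
```

The practical upshot is the one stated above: an investigation like this can run end to end on read-only credentials for both tools.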
Now, the Cyberhaven analyst plugin, and keep me honest here, Dave, but my understanding is that it's available at this time, and if it's not, it should be shortly, primarily through our support portal. I believe we have some details on that that we'll share with you. And the Falcon MCP project is on PyPI. So if you're already running Claude Code or any other MCP-compatible agent, you should be able to connect both today with basically a simple config entry. Right. So to recap: we had one prompt, which ran for two or three minutes; two tools queried in parallel; and we got a nice tier-two-grade investigation report that includes our malware incident, our corporate IP exposure, and the stitching of those two things together. The point here, to close out, is that this LLM activity isn't meant to replace your analysts outright. The advantage it gives is that it lets your analysts start at the analysis step instead of raw data collection. Your tier-one and tier-two people can theoretically spend their cycles deciding what should be done about the findings, not jumping between five or six different tools to determine the state of the board at any given point. Cyberhaven gives you the content layer, CrowdStrike in this instance gives you the endpoint layer, and together you get a much richer picture than you would get from either tool in isolation. And all you have to do is ask it very straightforward questions. So with that said, that's my piece, and we're happy to take questions at this time. Awesome, thank you, Nick. So, apologies to the folks who have been asking questions in the Q&A and the chat; we were saving them up for the end, and we'll do our best to answer them here. A common question that came up: yes, the session is being recorded.
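For a sense of what that "simple config entry" could look like, here is a hypothetical sketch that emits an MCP server entry in the `mcpServers` shape Claude Code reads. The command name, arguments, and environment variable names for the Falcon MCP are assumptions; check the project's own README for the real invocation:

```python
import json

# Hypothetical config entry -- command and env var names are
# illustrative assumptions, not the documented Falcon MCP invocation.
config = {
    "mcpServers": {
        "falcon": {
            "command": "falcon-mcp",
            "args": [],
            "env": {
                "FALCON_CLIENT_ID": "<client-id>",
                "FALCON_CLIENT_SECRET": "<client-secret>",
            },
        }
    }
}

# Print what would go into the agent's MCP configuration file.
print(json.dumps(config, indent=2))
```

The placeholders stay placeholders on purpose: the credentials come from your own Falcon API client, scoped as narrowly as the read-only discussion above suggests.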
I believe we're going to send the link to you in an email as soon as it's done, in case you want to share it with other folks on your team. And as Nick just said, these features are live. Everything I demoed today and Nick demoed today, you can do yourself. We now have the ability to download the plugin on the support portal; I think we just put the link in the chat, which is awesome. You can download the zip file for the plugin, and there are instructions in the support portal. There are just a couple of different commands to run depending on whether you want to start in Claude Code or Codex, only about two commands each, so it shouldn't be too difficult to get up and running, hopefully. But please don't hesitate to reach out if you have questions; we're more than happy to help. You should be able to get up and running today. The one caveat is that today, you will be running the MCP server locally on your endpoint, each user individually. Coming in the June or July release, that MCP server will be part of your console and you'll connect to it remotely, like you would with other deployed MCP servers. But if you want to get up and running today, you totally can. Feel free to dive in with the plugin and get started. Other questions here: there was a question about Chiro; sorry, I don't know that tool myself, but someone asked a related question about whether we can connect to other tools. MCP really is the common language there that will allow you to interact with any other tool that supports an MCP-compatible client, so that will work. Skills and agents are somewhat within the realm of what Claude Code, and now Codex, supports; Codex just started supporting plugins about a month and a half ago.
So a little bit after Claude Code first started with them. The exact skills and sub-agents may not directly port over to every other MCP-compatible client, but certainly the tool calls will. So if you're using other tools and you want to interface with Cyberhaven public API data via MCP, you can; whether those clients support skills and sub-agents probably depends on each client. Another question in there: can anyone use the plugin, or do they have to be an active user? Right now, with the plugin, you have to generate a public API key, so you have to be a console user. You have to create an API key and give it the appropriate role. Nick did mention that the plugin gives you capabilities that are both read and update/manage. So for things like incidents, as you saw in my workflow, you can close out incidents via the MCP server only if the API key you generated and are using in that session has those permissions. There are a couple of different ways you can restrict that. You can create a role that is read-only for API keys and just give people read-only roles. And if you start using the skills, you'll see they're all designed around running things in dry mode first and asking for user confirmation before they make changes. None of the skills will automatically try to close incidents or update or edit datasets or policy definitions; they'll prompt the user, who has to confirm. But if you do want to restrict that entirely, you can do it at the API key and role level.
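The dry-mode-first pattern described above can be sketched in a few lines: a management action first reports what it would change, and only applies the change after explicit confirmation. The function and field names here are illustrative, not the skills' actual implementation:

```python
# Hypothetical management action -- names are illustrative only.
def close_incidents(incident_ids, confirmed=False):
    """Dry-run by default: report the plan, mutate nothing.
    Only apply the change when the user has explicitly confirmed."""
    plan = [f"would close incident {i}" for i in incident_ids]
    if not confirmed:
        return {"dry_run": True, "plan": plan}
    return {"dry_run": False, "closed": list(incident_ids)}

# First pass shows the plan; second pass, after confirmation, applies it.
print(close_incidents(["INC-1", "INC-2"]))
print(close_incidents(["INC-1", "INC-2"], confirmed=True))
```

Combined with a read-only API role, this gives two independent layers of protection: the skill won't mutate without confirmation, and the key can't mutate at all if the role forbids it.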
Another thing: because you're doing it via the API key, all the actions you take will be attributed to that API key in the audit log, so you'll still have visibility in the audit log to know who did what. One last question came in about what modules are required to run this; is it Linea AI? It's not directly Linea AI, but let me show you one slide that I think is worth showing. Think of the MCP server as just a wrapper around the public API. It's accessing the existing public APIs we have around incidents, users, risk groups, datasets, and policies, just allowing for an agentic, LLM-driven use of those APIs. So any customer, on any tier, can use the Cyberhaven plugin, because you have access to the public APIs. Now, with Linea, you do get more; Nick showed it before. Linea does summarization and analysis of events and incidents as they're being collected, and in many cases it has access to data that you don't have after the fact in the public API. A big one is content analysis: Linea will perform analysis of the content and do its own content classification and its own severity analysis of whatever file Nick moved to a given destination. It enriches the incident data that you see in the UI, and that enrichment is accessible via the API, which means the Claude Code plugin or the Codex plugin has access to that further-enriched data. And that's not something that's easy to do after the fact: you may not have access to the content, it may be stored in your evidence bucket and have to be pulled by a separate path, or you may not want to analyze the content at all.
So you absolutely don't have to have Linea to use the plugin. Linea just enriches things; as its users will know, it provides really valuable enrichment for your incidents, and that makes both your own analysis and the plugin's analysis of those events that much more powerful. You don't have to have it, but if you do, the plugin can take advantage of it and benefit from that analysis as well. Dave, if you want, I can take a couple of questions as well; I see some interesting ones. The first one I'll tackle: we have a question about how we control what data is exposed to these LLMs via these skills and plugins. Dave covered that a little earlier, but really it comes down to RBAC on whatever access token you end up giving it, so keep that in mind as you provision these various connectors to your tooling. Yeah, and just on that too: obviously, if you use the plugin, you are using Cyberhaven data as part of a Claude or OpenAI Codex-driven analysis. This would be the same as the manual workflow where you query the API, pull down whatever data you pulled, and then ask Claude or Codex to analyze it. So at some level, if you do want Claude Code or Codex to reason over the results you pull back from Cyberhaven and over the Cyberhaven analysis, you just have to be aware that that's a task you're asking Claude or Codex to do. But again, it's all pulled back by our public APIs, and that's the only data accessible. You can control with the API key what roles and access that key has.
That would then limit what Claude or Codex is able to pull back via the API. But obviously, you're using Codex to do an analysis, so Codex will be analyzing the data that you're providing to it. Gotcha. Now, another couple of interesting ones. First: yes, you are correct, I did use Opus for this demo. Part of that, frankly, is that we wanted to showcase what the top-end capability might look like when you use the highest-tier model. But the general rule does apply: if you spend more to get an output, you should expect better results. Whether you use Opus or Sonnet or Haiku, you should still get useful output, but you might want to define very clearly the structure of the report and some of the investigation steps if you're going to use one of the lower-powered models, so that they don't need to consume as much context and can work efficiently. Something else I'll pull out is that this demo flow is fairly simplistic. In production, you would most likely want an agent team working in concert, with a review process: multiple passes on some of this analysis, confidence thresholds, that kind of thing, because we want to make sure we're being accurate and not letting easy mistakes happen. And then the other one I think is worth talking about for sure is thoughts around how plugins and tools will evolve traditional SIEM and SOAR. I'll start with a quick anecdote from a buddy of mine. He works at a massive restaurant chain as a developer, and he tells me that one of the projects they're working on is to create an MCP. This is a company that produces sandwiches, and they're building an MCP.
The reason they're doing that is that they want to make it so easy to order a sandwich from them that you can literally just type into Claude, without any work: hey, order me my favorite wrap from X company. It already has your payment information; it just sends the order and does it. I would be surprised if the security space doesn't move in a similar direction. That's my personal conjecture, though, not Cyberhaven official. I'd be surprised if, a year or two from now, the workflow isn't basically these various security products sorting information, normalizing metadata, and then giving you access to it to run multiple agents against. That's what I could see. Yeah, I'm just going to take Laura's question. Hi, Laura, good to hear from you again. So yeah, I totally understand the question about accidental exposure of PHI and PII as part of the incidents and the summaries. Certainly, yes: if you think about the Linea AI summary, it does include the AI classification and the content analysis. So if you did want to limit that data coming back and being analyzed by Claude or Codex, there are RBAC permissions around things like lineage analysis content summaries, and RBAC permissions around PII and PHI. And then you would probably have to think through, for each skill, how that restriction might impact it. Obviously, the incident investigation skill, where you then can't see PII like the user's email address, could be impacted; environment review skills or deployment health skills are probably not impacted at all. But yeah, that's certainly something to think through as you're using this.
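The RBAC idea above amounts to field-level redaction before anything reaches the LLM: fields the API key's role is not permitted to see are simply never returned. The role names and incident fields below are illustrative assumptions, not Cyberhaven's actual roles or schema:

```python
# Hypothetical role-to-field visibility map -- names are illustrative.
ROLE_VISIBLE_FIELDS = {
    "read_only_no_pii": {"incident_id", "severity", "policy", "timestamp"},
    "full_read": {"incident_id", "severity", "policy", "timestamp",
                  "user_email", "content_summary"},
}

def redact(incident, role):
    """Drop every field the role isn't permitted to see, so the
    LLM only ever receives the filtered view."""
    allowed = ROLE_VISIBLE_FIELDS[role]
    return {k: v for k, v in incident.items() if k in allowed}

incident = {
    "incident_id": "F002",
    "severity": "low",
    "user_email": "ndobs@example.com",
    "content_summary": "Rust source code pasted into a public LLM",
}
print(redact(incident, "read_only_no_pii"))
```

This is also why some skills degrade under a restricted role while others don't: an investigation skill that pivots on user email loses a field it relies on, while a deployment health skill never needed PII in the first place.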
Just understand again: everything is coming back from the public API, and you're in control over what RBAC permissions the API key querying the API has. So if you have concerns there, you can work through it: decide what data you're comfortable returning from the public API and sending to Claude, craft the RBAC role appropriately, and then look at what skills might be impacted by that. Cool. I see one other question that I don't think we've addressed, from Brett Morrison: can you have it automatically generate tickets with ServiceNow APIs and include any reports and evidence as attachments? It's a great question. I would say it's most likely doable. I haven't tried it myself personally, but if there are accessible APIs, and better yet an MCP for something like ServiceNow, then Claude can definitely bridge the gap for you; I would be surprised if it couldn't. Yeah, and on that front, there is a ServiceNow MCP. So that demo you saw Nick do, where it was pulling data from multiple environments, from Cyberhaven and then from CrowdStrike: just imagine that exact same scenario with Cyberhaven and ServiceNow. Now, our out-of-the-box plugins are Cyberhaven-only, so they don't have these kinds of multipart workflows, but there's nothing to stop you from forking or extending that incident analysis workflow to say: hey, for me, we use Jira, or we use ServiceNow, or whatever. Then just connect the MCP server to your instance of Jira or ServiceNow or what have you, and update the skill to do that workflow for you. That would be a really exciting thing.
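As a rough sketch of that bridge, here is a hypothetical function that turns an investigation report into a ticket payload. The field names loosely resemble ServiceNow incident-record fields, but the endpoint, authentication, and attachment handling are omitted; an actual integration would go through the ServiceNow MCP or its REST API:

```python
# Hypothetical report-to-ticket mapping -- field names and the urgency
# mapping are illustrative assumptions, not a verified ServiceNow schema.
def build_ticket(report):
    urgency_map = {"high": "1", "moderate": "2", "low": "3"}
    return {
        "short_description": f"[{report['id']}] {report['title']}",
        "description": report["summary"],
        "work_notes": "IOCs: " + ", ".join(report["iocs"]),
        "urgency": urgency_map[report["severity"]],
    }

ticket = build_ticket({
    "id": "F001",
    "title": "Malicious HTA download via Google Drive",
    "summary": "Chrome downloaded an HTA from a Drive URL; "
               "Falcon blocked the subsequent PowerShell spawn.",
    "iocs": ["Google Drive object ID", "HTA file hash"],
    "severity": "high",
})
print(ticket["short_description"])
```

In the agentic version, this mapping step is exactly what the LLM would do between the Cyberhaven MCP output and the ServiceNow MCP's create-ticket tool.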
We'd love to hear how that goes, and we'd love to help if there's anything we can do on our end to make the Cyberhaven side of it easier. My recommendation, just as a quick aside, would be to look into something like paperclip, which might be out of date now, or something similar that can orchestrate multiple agents; that's probably what you want for that, since you'd probably want something like a dedicated ticket-creator agent. So if you make it happen, definitely let us know through some channel; I'd be curious. I think we've addressed most of the major questions. Oh, one new one just came in... that's Amy just saying thank you all. So again, to wrap up: the plugin is available there on the portal, and we welcome you to get up and running with it; we're more than excited to hear how things go. It should work equally in Claude or Codex, and there are instructions for both. We're going to continue to enhance and improve this. Like I said, a big change will come in late June or July when that MCP server moves remote, and we'll keep pushing new versions of the plugin with new, improved, and updated skills. So again, we'd love for you to get started, and we'd love to hear how it goes. On the product side, if you start playing around with this and using it and would like to have a meeting with me to discuss any specific use cases or challenges you ran into, I'd love to hear about it. Otherwise, work with your CSMs, and we'd love to get you onboarded. Cool. Amy or Nick, anything else on our side to wrap up before we go? No, I think we're good. Oh, one last question: can we use regular Claude desktop or ChatGPT? Yeah.
So for both the demos you saw today from me and Nick, we were using the Claude macOS desktop app. It works in the CLI too, so if you have Claude Code in the CLI, it works there; I just like the desktop app as a better experience. You do have to have Claude Code, though. I don't think the coding-based plugins work in the Claude chat app itself; you have to have Claude Code because it does need to execute and run code. But yeah, everything I do is in the desktop app, and you can use the CLI if you want as well. Perfect. Alright, we'll wrap up there. Thank you so much, everyone, and we'd love to hear how it goes. Thanks.