Video: 2026 AI Adoption & Risk Report | Duration: 1340s | Summary: 2026 AI Adoption & Risk Report | Chapters: AI Adoption Insights (4.64s), AI Adoption Insights (256.4s), AI Adoption Accelerating (391.795s), AI Risk Distribution (546.39s), Shadow AI Risks (630.59s), AI Data Exposure (736.685s), AI Agent Adoption (828.465s), Key Takeaways and Conclusion (910.865s), Reducing Shadow AI (1211.75s)
Transcript for "2026 AI Adoption & Risk Report": Hi, everyone. My name is Kasi Anamali. I'm a director of product here at Cyberhaven. Today, we will share some critical insights from our 2026 AI Adoption and Risk Report. If we think about what fuels AI, it's the data, and protecting data used to be a lot easier. You had a defined perimeter, your employees and files were within that perimeter, you scanned the data, classified them, applied labels to those files, and used those labels for enforcement. Then starting about ten, fifteen years ago, we had a new era with cloud computing, SaaS and collaboration tools that democratized data. People talk about how this broke the network perimeter, but it also broke the file perimeter because you had people and data all over the world creating data fragments and you can't really label a data fragment. Think about the way we work now. You take a snippet from here, a snippet from there, copy some of the customer data from one system and put it into a document or slide. Then someone makes a copy of that and makes their own edits. They will send that around and pretty soon you have fragments of sensitive data multiplying and flying completely under the radar because there's no metadata labels on those snippets. Today, in the AI era, we have AI tools that democratize intelligence, creating exponentially more fragmented data, derivatives and pathways. On the one hand, this is obviously exciting because of what AI can do and how much more productive it can make people. But on the other hand, this AI adoption dials up the risk by 100x in cases even 1000x. Now you can have one person that acts like an entire department of workers, which magnifies the human risk. But on top of that, the AI introduced the new human like risk from AI agents, which move data faster and at a great scale, but can make mistakes or be compromised like humans. So the net result of all of this is that data has really broken free and the data is the one that fuels the AI. That's how philosophically we believe that there are three key things to protect data where it lives and goes. You need a holistic visibility and control. The security that spans the entire lifecycle, where data lives, where it goes, data at rest and data in motion. Now data doesn't stick to one location, it moves around. So that's why it is important to have deep understanding of the data starting with end to end TEDL lineage. Finally, it has to be easy to operate. It has to be quick to deploy, simple to tune, automated, and non disruptive to users. Otherwise, it becomes just a fancy shelfware. And to address all of this, we have built the Cyberhaven AI engine that allows to deliver the best in class solutions for data security posture management, AI security, insider risk management, and data loss prevention all in one solution. The beauty of this is that it's unified, truly integrated to follow the data as it really moves and also protect the AI systems and tools that are accessing our training based on those data sets. Now let's jump in and talk about the AI adoption and risk report. Our very own CyberHaven Labs threat research team released the 2026 report recently. And this is based on billions of real world data moments around generative AI SaaS applications and endpoint AI applications and AI agents running on endpoints. Our research reveals the growing gap between AI experimentation and risk patterns as usage deepens across business workflows. 
I will now share a few of the critical insights from that report, and I encourage you to take a closer look at the report for more details.

First: today, in most companies, the way we work is very fragmented. A domain expert or a business user has to interact with multiple systems, be it CRM, ERP, a data warehouse, workspaces, and various apps, just to get work done. They manually pull the data, generate reports, update dashboards, and trigger actions across different tools. What leading organizations are starting to do differently is introduce AI agents as the central interface. Instead of the user jumping between systems, the user directs the work by giving simple prompts to the agent. The agent then connects to all of these systems through APIs, MCPs, and other mechanisms, gathers the right information, generates the reports or dashboards, updates applications, and even executes actions like creating orders or approvals. The human stays in the loop, providing guidance and approving actions, but the agent orchestrates the work across systems. This shift shows that AI agents aren't a future concept; they are already here. And the leading AI-first frontier companies, the top 5%, are already adopting them today.
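To illustrate that pattern, here is a minimal sketch of an agent acting as the central interface: it calls read-only connectors freely but keeps a human in the loop for state-changing actions. The connector functions and tool names are hypothetical stand-ins, not a real agent framework.

```python
# Illustrative sketch only: an AI agent as the central interface, calling
# enterprise systems through tool connectors while a human approves any
# action that changes state. Connector names and APIs are hypothetical.

def fetch_crm_report(query: str) -> str:
    return f"CRM results for: {query}"          # stand-in for an API/MCP call

def create_order(item: str, qty: int) -> str:
    return f"Order created: {qty} x {item}"     # stand-in for a write action

READ_TOOLS = {"fetch_crm_report": fetch_crm_report}
WRITE_TOOLS = {"create_order": create_order}    # these require human approval

def run_agent(plan: list) -> None:
    """Execute a list of (tool_name, kwargs) steps planned by the agent."""
    for tool_name, kwargs in plan:
        if tool_name in READ_TOOLS:
            print(READ_TOOLS[tool_name](**kwargs))
        elif tool_name in WRITE_TOOLS:
            # the human stays in the loop for state-changing actions
            answer = input(f"Approve {tool_name}({kwargs})? [y/N] ")
            if answer.lower() == "y":
                print(WRITE_TOOLS[tool_name](**kwargs))
            else:
                print(f"Skipped {tool_name}")
        else:
            print(f"Unknown tool: {tool_name}")

run_agent([
    ("fetch_crm_report", {"query": "top accounts this quarter"}),
    ("create_order", {"item": "license renewal", "qty": 3}),
])
```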
Another major shift we are seeing is the explosion of AI application usage inside companies. Research shows that the average company today is already using around 70 AI apps across the organization, everything from copilots and chatbots to AI-powered analytics and automation tools. But if you look at the top 5%, the AI-first frontier companies, the numbers jump dramatically: 200-plus AI applications are being actively used. So the takeaway here is that AI is not just growing; AI adoption is accelerating fast. The leading companies are experimenting with multiple AI tools across different teams and workflows, and they encourage their entire workforce to try AI applications to improve productivity. This creates a huge opportunity, but also a challenge: how do you manage, govern, and securely integrate all of these AI applications? The companies that figure that out will be the ones that unlock the real productivity gains from AI.

And to highlight, it's not just growing, it's accelerating across every industry. If you look at the chart on the left, we are seeing a massive jump in employee usage of AI tools between 2024 and 2025. In technology companies, usage is approaching 40% of employees, but even industries like retail, finance, healthcare, and manufacturing are seeing massive adoption. So AI is no longer limited to tech teams; it's spreading across the entire workforce. But here's the challenge: when we analyze the AI applications being used in organizations, the majority actually fall into high or critical risk categories. These risks include data leakage, model misuse, compliance issues, and uncontrolled access to sensitive information. So while companies are racing to adopt AI for productivity gains, risk and governance are becoming just as important as innovation. The organizations that succeed will be the ones that enable AI adoption, but do it securely and responsibly.

Now let's look at the risk distribution of the top 100 AI applications being used in organizations today. What stands out immediately is that very few AI tools fall into the very low risk category, about 4%. A small portion sits in the low risk category, around 14%. But the majority of the applications fall into the medium, high, or even critical levels. In fact, 41% of these AI apps are classified as high risk, and another 19% as critical. That means more than half of the most popular AI tools being used in the enterprise today carry significant risk, whether that's related to data exposure, privacy concerns, model behavior, or compliance issues. The examples shown here, tools like DeepSeek, Leonardo, Jasper, and others, highlight how diverse the AI ecosystem has become, and also the usage patterns. So the key takeaway is: AI adoption is exploding, but understanding and managing AI risk is becoming just as critical as adopting the technology itself. The organizations that succeed will be the ones that balance innovation with strong AI governance and security.

Another growing issue we are seeing in the enterprise is shadow AI: employees using AI tools outside of the approved corporate systems. If we look at usage patterns for tools like ChatGPT and Google Gemini, a significant portion of access is happening through personal accounts or free versions of these tools rather than corporate-managed environments. That means employees may be interacting with AI tools that security teams can't monitor, control, or govern, and the data exchanged with these AI tools on the free or personal versions can be used to train the models behind those tools. And this is not just a theoretical risk; it's already happening in the real world. You may recall a recent security headline: the acting head of US CISA, the Cybersecurity and Infrastructure Security Agency, was reported to have uploaded sensitive government contracting documents into the public version of ChatGPT, which triggered an internal cybersecurity alert and a Department of Homeland Security review. The key takeaway is simple: AI adoption is happening faster than governance. If organizations don't provide secure, enterprise-approved AI tools, employees will naturally turn to personal accounts, creating data exposure risks that security teams can't see or control and increasing the data exfiltration risk through AI tools.

Another important issue we are seeing with adoption is what kind of data is being shared with these tools. When we analyze AI conversations, both prompts and responses, at the browser level and also at the application level on the endpoints, about 40% of the data flowing into these AI tools is classified as sensitive. That includes customer information, PII, PHI and healthcare data, proprietary code, financial data, and internal documents. And this often happens unintentionally. A couple of examples: we have seen a healthcare employee uploading patient records into a personal AI assistant, not realizing that the prompts may be stored or used for training; and a developer using the free version of an AI coding assistant and indexing the entire company database, potentially exposing intellectual property to an external provider. The key point is that AI tools are incredibly powerful, but they also create new data exposure risks and a new threat vector. As companies adopt AI at scale, there need to be mechanisms to control what data can be shared, monitor usage, and protect sensitive information inline, without slowing down innovation.
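As a concrete illustration of inline control, here is a minimal sketch of a pattern-based check that scans a prompt for sensitive data before it leaves for an external AI tool. Real classifiers are far more sophisticated; the patterns and categories here are simplified, hypothetical examples.

```python
# Illustrative sketch only: a crude pattern-based check on prompt text
# before it is sent to an external AI tool. The patterns are hypothetical
# examples, not a production-grade data classifier.
import re

SENSITIVE_PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key":     re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list:
    """Return the categories of sensitive data detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this: John's SSN is 123-45-6789, email john@acme.com"
findings = scan_prompt(prompt)
if findings:
    print(f"Flag or block before upload: {findings}")  # ['ssn', 'email']
else:
    print("Prompt appears clean")
```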
Beyond AI apps, we are also seeing the next wave of adoption, and no surprise, it's AI agents. On the left, you can see that about 28% of enterprises are already developing AI agents in 2025. That means more than one in four organizations are actively building systems where AI can take action, automate workflows, and interact with enterprise systems. This number is growing quickly as companies move from simpler AI assistants to AI-driven automation. On the right, we see the platforms organizations are using to build these agents, the agentic AI platforms. Microsoft Copilot dominates the space today with roughly 87% of users, largely because it is deeply integrated into the enterprise software ecosystem. Other platforms we have seen are Glean, Innatem, and a few other emerging agentic AI platforms. The takeaway here is that AI agents are quickly moving from experimentation to enterprise deployment, and they are becoming the foundation for how digital work will be automated.

With that, I want to quickly summarize some of the key takeaways from our research report on AI adoption and risk. First, the emerging gap in AI adoption: the frontier companies are faster to adopt, others are cautious, and there is a huge gap between them. Second, understanding the real risk of the second wave: one third of employees access AI tools through personal accounts today, creating shadow AI risk and a complete blind spot, a lack of visibility for the infosec team. Third, how agents and coding assistants are changing the game: nearly 50% of developers are using coding assistants in some form today, 23% of enterprises have adopted agent-building platforms, and we shared some stats on the previous slides. So that's the high-level summary of some critical insights we were able to find by researching millions of AI conversations and interactions across our customer base, and we wanted to share them with you all. Thank you for listening, and have a wonderful day. I'm happy to take any questions, and if you haven't already, please download and read through our 2026 AI Adoption and Risk Report; we'll put the link in the chat so you can access it at your convenience. Thanks for watching.

Okay, I see a couple in the chat, in the Q&A section. How can enterprises encourage rapid adoption without losing control of sensitive data and compliance obligations? Great question, thanks for asking. I would say the key is to enable AI safely, not to block it, because AI is a once-in-a-lifetime technology that is changing how we operate. That means giving employees approved AI tools, clear usage policies, and real-time controls that stop sensitive data from being exposed. The winning approach is governance that moves at business speed. The infosec and security teams who come up with those governance guardrails need to be enablers who embrace AI and let the workforce adopt it to improve productivity. That's the winning approach I would recommend.

Okay, any other questions? We'll check it out. Okay, there is another one: with AI agents and coding assistants gaining traction, how should security leaders rethink governance? Great question again. Yes, absolutely, AI agents and coding assistants are changing the game. From a security leader's perspective, I would say: treat AI agents like digital insiders, with access, permissions, and risk, exactly as you would treat a human user or employee within your organization.
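To make the "digital insider" idea concrete, here is a minimal sketch of scoping an agent with explicit permissions and auditing every attempted action. It assumes a simple permission-string model; the names and structure are hypothetical, not a specific product's API.

```python
# Illustrative sketch only: scope an AI agent like a human insider, with
# explicit permissions and an audit log of every attempted action. The
# permission model and names are hypothetical.
import datetime

AGENT_PERMISSIONS = {
    "sales-assistant": {"read:crm", "write:report"},
    "hr-assistant":    {"read:hr"},
}

audit_log = []

def agent_action(agent: str, permission: str, action: str) -> bool:
    """Allow the action only if this agent holds the permission; log either way."""
    allowed = permission in AGENT_PERMISSIONS.get(agent, set())
    audit_log.append({
        "time": datetime.datetime.now().isoformat(),
        "agent": agent,
        "action": action,
        "allowed": allowed,
    })
    return allowed

# The sales agent may read CRM data but not HR records:
print(agent_action("sales-assistant", "read:crm", "pull pipeline report"))  # True
print(agent_action("sales-assistant", "read:hr", "pull salary data"))       # False
print(len(audit_log), "actions recorded for review")
```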
The reason is that AI agents can take on all the privileges and access that human employees have, and they can now perform autonomous tasks and actions on behalf of humans. They can make mistakes, they can hallucinate, they can do the wrong things. So observability and monitoring of those AI agents is really critical. You need the tools and the visibility to monitor how these AI agents are acting on behalf of your workforce within your environment. That means not just visibility and monitoring, but knowing what data they can reach, what actions they can take, and whether those actions align with your acceptable AI usage policy. That's another piece to look at, especially monitoring the autonomous actions taken by AI agents.

Okay, one more just coming in. Let me read through it. Okay, this is interesting: as employees increasingly use personal AI accounts and unapproved tools, what is the most effective way to reduce shadow AI? Great question. First and foremost, employees move to personal or free versions of AI tools because the corporate systems don't provide those tools in the first place. That's what drives employees to use free tools, without understanding the repercussions of sending data to them: the data can be used to train the underlying model, which means you're leaking corporate data unintentionally. So the first thing is to create awareness, enable the workforce with the right tool sets, and run an education campaign: these are the tools approved for certain tasks, these are the sanctioned ones, and for anything beyond that, please reach out to the IT or procurement team so we can explore it on a use-case basis. Also, visibility, education, and lightweight enforcement work much better than blanket bans. Like I said, AI is not to be blocked; AI is to be governed and embraced. How you structure those use cases, and the policies and governance around them, will reduce this behavior and move users to corporate-approved AI tools and agents.

Alright, those are the questions we have. If there are any last-minute questions, please let us know. If not, we really appreciate you joining. Thank you for listening; hope you found this useful. Thank you.