New research from HackerOne finds a widening gap between the rapid adoption of artificial intelligence and the level of security testing applied to those systems. The company calls this disconnect the AI security gap: organizations are deploying AI far faster than they are formally testing it.
The report shows that AI use is expanding across most organizations: 94% of respondents say they operate more AI or machine learning (ML) systems than they did a year ago. Testing coverage, however, has not kept pace. Only 66% of organizations say they formally test 61% or more of their AI or ML systems, creating a 28-point AI security gap.
Organizations operating within that gap appear more likely to experience security incidents. According to the research, 89% of security leaders at organizations with limited testing coverage reported AI-related attacks or vulnerabilities during the past year.
The gap also carries financial consequences. Security leaders in environments with lower testing coverage report 70% higher annual remediation costs than organizations that test nearly all of their AI systems.
“AI systems are dynamic, evolving with every model update, integration, and data connection, and the same is true of modern digital systems overall,” said Kara Sprague, CEO of HackerOne. “As systems become more interconnected and adaptive, risk evolves in real time. Periodic testing assumed stability. Today’s reality requires continuous testing so leaders can detect change, identify what’s exploitable, and mitigate risk before it materializes.”
The findings are based on a survey of more than 300 security leaders across six countries and highlight structural trends shaping AI risk exposure:
- AI risk compounds as deployments scale: Organizations expanding from a small AI footprint of two systems to a larger footprint of eight to 10 systems reported 82% more attack types and 2.4 times higher attack costs. As AI systems integrate with APIs, enterprise tools, and internal data sources, exposure can increase quickly when testing does not scale alongside deployment.
- Testing coverage is not keeping pace: While 94% of organizations added AI or ML systems in the past year, only 66% say they formally test 61% or more of those systems. Across all respondents, 84% experienced at least one AI-related attack or vulnerability in the past 12 months. Organizations that test 91% or more of their AI systems are 16% less likely to report an AI-related incident than organizations with lower testing coverage.
- Shadow AI remains a material blind spot: Only 55% of organizations report they fully track unsanctioned or “shadow” AI usage. When employees independently integrate AI tools into daily workflows, organizations can lose visibility into how those tools interact with enterprise systems and data. Unmanaged use can expand the attack surface and introduce governance and compliance risks.
“Organizations keep adding AI systems without thinking about the blast radius,” said Luke Stephens, a security researcher. “These aren’t sandboxed toys. They’re hooked into real data, real APIs, real decision-making. When something goes wrong, it doesn’t stay contained. The cost data in this report reflects what I’ve been seeing in the wild: the longer you wait to test, the more expensive it gets to fix.”
As AI systems move further into production and regulatory scrutiny increases, boards and executive teams are seeking clearer evidence of oversight. Continuous testing is increasingly seen as both a cybersecurity best practice and a governance requirement.
The report points to a broader reality for organizations deploying AI at scale. Each new integration adds potential exposure. When testing fails to keep pace, organizations risk losing visibility into which vulnerabilities may be exploitable. Closing the AI security gap will require embedding continuous security testing into how AI systems are built, deployed, and governed.