CISOs: Anchor AI Security Budgets in Risk, Not Fear

Heading into 2026, CISOs are facing immense pressure, both internally and externally, to support implementations of and provide protections or AI agent security. However, the business pressure to adopt agentic AI is often driven more by industry hype and fear of missing out headlines than actual operational risk. As a result, organizations that budget for AI security based on fear will likely either panic-overspend on controls they don’t need or perilously under-invest where it actually counts.

Most organizations should expect to allocate a portion of their cybersecurity budget to agentic security in 2026. This investment will fund governance workflows, behavioral and anomaly monitoring, model and agent observability, and identity boundaries for non-human agents. That said, the allotted allocation only becomes meaningful once it is anchored to a business impact analysis.

To avoid unnecessary or miscalculated spending driven by fear or hype, the driving force should be understanding the actual organizational risk if an agentic workflow fails or is manipulated.

Anchor Budgets to Actual Risk, Not Worst-Case Scenarios

The AI security landscape is full of anxiety-inducing scenarios: agents going rogue, prompt injection attacks, data exfiltration at machine speed, compounding failures across systems. These risks are very real, but reacting to them with anxiety is the wrong response. The solution is to anchor AI security spend to business impact through a formal business impact analysis (BIA), which provides a systematic process for identifying what happens when a critical function fails and quantifying the maximum credible loss. BIA is the antidote to budgeting based on vendor claims or worst-case scenarios divorced from your actual operations.

Understanding this complexity upfront, through BIA, helps you invest in the right detection and response capabilities rather than guessing what might be needed. The BIA framework asks: Before shifting critical business processes from deterministic, rules-based logic to AI-driven workflows, what happens if the agent takes an incorrect action, hallucinates, or is manipulated? What is the maximum credible loss?

When agents interact with revenue systems, regulated data such as personal health or financial information, or high-value transactional workflows, the potential for loss or negative business impact increases. If the organization’s risk appetite is conservative, these higher impact use cases will naturally push the spend upward. This is rational budgeting: attach spend to a formal BIA and align investment with the organization’s tolerance for damage if an AI-driven agent fails or is compromised. Fear-based budgeting skips this analysis, either overinvesting in low-risk scenarios or missing critical gaps entirely.

Understanding Threats Without Overreacting

Autonomous agents fundamentally shift the threat landscape, and understanding that shift is essential to rational budgeting. However, understanding the threat does not have to equal fear-based spending in response to it. Autonomous agents break our long-standing dependency on human intention and predictable logic paths. They make decisions dynamically, based on model inferences and environmental inputs, and those decisions are not deterministic.

This shift also blurs trust boundaries. A human operator cannot see the internal reasoning of the underlying model, and the agent often interacts with systems assuming the inputs it receives are legitimate. When those boundaries blur, attacks once confined to the application layer suddenly become possible across the entire workflow. A well-crafted malicious prompt or manipulated data input can instruct an agent to act on connected systems with full confidence.

Indirection prompt injection is a clear example of how this works in practice. A customer support agent summarizes content from systems like the company CRM or cloud drives as part of its normal duties. If an attacker embeds malicious instructions, such as directing the agent to retrieve a customer list and send it externally, the agent will probably comply using its legitimate access. The trust boundary shifts because the company is no longer relying on human or fixed logic to interpret intent before acting. In AI drive agentic systems, data becomes executable instruction within the workflow.

These attack scenarios are not theoretical and should inform your threat model. However, they shouldn’t be driving your budget in isolation. AI systems operate within the full technology stack and should be integrated into a defense in depth security program. This means identity and access change, as well. Agents often operate with blended entitlements inherited from humans, systems, and tooling. Without strict scoping, they can accumulate privileges no single human or system should be granted. The key question for budgeting isn’t “could this happen?” but rather “what is the business impact if it does, and how much should we invest to prevent or detect it?” BIA helps you determine which of these expanded attack surfaces actually threaten your critical operations and deserve proportional investment.

Implement Governance That Enables Innovation, Not Just Control

Many CISOs respond to AI agent anxiety by drafting restrictive policies that can slow company AI adoption. While this is often done out of an abundance of caution, it can lead to greater business risk by holding the company back. In most organizations, shadow AI agents emerge because the barrier to creation is low and the business incentives to experiment are high. The goal is not to prohibit agentic AI use, it’s to shape responsible adoption.

Strong starting points for overall AI governance include ISO 42001, which integrates well with ISO 27001, and the DRAFT COSAIS (Control Overlays for Securing AI Systems) for NIST 800-53, which complements the NIST AI Risk Management Framework. These provide control baselines for responsible AI deployment, but they are not tailored exclusively to agentic AI. Emerging work from the OWASP GenAI Security Project and the Cloud Security Alliance AI Safety Initiative on agent and tool security can help fill those gaps and inform enterprise policy.

Policies alone do not mitigate shadow agent creation. They must be paired with prescriptive operational guidance, which makes secure adoption the default. This includes identifying approved agent platforms, defining clear creation and registration requirements, establishing acceptable use boundaries, and documenting decommissioning expectations. This level of clarity reduces the incentive for employees to bypass governance because the approved path feels easier, faster, and safer than going around it.

Establish a central agent registry, require purpose justification for every new agent, and enforce governance gates before an agent interacts with production systems. This approach also prevents spending money on policies and tools that create the appearance of control without actually reducing operational risk.

Invest in Visibility First, Advanced Controls Second

A common fear response is to purchase the most expensive or sophisticated AI security tools available. This seems reasonable, but without foundational visibility, those tools solve problems you may not have while missing the ones you do. You can’t defend what you can’t see. You need to know which agents exist, which identities are tied to them, what permissions they have inherited, and who or what can interact with them.

Strong lifecycle and identity management is essential. Agents should not accumulate long-lived tokens or ambiguous entitlements. Least privilege must apply to AI-driven agent identities with even more rigor than for human users. Without clear scoping and decommissioning, access can sprawl. Prototype agents may retain access to cloud storage and customer data in the CRM long after testing, while production agents can inherit overextended administration privileges that are implicitly granted to every user interacting with them. Visibility helps you discover these misconfigurations before investing in behavioral detection that would never catch them.

Continuous monitoring of agentic AI workflows needs to include AI-aware behavioral and anomaly detection. Traditional SIEM logic will not detect an agent that is drifting off-mission or misusing legitimate privileges. Build your investment roadmap in stages: visibility and inventory first, then lifecycle controls, then behavioral monitoring. This prevents the costly mistake of deploying advanced detection for agents you didn’t know existed.

Prepare for Regulation Without Overbuilding

Right now, regulatory clarity is uneven. Even without full harmonization, early regulatory signals point in the same direction: transparency, documentation, and human oversight for AI systems. The EU AI Act requires organizations to maintain technical documentation, provide auditability of system behavior, and ensure high-risk AI remains subject to effective human oversight.

The most defensible approach is to invest now in controls that strengthen both operational resilience and future compliance: observability across agent workflows, full auditability of every action, strong identity boundaries, and governance workflows that scale as regulatory expectations mature. Align upcoming spend with transparency, auditability, and oversight capabilities so regulatory shifts align with your program instead of forcing reactive transformation. However, don’t overbuild for hypothetical regulations. The fundamentals, visibility, governance, audit trails serve operational risk management now and will satisfy most reasonable regulatory frameworks ahead.

Focus on What You Can Measure and Improve

As agentic AI becomes part of the organization’s highest-value workflows, security investment will shift from experimental line items into core business and cybersecurity planning. Overall spend will increase because business dependency on autonomous systems will increase. More spend does not mean uncontrolled spend, and it certainly doesn’t mean fear-driven spend.

Organizations that build visibility into where agents operate, how they interact with data stores and APIs, and what their failure modes look like will be able to calibrate investment. CISOs who understand the true operational footprint of agentic AI may determine some agents require hardened entitlements, behavioral monitoring, and audit trails, while others function safely with lightweight controls. Visibility becomes the most powerful cost optimizer because it prevents over-engineering and focuses resources where agent risk is genuinely material.

Conduct a holistic assessment of your agent footprint, build a three-year security roadmap anchored in real use cases, and refine investment as new threat models emerge. The organizations that thrive will be those that treat agent security as a measurable discipline grounded in business impact, rather than reaction to the loudest vendor pitch or the most frightening headline.

ABOUT THE AUTHOR

Diana Kelley

Diana Kelley is the chief information security officer (CISO) at Noma Security, where she serves as a trusted advisor to customers while spearheading strategic programs to support continuous innovation and AI security leadership. Her past career experience includes serving as CISO at Protect AI and other senior leadership roles at major technology and cybersecurity companies, including Microsoft, IBM Security, and GM at Symantec. A recognized voice in the industry, Kelley serves on multiple advisory boards including WiCyS, The Executive Women’s Forum (EWF), and InfoSec World.

Risk Management Strategies for Tech Startups
According to the Bureau of Labor, two out of every 10 businesses fail within their first year of operation. If...
READ MORE >
Mitigating Security Fatigue: Safeguarding Your Remote Team Against Cyberthreats
Is your team suffering from security fatigue? Endless tech security requirements, warnings, and news stories can get overwhelming. At the...
READ MORE >
Building Personal Resilience Through Gratitude and Meditation
Subscribe to the Business Resilience DECODED podcast – from DRJ and Asfalis Advisors – on your favorite podcast app. New...
READ MORE >
Strategies for Effective Risk Management for Outdoor Events
While there are many perks to having an outdoor event — fresh air, less maintenance, and reduced costs — there...
READ MORE >