For all the benefits AI brings, it has quietly opened a dark door that needs significantly more attention. Deepfake voice scams, synthetic phishing emails, impersonation attacks and AI-generated malware are rapidly increasing. Thanks to cheap, easy-to-use generative AI tools, cybercriminals now have faster, more affordable and more effective ways to launch hard-to-detect, high-impact attacks.
This shift marks a turning point in the cybersecurity arms race. AI is giving threat actors a new, decisive edge while making it difficult for individuals, enterprises, and governments to keep pace. This is evident in the 223% year-over-year surge in the deepfake tool trade. Yet few organizations can say they’re truly prepared. Some underestimate the threat, while others lack the technical depth, resources, or speed needed to respond. Legacy systems, siloed data and a shortage of upskilled workers are holding them back.
As offensive AI grows stronger, defensive AI must evolve even faster. Traditional safeguards, such as firewalls, vulnerability management, and network segmentation, are still essential, but they’re no longer enough on their own. In short, to combat AI-enabled threats, organizations must embrace AI-enabled defenses.
AI has revved up social engineering
Generative AI has revolutionized the economics of cybercrime by removing the technical and human limitations in one of its primary tactics: social engineering. AI enables the creation of accurate, tailored and compelling messaging, graphics, web pages, applications and more. Cybercriminals with limited technical and language capabilities can now orchestrate large-scale, realistic, multilingual, and psychologically manipulative attacks that more successfully deceive individuals into taking actions they would not otherwise take.
Using a few comments from an earnings call or a clip from a podcast, for instance, AI tools can quickly clone an executive’s voice, allowing criminals to impersonate the boss in voicemails, calls and even video chats.
This isn’t just theory; it’s happening. One multinational engineering firm lost $25 million last year after an employee fell victim to a deepfake video call impersonating the CFO.
The rise of synthetic social engineering is making trust a liability, which means defenses must shift accordingly. Systems must transition from relying on identities to validating behaviors.
Modern resilience requires not only multi-factor authentication at login but also layered validation protocols throughout every network. Tools such as known-number callbacks, where a call to a preset phone number is required as part of the verification process, and session-based authentication, which can include defined expiration times and other security measures, are now essential.
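As a rough sketch of how a defined session expiration and an out-of-band check might be enforced, consider the following Python example. The class, the 15-minute window and the list of high-risk actions are illustrative assumptions, not a reference implementation.

    import secrets
    import time
    from dataclasses import dataclass, field

    SESSION_TTL_SECONDS = 15 * 60  # illustrative: trust a session for at most 15 minutes
    HIGH_RISK_ACTIONS = {"wire_transfer", "privilege_change"}  # require extra verification

    @dataclass
    class Session:
        user_id: str
        token: str = field(default_factory=lambda: secrets.token_urlsafe(32))
        issued_at: float = field(default_factory=time.time)
        verified_out_of_band: bool = False  # e.g., a known-number callback was completed

        def is_valid(self) -> bool:
            # A session is trusted only until its defined expiration time.
            return (time.time() - self.issued_at) < SESSION_TTL_SECONDS

        def may_perform(self, action: str) -> bool:
            # High-risk actions require out-of-band verification, not just a live session.
            if not self.is_valid():
                return False
            if action in HIGH_RISK_ACTIONS and not self.verified_out_of_band:
                return False
            return True

    session = Session(user_id="alice")
    print(session.may_perform("read_report"))    # True while the session is fresh
    print(session.may_perform("wire_transfer"))  # False until the callback is confirmed

The specific mechanism matters less than the principle: trust decays over time, and money-moving actions demand a second, human-verified channel.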
This threat evolution also means the traditional phishing training many organizations rely on is no longer sufficient. Static training modules and one-size-fits-all educational videos can’t keep up; proactive, adaptive training strategies are now required to sharpen employees’ judgment under pressure.
Defensive AI adoption lags
Many CEOs and board members remain concerned about the ROI of AI for cybersecurity. Meanwhile, those eager to invest often run into resistance from compliance teams stonewalling adoption.
Reluctant executives and budget hawks can shoulder some of the responsibility for slow AI adoption, but they’re hardly the only barriers. Increasingly, employees are voicing legitimate concerns about surveillance, privacy and the long-term impact of automation on job security. At the same time, enterprises may face structural issues when it comes to integration: fragmented systems, a lack of data inventory and access controls, and other legacy architectures can also hinder the secure integration and scalability of AI-driven security solutions.
Meanwhile, bad actors face none of these considerations. They have immediate, unfettered access to open-source AI tools, which can enhance the speed and force of an attack. They operate without AI guardrails, governance, oversight or ethical constraints. This allows them to weaponize new tactics, innovate and move faster than most security solutions and teams can keep up.
This asymmetry is real, but it is not insurmountable. When scaled properly, defensive AI can help security teams level the playing field. However, for that to happen, organizations must first overcome the structural, cultural and technological hurdles that prevent secure adoption.
Identity and behavior are the new perimeters
Legacy security strategies built around static credentials and trusted network zones may no longer be valid, which means it’s time to change more than just passwords.
That’s not to say firewalls, VPNs and endpoint protection aren’t critical, but they are severely challenged by today’s hybrid work models, deepfake impersonations and living-off-the-land tactics.
Modern defenses must center on identity validation, role-based access controls, and behavioral context, because we’ve entered an era where the key questions are:
Who is in your systems?
What can they do with that access?
What are they doing?
Is their account behavior expected and acceptable in that moment?
The traditional act of logging into an application or system is no longer sufficient. Static authentication provides a single point of trust, and with it a single point of failure. Today’s advanced behavioral biometrics, for example, can validate a user throughout their session by analyzing typing cadence, navigation flow, and even mouse movements. This type of security, however, will also be challenged by advancing AI that learns a user’s or application’s normal behavior and mimics it, too.
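A minimal sketch of how such session signals can feed an anomaly detector, assuming scikit-learn’s IsolationForest and treating the features (keystroke interval, mouse speed, pages per minute) and their values as illustrative stand-ins for a real biometrics pipeline:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Historical per-session features for one user:
    # [mean keystroke interval (ms), mean mouse speed (px/s), pages visited per minute]
    baseline_sessions = rng.normal(loc=[180.0, 420.0, 3.0],
                                   scale=[20.0, 50.0, 0.5],
                                   size=(200, 3))

    model = IsolationForest(contamination=0.01, random_state=0).fit(baseline_sessions)

    def session_looks_legitimate(features) -> bool:
        # IsolationForest returns 1 for inliers and -1 for outliers.
        return model.predict(np.array([features]))[0] == 1

    print(session_looks_legitimate([185.0, 430.0, 2.8]))   # close to the user's baseline
    print(session_looks_legitimate([40.0, 1500.0, 30.0]))  # scripted, machine-like activity

In practice this scoring would run continuously during the session rather than once at login, which is precisely what makes it harder, though not impossible, for an attacker to mimic.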
Session-based authentication extends that trust, ensuring it’s maintained across the entirety of a user’s interaction, not just at access. Likewise, multi-factor authentication (MFA) must be selected carefully. One-time passwords delivered via SMS or email are increasingly susceptible to SIM swapping and spoofing attacks. More resilient methods, including app-based MFA and biometric authentication, offer stronger safeguards.
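As one concrete example, app-based MFA typically relies on time-based one-time passwords (TOTP). A minimal sketch using the pyotp library follows; the account and issuer names are placeholders.

    import pyotp

    # Enrollment: generate a shared secret once and store it server-side.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # The provisioning URI is usually shown to the user as a QR code so their
    # authenticator app can store the same secret.
    print(totp.provisioning_uri(name="alice@example.com", issuer_name="ExampleCorp"))

    # Login: verify the 6-digit code the user types in. Unlike an SMS one-time
    # password, the code never transits the carrier network, so SIM swapping and
    # message interception don't expose it.
    submitted_code = totp.now()  # stand-in for the user's input
    print(totp.verify(submitted_code))  # True within the current time window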
Insider threat detection is also maturing. AI models can detect suspicious behavior, such as unusual access to data, privilege changes or timing inconsistencies, that may indicate a compromised account or insider threat.
Early adopters, such as financial institutions, are using behavioral AI to flag synthetic identities by spotting subtle deviations that traditional tools often miss.
They can also monitor behavioral intent signals, such as a worker researching resignation policies before initiating mass file downloads, providing early warnings of potential data exfiltration.
Organizations should start by understanding the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework (AI RMF 1.0) and the Cybersecurity and Infrastructure Security Agency’s (CISA) Zero Trust Maturity Model. These frameworks help ensure consideration of risk factors that are not commonly understood.
As cybercriminals adopt generative tools to attack with greater precision, speed, and scale, defenders and tech executives must act accordingly. Remaining reactive or risk-averse is no longer an option. Organizations should invest in AI for automated defense as well as a real-time understanding of identity, context and behaviors. In addition to hardening every system, the goal for every tech executive should be to help their teams outlearn and outpace attackers in real time. The AI arms race in cybersecurity is well underway, and one side has a clear lead.