Generative AI is enabling a surge in sophisticated scams
Scammers are no longer limited to phishing emails and robocalls. Thanks to generative AI, today’s fraudsters can replicate voices, stage deepfaked video calls, and build entire synthetic identities from scraps of online data. The threat has become so advanced that even Google is stepping in. As of May, the company has rolled out new AI-powered tools in Chrome to detect and block scams in real time, including impersonation attempts and suspicious pop-ups.
The result is a fraud landscape that’s nearly unrecognizable from even five years ago, and far harder to police.
In 2024, a finance employee at the multinational firm Arup in Hong Kong was tricked into wiring $25 million after a deepfake video call convinced them they were speaking with the company’s CFO. The call wasn’t real, but the consequences were. The law firm Osborne & Francis says incidents like these are becoming increasingly common.
According to Business Insider and Wall Street Journal projections, AI-driven fraud could cost Americans over $40 billion by 2027. Yet US law is lagging behind. Many state and federal statutes don’t yet address the nuances of AI-generated deception, especially when it comes to assigning liability.
Legal Challenges:
- Deepfakes complicate attribution, making it harder to prove who “said” or “sent” something.
- Victims struggle to verify the falsity of digital communications or recordings.
- Legal definitions of impersonation, identity theft, and consent need modernization to keep pace with AI’s capabilities.
“The legal system wasn’t built for this level of deception. We’re seeing cases where individuals and corporations fall victim to incredibly realistic deepfakes or AI-generated conversations, but the burden of proof still falls heavily on the victim.
“The problem isn’t just the technology but the legal lag. Our statutes on fraud and impersonation are outdated. Prosecutors and judges are trying to apply 20th-century laws to 21st-century crimes.
“Victims of AI-powered fraud, whether individuals or businesses, often don’t know how to pursue justice. Most people don’t realize that something as simple as screen-recording a suspicious video call or saving metadata from an email can make or break a case. As AI becomes more powerful, the law must evolve, or victims will continue to pay the price,” says legal expert Joseph Osborne of Osborne & Francis.
Osborne’s advice to businesses and individuals includes:
- Establish verification protocols for financial transactions, especially over video or email.
- Educate teams on how to recognize synthetic voice/video anomalies.
- Consult legal counsel early if you suspect AI was involved in a fraudulent act.
As generative AI tools advance, so do the scammers using them. The challenge for the legal system is not just catching up but staying ahead. Legal professionals are now leading the charge in protecting victims, pushing for smarter laws, and redefining what legal protection looks like in an AI-powered world.