Surfshark’s report shows that the first half of 2025 saw almost four times as many deepfake incidents (580 in total) as the entire year of 2024, with losses from deepfake-related fraud reaching $410M. Overall, since 2019, deepfake technology used for fraud has resulted in $897M in losses.
“The trajectory of how many incidents happen and how much financial loss they generate is very concerning. As deepfake technology evolves so fast, it is getting easier and easier for criminals to use it for fraudulent activities, especially as no concrete regulations are yet in place to stop them. And even though many actions are underway, like Europe’s AI Act, Denmark’s copyright law reform, and U.S. AI bills, deepfake technology will continue to advance faster than authorities can actually prevent fraudulent incidents from occurring,” says Tomas Stamulis, Chief Security Officer at Surfshark.
T. Stamulis also points out that criminals target both businesses and individuals: of the $897 million total, businesses lost 40% ($356 million) and individuals 60% ($541 million). Individuals are more at risk because they are easier to manipulate and less likely to implement sophisticated security measures.
- The most common deepfake fraud activity is impersonating famous people to promote fraudulent investments, which resulted in $401 million in losses.
- Another method favored by cybercriminals is impersonating company executives to trigger fraudulent transfers ($217 million).
- Another type of fraud involves using deepfake technology to bypass biometric verification systems in order to take out loans or steal data ($139 million).
- Lastly, romance scams, which are widely used by criminal groups, have caused $128 million in losses.
Considering the future evolution of deepfake incidents, T. Stamulis thinks that the number of deepfakes will continue to rise; however, people will eventually become immune to them. For example, today, when an adult receives an extortion message containing an explicit fake image of themselves, their immediate instinct is to comply or go to the authorities. In the near future, however, we will become so accustomed to seeing deepfake content of ourselves and others that we will not be so easily manipulated, but will simply ignore it.
“To achieve this, we need a strong emphasis on educating people to recognize deepfakes, for example, always double-check the source of the content before believing or sharing it; in case of doubt, directly contact the person or institution supposedly behind the message; create a family secret code to verify identity during a suspicious call; never send money or sensitive documents to someone met only online, etc.,” says T. Stamulis.
Lastly, according to the cybersecurity expert, we must also prioritize fostering critical thinking and continuously improving advanced malicious deepfake detection technology.
Methodology and sources
This study used data from the AI Incident Database and Resemble.AI to create a combined dataset covering deepfake incidents since 2017. Incidents were included if they involved the generation of fake videos, images, or audio and were covered by media articles. These incidents were categorized into fraud, explicit content generation, politically charged content, and miscellaneous. For deepfake incidents related to fraud where a financial loss was clearly reported in the article, each case was further classified into one of nine specific fraud subcategories. For the complete research material behind this study, visit here.
ABOUT SURFSHARK
Surfshark is a cybersecurity company offering products including an audited VPN, certified antivirus, data leak warning system, private search engine, and tool for generating an alternative online identity. Recognized as a leading VPN by CNET and TechRadar, Surfshark has also been featured on the FT1000: Europe’s Fastest Growing Companies ranking. Headquartered in the Netherlands, Surfshark has offices in Lithuania and Poland. For information on Surfshark’s operations and highlights, read our Annual Wrap-up. For more research projects, visit our Research Hub.