As artificial intelligence continues to evolve, 2025 promises to bring both exciting advancements and alarming challenges, according to Michael Lieberman, CTO and cofounder of Kusari. Lieberman predicts a surge in covert attacks on AI systems, including the use of malicious models and data poisoning campaigns. With many organizations relying on free, pre-trained models with limited transparency, the AI supply chain faces significant risks. Unless the industry adopts more robust defenses, it may take a high-profile breach to spur meaningful action.
- Attacks on AI systems will become both more widespread and harder to detect. A significant concern lies with free models hosted on platforms like Hugging Face, where models have already been discovered to contain malware. I expect such attacks to increase, though they will likely grow more covert: malicious models may carry hidden backdoors or be intentionally trained to behave harmfully only in specific scenarios. (The first sketch after this list shows how such payloads are typically scanned for.)
- Data poisoning attacks aimed at manipulating LLMs will become more prevalent, though poisoning is more resource-intensive than simpler tactics such as distributing malicious “open” LLMs. (The second sketch after this list illustrates the mechanism at toy scale.) Most organizations are not training their own models; they rely on pre-trained ones, often available for free. The lack of transparency about where these models come from makes it easy for malicious actors to introduce harmful ones, as evidenced by the Hugging Face malware incident. Future poisoning efforts are likely to target major players like OpenAI, Meta, and Google, whose models are trained on datasets so vast that tampering is harder to detect.
- In 2025, attackers are likely to outpace defenders. Attackers are financially motivated, while defenders often struggle to secure adequate budgets because security is rarely viewed as a revenue driver. It may take a significant AI supply chain breach, akin to the SolarWinds SUNBURST incident, to prompt the industry to take the threat seriously. (The final sketch after this list shows one basic control, artifact verification, that teams can adopt today.)
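To make the malicious-model risk concrete: most payloads found in hosted model checkpoints abuse Python's pickle format, which PyTorch checkpoints use and which can execute arbitrary code on load. Below is a minimal sketch, in the spirit of scanners like picklescan, that reports which modules a checkpoint would import when deserialized. The filename and denylist are illustrative assumptions, not a production tool.

```python
import io
import pickletools
import zipfile

# Modules a benign model checkpoint has no reason to import; any hit
# deserves manual inspection. This list is illustrative, not exhaustive.
DENYLIST = {"os", "posix", "nt", "subprocess", "sys", "builtins",
            "socket", "shutil", "runpy", "webbrowser"}


def _pickle_bytes(path: str) -> bytes:
    # Modern torch.save() wraps the pickle inside a zip archive (data.pkl);
    # older checkpoints are a raw pickle stream.
    if zipfile.is_zipfile(path):
        with zipfile.ZipFile(path) as zf:
            name = next(n for n in zf.namelist() if n.endswith("data.pkl"))
            return zf.read(name)
    with open(path, "rb") as f:
        return f.read()


def scan(path: str) -> list[str]:
    """Report suspicious imports the pickle would perform on deserialization."""
    findings, recent_strings = [], []
    for opcode, arg, pos in pickletools.genops(io.BytesIO(_pickle_bytes(path))):
        if opcode.name in ("SHORT_BINUNICODE", "BINUNICODE", "UNICODE"):
            recent_strings = (recent_strings + [arg])[-2:]
            continue
        if opcode.name == "GLOBAL":
            module, name = arg.split(" ", 1)
        elif opcode.name == "STACK_GLOBAL" and len(recent_strings) == 2:
            # Heuristic: STACK_GLOBAL takes module/name from the stack,
            # usually the two most recently pushed strings.
            module, name = recent_strings
        else:
            continue
        if module.split(".")[0] in DENYLIST:
            findings.append(f"byte {pos}: imports {module}.{name}")
    return findings


if __name__ == "__main__":
    # "pytorch_model.bin" is a placeholder for a downloaded checkpoint.
    for finding in scan("pytorch_model.bin"):
        print(finding)
```

Formats like safetensors sidestep this class of payload entirely by storing only tensor data, with no code execution on load.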
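Data poisoning is easiest to see at toy scale. The sketch below flips a fraction of training labels on a small scikit-learn classifier and compares test accuracy against a clean baseline; real attacks on LLM training corpora are far subtler and larger, but the mechanism, corrupting training data to change model behavior, is the same. All parameters here are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification task standing in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison 20% of the training set by flipping labels, a crude stand-in
# for an attacker corrupting a scraped corpus.
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.2 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Poisoned accuracy is usually measurably lower; neither model "knows"
# its training data was tampered with.
print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {poisoned.score(X_test, y_test):.3f}")
```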
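Finally, one defense that does not require waiting for a SUNBURST-scale wake-up call is verifying model artifacts against pinned digests before loading them. The sketch below assumes a hypothetical filename and a digest published through some trusted channel (for example, a signed release manifest or an SBOM entry); real supply chain controls would add cryptographic signatures, for example via Sigstore.

```python
import hashlib
import sys

# Placeholder: in practice this digest comes from a trusted, out-of-band
# source such as a signed release manifest or an SBOM entry.
EXPECTED_SHA256 = "<pinned digest goes here>"


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so large checkpoints fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


if __name__ == "__main__":
    actual = sha256_of("model.safetensors")  # placeholder filename
    if actual != EXPECTED_SHA256:
        sys.exit(f"refusing to load model: digest mismatch ({actual})")
    print("digest verified")
```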

