Ron Reiter, co-founder and CTO of Sentra, and Eoin Hinchy, CEO and co-founder of Tines, wanted to pass along a few predictions for 2025.
Eoin Hinchy's predictions:
The gap between AI’s promise and reality will widen without better integration
AI’s progress will continue, but unless we bridge the gap between promise and reality through workflows, most companies won’t see the returns they’re hoping for.
In 2024, we saw the "delta" between AI's promise and the reality of putting it to work in an enterprise. Foundational models are getting smarter all the time, but they're only as good as the data they can access. Today, they aren't trained on proprietary business data, the real goldmine that makes a company successful, and that's what keeps AI from fully realizing its potential in enterprise settings. To see this gap close in 2025, models need access to that data.
The true opportunity, then, is in using workflows to close this gap through seamless integration with a company's systems. That's exactly what we're seeing with clients who use workflows to connect AI to their tools and data. The future of enterprise AI isn't just about making AI smarter, but about making it relevant through integration.
Transparency will be the key to building trust in AI
Transparent workflows will be essential to making AI trustworthy over the next year, allowing people to look “under the hood” and see how decisions are made.
When it comes to AI trust, transparency is absolutely essential. If users can’t see how an AI solution came to a certain decision, they’re going to be skeptical about letting it into critical parts of the business. That’s why workflows have a huge role to play in giving users a transparent view of each step of the process. If you ask, “What’s our annual recurring revenue (ARR)?” and the AI spits out a number, workflows should let you dig into how that number was arrived at. You’d be able to see which workflow ran, the query made in Salesforce, and the raw results that came back.
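To make this concrete, here's a minimal sketch of such an auditable workflow in Python. The function names, the SOQL query, and the hard-coded result are all hypothetical stand-ins for a real Salesforce integration; the point is that every step the AI took is recorded and can be inspected:

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowTrace:
    """Records each workflow step so an AI-produced answer can be audited."""
    steps: list = field(default_factory=list)

    def record(self, name, detail, result):
        self.steps.append({"step": name, "detail": detail, "result": result})
        return result

def answer_arr(trace):
    # Hypothetical stand-in for a real Salesforce query and its raw response.
    soql = "SELECT SUM(Amount) FROM Opportunity WHERE IsWon = true"
    raw = trace.record("salesforce_query", soql, {"totalAmount": 12_500_000})
    arr = trace.record("compute_arr", "sum of won opportunity amounts", raw["totalAmount"])
    return arr

trace = WorkflowTrace()
print(answer_arr(trace))   # the number the AI "spits out"
for step in trace.steps:   # ...and exactly how it was arrived at
    print(step["step"], "->", step["result"])
```

Instead of a black-box answer, a user can walk back from the final number to the query that produced it.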
Transparency builds trust, especially in complex environments. For companies investing in AI in 2025, it’s this transparency that makes all the difference between a tool that’s useful and one that’s just a black box.
Prompt engineering will become a core skill, not a specialist role
The role of the “prompt engineer” will disappear as everyone learns to interact with AI tools as a basic skill, similar to using Microsoft Excel.
Prompt engineering is not going to stay a specialized role. It’s a bit like how Excel used to be considered a specialized skill, but now most people know the basics. Prompt engineering is just the ability to articulate what you want clearly and effectively—anyone can learn that. It’s not some ultra-technical task that requires a dedicated job title.
Over the next year, prompt engineering will start to feel like an essential skill across the workforce rather than a niche role. Models are also improving in their ability to interpret natural language, which makes prompt engineering even simpler. Soon, I think we’ll all be doing it as part of our regular jobs.
AIOps teams will rise, but fully autonomous AI is still years away
In 2025, we’ll see the rise of dedicated AIOps teams to manage AI operations, while the appeal of “agentic AI” (AI acting independently) won’t pan out as fast as some people are predicting.
As companies adopt more AI, there’s going to be a need for specialized AIOps teams. These teams will handle everything from model deployment to managing quotas and security for AI-driven workflows, sort of like how infrastructure teams manage data pipelines. However, the idea of fully autonomous “agentic AI”—AI that runs without human oversight—will remain a distant reality.
Real-world complexity will likely prevent us from going fully autonomous in the near future. So many situations require a human in the loop, especially when there's potential for errors. We've seen the most success when AI works with humans, each complementing the other.
Autonomous AI without human checks has led to more problems than solutions. So, for now, a balanced approach where humans and AI work together is where we’re going to see value.
The push for measurable ROI on AI investments will intensify
In 2025, it will no longer be enough to just “adopt AI”—companies will need hard ROI metrics to prove its value.
We’re now a couple of years into the generative AI boom, and I think it’s fair to say that the technology hasn’t yet lived up to its hype. CIOs and CTOs will demand concrete metrics before approving new AI investments. Going forward, companies are going to need hard ROI to justify spending on AI tools. Metrics like “80% of code now touches AI” or “50% of customer queries are resolved by AI” are going to be essential.
It’s no longer enough to just demo an AI solution and assume it will add value. We need quantifiable outcomes. And the companies that can show hard data on cost savings or productivity gains are the ones that will actually see AI succeed in their business.
Ron Reiter’s predictions:
Data security’s metamorphosis
In 2025, we'll see a significant shift from standalone Data Security Posture Management (DSPM) solutions to comprehensive Data Security Platforms (DSP). These platforms will integrate DSPM, Data Access Governance (DAG), Data Detection and Response (DDR), and Data Loss Prevention (DLP) capabilities. This evolution is driven by increasingly complex data environments and the need for a more holistic approach to data security across multi-cloud and on-premises environments. Additionally, the critical role of data in AI and LLM training requires holistic data security platforms that can manage data sensitivity, ensure security and compliance, and maintain data integrity. This consolidation will improve security effectiveness and help organizations manage the growing complexity of their IT environments. DSPs will become a critical component of business operations, directly influencing strategic decisions and enabling faster, more secure innovation.
AI revolutionizes data classification
Data classification is one of the first significant data security problems AI can effectively solve. The ability of AI to accurately classify vast amounts of data will help organizations better manage sensitive information, reduce false positives and negatives, and improve overall data security posture. This advancement will be crucial as data volumes and complexity continue to grow. AI-driven classification systems will become sophisticated enough to understand context and intent, not just content, leading to more nuanced and accurate data protection measures, especially for challenging unstructured data sources. We'll also see longstanding data governance and compliance challenges solved, enabling organizations to automate aspects of data protection that were previously manual and error-prone.
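As a toy illustration of the content-only baseline that AI-driven classifiers improve on, here is a pattern-matching sketch in Python. The patterns and labels are hypothetical; a real system would use a trained model that also weighs surrounding context and intent, not just string matches:

```python
import re

# Hypothetical patterns for two kinds of sensitive data. A production
# classifier would combine signals like this with an ML model that
# understands context, not just content.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the sensitive-data labels found in a chunk of unstructured text."""
    return sorted(label for label, pat in PATTERNS.items() if pat.search(text))

print(classify("Contact jane@example.com, SSN 123-45-6789"))
# → ['email', 'ssn']
```

The weakness of this approach, and the reason AI matters here, is that pure pattern matching cannot tell a real SSN from a lookalike product code, which is exactly where context-aware models reduce false positives.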
Personalization and data security accelerate
In the coming year, we’ll continue to see an increase in the personalization of services across industries like healthcare, retail and financial services. This trend will continue to generate enormous amounts of data, creating more significant security challenges. Organizations must balance the demand for highly personalized experiences with robust data protection measures. This will give rise to innovation in secure data handling and privacy-preserving technologies. New technologies, such as federated learning and homomorphic encryption, will emerge, enabling advanced personalization without compromising individual privacy. These advancements could reshape how businesses approach customer data, allowing them to provide highly personalized services while maintaining strong data protection standards.
AI is a double-edged sword in cybersecurity
In 2025, AI will be both an offensive and defensive force in cybersecurity, with each side pursuing control over critical data. Deepfake-related losses are expected to soar from $12.3 billion in 2023 to $40 billion by 2027 as attackers increasingly leverage AI to create more sophisticated threats. Beyond deepfakes that challenge traditional authentication methods, other AI-powered attack techniques are emerging, such as autonomous malware, social engineering, data exfiltration, and credential stuffing, all of which are significantly harder to detect. This dynamic already began to emerge in 2024, marking the first steps of an intensifying arms race between attackers and defenders, with AI at the center. This AI-driven evolution will fundamentally change cybersecurity, forcing organizations to rethink security strategies and invest heavily in AI-powered defense mechanisms that streamline security processes and detect threats faster. As organizations adapt, new ethical questions will surface, especially around securing training data and AI autonomy in making security-critical decisions.
Compliance and regulation in data protection increase
The interdependence of data for AI development will only intensify the ongoing rollout of data regulation frameworks, especially amid the growing debate around AI regulation. This will drive enterprise compliance efforts, making data security platforms essential for keeping up with a rapidly changing regulatory landscape. Expect more lawsuits and fines related to data breaches and non-compliance, making it imperative for organizations to navigate the complex regulatory environment. While staying compliant will remain a significant challenge for businesses, it could also give rise to AI-powered tools that automatically adapt to new regulations. Organizations that effectively leverage technology like AI-powered data security to maintain compliance will gain a competitive advantage in a heavily regulated future.
The interdependence of data for AI development will only intensify the ongoing rollout of data regulation frameworks, especially amidst the growing debate around AI regulation. This will drive enterprise compliance efforts, making data security platforms essential for organizations to keep up with the rapidly changing regulatory landscape. As expected, there will be more lawsuits and fines related to data breaches and non-compliance, making it imperative for organizations to navigate the complex regulatory environment. While staying compliant will continue to be a significant challenge for businesses, it could also lead to the rise of AI-powered tools that automatically adapt to new regulations. Organizations that effectively leverage technology like AI-powered data security to maintain compliance will gain a competitive advantage in the projected heavily regulated future.