The State of AI in Business 2025 report from MIT-affiliated researchers dropped a bombshell: 95% of enterprise AI pilots fail to deliver measurable returns. Markets flinched, pundits cried “bubble,” and skeptics piled on. But when you unpack the methodology and the findings, the picture is far less alarming and much more familiar to anyone who has lived through previous waves of enterprise technology adoption.
Weak Data, Big Conclusions
The methodology is thin: 153 conference surveys, 52 interviews with undefined “stakeholders,” and 300 press releases framed as “AI initiatives.” There is no clarity on respondent authority, success metrics, or KPIs. Were these C-suite leaders or peripheral staff? The report doesn’t say, dressing anecdotes up as authoritative statistics. Treat this data as directional, not definitive.
Why Pilots Stall
Anyone who’s worked in enterprise tech understands why AI pilots hit roadblocks. Enterprises operate on rigid quarterly budgets and annual planning cycles, with risk-averse cultures that reward stability over bold moves. Even minor API updates can drag on for months, and reorgs or shifting priorities often halt initiatives in their tracks.
And here’s the bit the MIT report missed: pilots aren’t meant to be perfect. They’re meant to teach you. Fail fast in a NayaOne sandbox and you lose a little. Fail in production and you burn millions. The smart move isn’t avoiding failure - it’s lowering the cost of learning. That’s what deliberate, strategic execution looks like.
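To put rough numbers on that, here is a minimal back-of-envelope sketch in Python. The cost figures are illustrative assumptions we have chosen for the example, not figures from the MIT report; only the 95% failure rate comes from the report’s headline.

```python
# Back-of-envelope comparison: the cost of learning in a sandbox vs. in production.
# SANDBOX_COST and PRODUCTION_COST are illustrative assumptions, not report figures.

SANDBOX_COST = 25_000        # assumed cost of one sandboxed pilot
PRODUCTION_COST = 2_000_000  # assumed cost of a failed production rollout
FAILURE_RATE = 0.95          # the report's headline pilot failure rate

def expected_failure_cost(cost_per_attempt: float, failure_rate: float) -> float:
    """Expected money burned per attempt, given a failure probability."""
    return cost_per_attempt * failure_rate

sandbox = expected_failure_cost(SANDBOX_COST, FAILURE_RATE)
production = expected_failure_cost(PRODUCTION_COST, FAILURE_RATE)

print(f"Expected loss per sandboxed pilot:   ${sandbox:,.0f}")
print(f"Expected loss per production launch: ${production:,.0f}")
print(f"Learning is ~{production / sandbox:.0f}x cheaper in the sandbox")
```

Under these assumed numbers, the same 95% failure rate costs roughly 80 times less per lesson learned in a sandbox than in production. The point is not the exact figures; it is that the failure rate matters far less than where you choose to fail.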
What’s Really Going On
The data tells a familiar story. Consumer tools like ChatGPT are everywhere: 80% of companies have tried them, and 40% have deployed them in some form. Custom enterprise AI is harder: 60% of firms have evaluated it, 20% have piloted it, and just 5% have it in production. Shadow AI shows the gap: 90% of employees use personal AI tools daily, while only 40% of firms have formal LLM subscriptions. This isn’t a bust; it’s the rough early stage, much like cloud or SaaS, when 70-80% of early attempts didn’t stick.
Experimentation Fuels Adoption
Adoption comes from letting teams explore. Give people choice and exposure: Goldman Sachs had staff test three LLMs to see what worked. One of our fintech partners uses our platform to let tech and business teams try no-code/low-code tools and build AI agents at their own pace. Hands-on access builds confidence: people test, learn, and become advocates for AI’s value. In fintech, that is already driving results in fraud detection and compliance.
Speed Sets You Apart
In fintech, speed is everything. Companies that started their AI push 18 months ago are already seeing gains: streamlined operations and real savings. Those stuck in year-long code reviews or risk debates are falling behind. AI-native apps are generating $18.5 billion a year, and back-office automation is cutting $2-10 million in costs for document processing and support. That’s not hype; that’s impact.
Vertical AI Delivers
Generic AI models fall short. Industry-specific AI wins by focusing on:
- Deep Knowledge: Understanding fintech processes like KYC or risk modelling.
- Workflow Fit: Integrating into existing systems like CRMs or compliance tools.
- Data That Builds: Proprietary datasets that get more valuable over time.
Sales and marketing absorb roughly 50% of AI budgets, but operations deliver the returns: finance tools reduce fraud by 20-30%, and retention improves in high-touch areas. With 42% of firms abandoning AI projects in 2025 (up from 17% in 2024), the edge goes to those who act fast and target real problems.
Four Winning Patterns We See in the Field
- Run more proof-of-concepts. Like VC math, assume many won’t deliver but that the few that do can transform the business; a rough sketch of that math follows this list. Diversify experiments so you don’t bet everything on a single initiative.
- Intense bursts by small teams. Hackathons and tiger teams move faster and validate use cases before scaling into the wider organisation.
- Go top-down and bottom-up. Leadership sets focus, governance, and trust, while front-line teams drive experimentation and surface practical use cases. Both are essential.
- Continuously learn and adapt. Treat AI adoption as a series of sprints, with each iteration refining models, processes, and ROI.
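Here is the “VC math” behind the first pattern, sketched in Python. The 5% hit rate echoes the report’s headline stat; the per-POC cost and the payoff per success are hypothetical assumptions for illustration, not NayaOne or MIT figures.

```python
# "VC math" for proof-of-concepts: most fail, but the winners pay for the rest.
# The hit rate echoes the report's ~5% success figure; costs and payoffs are
# hypothetical assumptions for illustration only.

NUM_POCS = 20               # proof-of-concepts run in a year
COST_PER_POC = 50_000       # assumed cost of each sandboxed experiment
HIT_RATE = 0.05             # ~1 in 20 delivers, per the report's headline stat
PAYOFF_PER_HIT = 5_000_000  # assumed value of one successful rollout

total_cost = NUM_POCS * COST_PER_POC
expected_hits = NUM_POCS * HIT_RATE
expected_value = expected_hits * PAYOFF_PER_HIT

print(f"Portfolio cost:      ${total_cost:,.0f}")
print(f"Expected successes:  {expected_hits:.1f}")
print(f"Expected value:      ${expected_value:,.0f}")
print(f"Expected net return: ${expected_value - total_cost:,.0f}")
```

With these assumed numbers, a $1 million portfolio of 20 cheap experiments yields one expected success worth $5 million: a positive expected return even at a 95% failure rate. That is why counting failed pilots, as the MIT report does, misses the economics entirely.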
The Bigger Picture
MIT’s 95% stat is more noise than signal. It misreads pilots as endpoints instead of learning opportunities. AI isn’t a bubble - it’s a long-term shift, hitting the same hurdles cloud and SaaS did. Focus on learning efficiently, let teams experiment with tools, and move quickly but responsibly. At NayaOne, we see this in action - early experiments lead to fintech breakthroughs. Don’t overthink it.
Start Experimenting Today
Don’t sit on the sidelines overthinking risks. Launch a pilot, let your teams test AI tools - LLMs, no-code platforms, whatever fits - and learn what sticks. Speed matters: competitors are already banking value while you’re debating. Focus on vertical AI that leverages your data and workflows. Click below to connect with NayaOne and start testing in our secure sandbox - turn insights into action and stay ahead.