
Overcoming the Expertise Barrier: Rethinking Artificial Intelligence Adoption in Financial Services

It’s no secret that financial institutions have been investing heavily in artificial intelligence. From fraud detection to hyper-personalised customer experiences, the potential is widely acknowledged. But there’s a sticking point—turning potential into reality has proven harder than anticipated.

While proofs of concept and pilots may generate buzz, scaling these efforts across complex organisations is where many stumble. There's no shortage of intelligence or intent. So, what's actually getting in the way?

A recent LinkedIn post by Conor Grennan, referencing a 2006 Harvard Business Review article, offers a surprisingly human explanation: expertise itself can be a barrier. If that sounds paradoxical, let’s unpack it and explore how platforms like NayaOne can shift the dynamic.

The curse of knowledge—why expertise can slow AI adoption

In the 1990s, Stanford graduate student Elizabeth Newton ran an experiment in which participants tapped out the rhythms of popular songs while listeners tried to guess them. Tappers assumed the listeners would recognise the tunes quickly. They rarely did. The tappers, armed with all the context, couldn't understand how others could miss it.

This is what’s known as the “curse of knowledge”. Once we know something, it becomes difficult to imagine not knowing it. And this is playing out right now in boardrooms and innovation labs across the financial services world.

AI-savvy professionals—data scientists, engineers, innovation leads—see the possibilities clearly. But when communicating those possibilities to business stakeholders or risk teams, there’s often a disconnect. It’s not a lack of willingness. It’s a gap in language, context, and mental models.

This isn’t just a theoretical problem. It has real-world implications. Misalignment between technical and business functions slows decision-making, creates friction, and can lead to costly reworks. In some cases, initiatives stall altogether—not because the technology failed, but because the understanding wasn’t there to support it.

And that’s a big problem. Because artificial intelligence adoption isn’t a solo mission. It requires buy-in, trust, and coordinated effort across departments that don’t always speak the same language.

Why internal evangelism isn’t enough

Every major organisation has a few AI champions—people who are excited about the tech, who experiment, share insights, and push things forward. They’re invaluable. But they can’t carry the weight of adoption on their own.

Too often, we see a well-meaning strategy: get the experts to teach the rest. But here’s the issue. Experts aren’t always the best educators. The curse of knowledge makes it hard to meet others where they are. Even with slide decks and lunch-and-learns, the core understanding doesn’t always land.

Moreover, financial institutions operate in tightly regulated environments. There’s less room for experimentation, more scrutiny, and a higher bar for validation. In this setting, informal knowledge-sharing won’t cut it.

Even when people want to learn, the formats aren’t always accessible. Lengthy documentation and theoretical frameworks don’t resonate with someone trying to assess a use case’s real-world feasibility or compliance impact. People need exposure that’s contextual, practical, and low-risk.

This is where structured, repeatable approaches to upskilling and experimentation become vital. Teams need access to real tools, data, and use cases—something more than theory and a couple of whitepapers.

Bridging capability with experience through sandboxing

One of the most effective ways to overcome the expertise gap is by creating safe, practical environments where teams can build, test, and learn without high stakes. Sandboxing—a controlled space for experimentation—has emerged as a powerful method for achieving this.

In these environments, different functions across a financial institution can engage with AI-driven solutions in a hands-on way. A product team might prototype an AI-powered KYC process to explore how it integrates with current systems. Compliance officers can examine data flows and assess regulatory implications. Risk managers can simulate real-world scenarios and evaluate outcomes with minimal overhead.

What makes this approach effective is that it shifts understanding from theory to practice. Instead of relying solely on documentation or internal presentations, teams experience the technology directly. They develop a grounded understanding of how it works, what it’s capable of, and where it might fall short.

Sandboxing also shortens the feedback loop. Solutions can be tested and iterated in days rather than months, bypassing lengthy procurement processes or integration delays. This fosters a culture of experimentation—one that doesn’t just talk about innovation but actively enables it.

Crucially, this kind of setup helps reposition artificial intelligence adoption as a shared responsibility. It’s no longer confined to technical departments or innovation teams. It becomes something that risk, legal, operations, and business units can all meaningfully participate in and influence.

Designing for behavioural change, not just technical implementation

One of the key takeaways from Conor Grennan’s post is that knowledge alone doesn’t drive change—behaviour does. And behaviour isn’t changed by presentations. It’s changed by stories, experiences, and shared understanding.

In the context of AI adoption, that means less focus on proving the tech works and more on showing people how it changes their work. What decisions will be faster? What manual tasks will disappear? Where does human judgement still play a critical role?

At NayaOne, we’ve seen that financial institutions make the most progress when they combine experimentation with narrative. That might look like internal showcases, cross-functional sprints, or collaborative exploration of a particular challenge using AI tools from our fintech marketplace.

We also emphasise the role of “champion pairs”—pairing an AI expert with a business stakeholder in short projects. This helps demystify the technology and creates shared accountability. It’s a simple shift in structure that makes a big difference in how knowledge is absorbed and acted upon.

And because the platform provides transparency into data usage, model behaviour, and integration paths, it helps address concerns early. This reduces resistance and allows people to build new habits around how they evaluate and adopt emerging technologies.

Behavioural change might not be as flashy as deploying a new model, but it’s what ensures those models actually get used—and deliver value.

How NayaOne accelerates meaningful artificial intelligence adoption

There’s no doubt that artificial intelligence will reshape financial services. But getting there isn’t just about hiring data scientists or signing off on a new platform. It’s about equipping people, across roles and levels, with the tools and experiences that make AI feel accessible, relevant, and trustworthy.

It also means pairing the right technology with the right guidance. That’s where digital transformation consulting plays a crucial role—helping institutions align strategic goals with practical implementation and ensuring that innovation isn’t siloed but shared.

We don’t just focus on infrastructure. We build ecosystems where experimentation leads to understanding, and understanding leads to action.

By tackling the human challenges that come with technological change, like the curse of knowledge and behavioural inertia, organisations can move from good ideas to scalable, meaningful impact.

Artificial intelligence adoption isn’t a technical problem. It’s a collective challenge. And it’s one we’re ready to solve—together.
