What Changed When AI Innovations Were Tested Rigorously
Last week in London, the FCA hosted the showcase for the first cohort of its Supercharged Sandbox on 28–29 January 2026. For most of the 23 participating firms (selected from 132 applications), the focus was not on whether to apply AI, but on what changes when systems are tested rigorously over five months under real technical constraints.
Across the two days, 20 firms presented systems they had been building and stress-testing since October. These were not exploratory pilots. Teams worked in secure cloud environments, using high-performance GPU-accelerated compute delivered through AWS, NVIDIA-powered infrastructure, and production-grade AI software to surface issues that are usually deferred: edge-case behaviour, explainability trade-offs, data limitations, and operational readiness.
What emerged was not linear progress, but correction. Several teams described rethinking architectures, narrowing scope, or de-risking agentic approaches once systems were exposed to sustained testing. Regulatory proximity did not slow delivery. In many cases, it changed design decisions early and reduced downstream rework.
All experimentation used realistic, non-sensitive datasets, with no interaction with live systems. This removed the safety of abstraction. Instead of debating whether approaches would hold up, teams generated evidence and adjusted course.
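For illustration, here is a minimal sketch of what realistic but non-sensitive test data can look like. Everything below is a hypothetical assumption made for this article; the field names, distributions, and anomaly rate are illustrative, not the sandbox's actual datasets.

```python
# Illustrative only: generating realistic, non-sensitive transaction records
# for sandbox testing. All fields and distributions are hypothetical.
import random
from datetime import datetime, timedelta

def synthetic_transactions(n: int, seed: int = 42) -> list[dict]:
    """Generate n synthetic payment records with no link to real customers."""
    rng = random.Random(seed)  # seeded so test runs are repeatable
    start = datetime(2025, 10, 1)
    records = []
    for i in range(n):
        records.append({
            "txn_id": f"TXN-{i:08d}",  # synthetic identifier, maps to no real account
            "timestamp": (start + timedelta(minutes=rng.randint(0, 200_000))).isoformat(),
            "amount_gbp": round(rng.lognormvariate(3.5, 1.2), 2),  # heavy-tailed amounts
            "channel": rng.choice(["card", "faster_payments", "direct_debit"]),
            "is_injected_anomaly": rng.random() < 0.02,  # ~2% deliberate edge cases
        })
    return records

print(synthetic_transactions(1_000)[0])
```

Seeding the generator keeps runs repeatable, which matters when evidence from testing may need to be reproduced under supervisory challenge.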
What the Supercharged Sandbox Changed
The Supercharged Sandbox builds on the FCA’s Digital Sandbox, provided by NayaOne. What changed with this cohort was depth.
In most AI programmes, infrastructure arrives late. By then, architectures are fixed and change is expensive. The Supercharged Sandbox inverted that sequence.
Through collaboration with the FCA, NVIDIA, and AWS, teams had early access to production-grade compute, tooling, and data while systems were still fluid. This shifted behaviour. Teams stopped optimising for theoretical feasibility and started testing operational viability under compute limits, data gaps, and supervisory challenge.
The result was more decisive experimentation. Some approaches were deliberately de-scoped. Others were strengthened by early evidence. Regulatory engagement became a design input, not a checkpoint.
Five Months of Applied Testing
The 20-week structure gave teams space to move beyond ideas and into delivery-focused work. A clear pattern emerged around agentic and multi-agent systems – not as abstractions, but as orchestration layers tested under failure conditions.
Teams focused on:
- Stress-testing agent behaviour under edge cases and uncertainty (see the sketch after this list)
- Working through data quality and signal gaps
- Embedding explainability, bias controls, and traceability directly into workflows
- Assessing monitoring and operational readiness early
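To make the first of those concrete, here is a minimal sketch of the kind of edge-case harness a team might build: it runs an agent over adversarial inputs, records every decision as a traceable log line, and treats an unhandled error as a finding. The agent interface, the test cases, and the deliberately brittle toy agent are illustrative assumptions, not any cohort firm's system.

```python
# Illustrative only: a minimal edge-case harness with decision traceability.
# The agent interface and cases below are hypothetical examples.
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent_trace")

def stress_test(agent, cases):
    """Run an agent over edge-case inputs, logging a traceable record per decision."""
    results = []
    for case in cases:
        try:
            decision = agent(case["input"])
            outcome = "ok" if decision == case["expected"] else "mismatch"
        except Exception as exc:  # an unhandled error is itself a finding
            decision, outcome = None, f"error: {exc}"
        record = {"case": case["name"], "decision": decision, "outcome": outcome}
        log.info(json.dumps(record))  # traceability: every decision leaves a record
        results.append(record)
    return results

# A deliberately brittle toy agent: approves anything under a threshold.
toy_agent = lambda x: "approve" if x.get("amount", 0) < 10_000 else "escalate"

edge_cases = [
    {"name": "missing_amount", "input": {}, "expected": "escalate"},
    {"name": "boundary_value", "input": {"amount": 10_000}, "expected": "escalate"},
    {"name": "negative_amount", "input": {"amount": -50}, "expected": "escalate"},
]
stress_test(toy_agent, edge_cases)
```

Run as written, the harness flags the missing-field and negative-amount cases as mismatches: exactly the kind of evidence that led teams in the cohort to narrow scope or rework designs early.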
By showcase time, systems reflected iteration and constraint. The emphasis was not novelty, but readiness.
What Access Changed for Teams
Access was the multiplier. With environments, tools, data, and regulatory input available from the start, teams shifted away from logistics and negotiation toward sustained testing, validation, and system understanding.
Experimentation became repeatable and embedded in product development, rather than isolated within innovation cycles.
A Sandbox on Steroids: Enhanced Infrastructure for Real AI Testing
As Matt Lowe, Manager of the FCA’s Innovation Lab, described it during the showcase:
“A sandbox environment on steroids. A sandbox environment with enhanced AI testing infrastructure. We have sparse and rich datasets. We’ve got high-performance compute. We’ve also got developer tools. We’ve worked very closely with NayaOne, as the digital sandbox provider, to ensure firms can onboard and test real tools securely.”
This captures the essence of what made the Supercharged Sandbox transformative: early, production-grade access to everything teams needed – without the usual delays. Building directly on the FCA’s established Digital Sandbox (powered and operated by NayaOne), the Supercharged version added GPU-accelerated compute via AWS and NVIDIA, enriched synthetic and realistic datasets, and streamlined onboarding. The result? Teams could focus on rigorous testing, iteration, and evidence-based decisions from day one, rather than wrestling with setup or infrastructure gaps.
NayaOne’s platform was instrumental here – providing the secure foundation that enabled seamless collaboration between regulators, innovators, and tech partners like NVIDIA and AWS. By handling the heavy lifting of secure environments, rapid vendor integration, and compliance controls, NayaOne turned regulatory proximity into a design accelerator, not a bottleneck.
This close partnership ensured firms could prototype and stress-test agentic AI systems under real constraints (edge cases, data limitations, explainability requirements) in a controlled, non-live setting – proving operational readiness and reducing future rework.
What This Signals for AI in Financial Services
This cohort demonstrated a shift in how AI innovation in financial services can progress:
- Validation moved earlier
- Regulatory dialogue shaped design, not just review
- Decisions were grounded in evidence from meaningful testing
This is difficult to replicate without the combination of infrastructure, supervision, and time. The Supercharged Sandbox shows that regulators can enable responsible innovation not by changing rules, but by changing conditions.
For the teams involved, the signal was clear: when constraints surface early, better decisions follow.
Ultimately, the Supercharged Sandbox is about driving cutting-edge AI innovation into the UK financial services sector – helping firms move faster from concept to viable solutions while staying aligned with regulatory expectations. It positions the UK as a global leader in safe, scalable AI adoption for finance.
If you’re exploring AI testing in financial services, building in a digital sandbox, or curious how NayaOne can help accelerate your next innovation cycle – get in touch.