How Truist Built Auditable AI Foundations

Insights from an interview series with Karan Jain and Sanjay Sankolli (Truist).

Enterprise AI isn’t failing because the models are weak. It’s failing because most organisations lack the foundational data architecture required to support rapid deployment while satisfying rigorous compliance and audit demands.

In the latest CDO Executive Spotlight, Karan Jain (CEO at NayaOne) and Sanjay Sankolli (Head of AI & Data Architecture at Truist) break down exactly how one of America’s largest financial institutions is solving this challenge.

Watch the conversation with Sanjay Sankolli below (Part 1).

The Core Friction in Financial Services

CDOs, CTOs, and Heads of AI in banking and insurance face the same tension every day: stakeholders demand fast AI innovation, while regulators, auditors, and risk teams require ironclad traceability, explainability, and control.

Traditional approaches create two extremes:

  • Move fast → high risk of compliance failures
  • Stay compliant → slow innovation and missed opportunities

Truist’s approach bridges this gap by treating data foundations as the critical enabler of both speed and auditability.

Key Lessons from Truist’s Playbook

1. Treat AI as an Operating Model Transformation – Not Just a Technology Project

Sanjay emphasises that successful AI initiatives require changing how the organisation operates, not just deploying new models. This starts with aligning business, tech, risk, and compliance teams from day one.

2. Build Resilient, Auditable Data Foundations

Rather than retrofitting governance onto existing systems, Truist focuses on a modern data architecture that makes auditability a built-in feature instead of an afterthought. This includes:

  • Turning fragmented data ecosystems (often the result of years of M&A and legacy systems) into decision-ready, traceable intelligence.
  • Creating high-fidelity environments that mirror real production constraints during evaluation.
  • Embedding governance guardrails early in the process.

3. Focus on Augmentation and Workflow Acceleration

The clearest wins today are not in full autonomy but in human-AI collaboration:

  • Front office: Customer service deflection and predictive servicing
  • Middle office: Enhanced fraud detection, KYC/AML, and claims triage
  • Back office: Document intelligence and process automation
  • Horizontal: Significant gains in developer productivity

These use cases deliver measurable bottom-line impact while operating safely within regulated boundaries. In Part 2, Sanjay dives deeper into the highest-ROI use cases across front, middle, and back office.

4. Close the Pilot-to-Production Gap with Evaluation Fidelity

Sanjay stresses running pilots that reflect actual enterprise conditions – messy data, integration complexity, latency requirements, and governance realities – rather than sanitised test environments.

Market Patterns: Where Leading Financial Institutions Are Finding Traction

Across banking and insurance, a consistent pattern is now visible in the data. Leading institutions are moving beyond isolated pilots by treating auditable data foundations as the non-negotiable bridge between innovation velocity and regulatory rigour. McKinsey’s 2025/2026 State of AI research shows financial services organisations leading in responsible AI maturity, yet the majority still face a pronounced governance and scaling gap – particularly with agentic AI workflows. High performers, however, are achieving measurable results by embedding traceability, explainability, and risk controls from day one.

The highest-ROI use cases cluster in three areas:

  • Risk and compliance (real-time fraud detection, automated KYC/AML, and regulatory reporting with built-in audit trails),
  • Customer and middle-office workflows (personalised servicing, claims triage, and credit-risk memo generation), and
  • Operational acceleration (document intelligence and developer productivity gains of 30–50%).

These deployments share one common trait: they treat auditability not as a compliance tax but as a structural advantage that shortens time-to-value and reduces regulatory exposure. This is precisely the pattern Truist has operationalised at enterprise scale – and it explains why a growing cohort of CDOs and CTOs are now prioritising resilient data architecture over pure model sophistication. 

Sanjay continues in Part 3 with practical guidance on evaluation fidelity, governance guardrails, and closing the pilot-to-production gap.

Leadership Lessons from Scaling AI at Truist

In the final part of the series, Sanjay Sankolli shares key insights on the operating model shifts required, how to position governance as an enabler rather than a blocker, and practical advice for CDOs and AI leaders navigating this transformation at enterprise scale.

Closing the Gap Between Ambition and Production

The organisations that will lead in the next wave of financial services AI are those that treat robust, auditable data foundations not as a compliance checkbox, but as the strategic enabler of sustainable scale. Truist’s experience demonstrates that when architecture, governance, and operating models are aligned from the outset, the pilot-to-production journey becomes predictable rather than painful.

At NayaOne, we help CDOs, CTOs, and AI leaders replicate this success by providing secure, high-fidelity evaluation environments that mirror the complexity of real regulated production systems. Using isolated digital sandboxes, synthetic data libraries, and structured vendor validation workflows, institutions can test, de-risk, and accelerate AI initiatives – all while maintaining full auditability and governance control.

Ready to close your own pilot-to-production gap? Request a Demo to see how leading financial institutions are building the resilient foundations required for enterprise AI at scale.
