Tradeflock Asia

Kishan Sundar  

CTO, Maveric Systems

Bringing 22+ years of expertise in digital transformation and BFSI strategy, Kishan champions "transformative leadership," integrating Generative AI and edge computing to re-architect the SDLC. His focus on customer-centric, secure technology drives Maveric's global expansion and revenue growth.

For the last few years, banks globally have been in a race to automate customer interactions. The rapid rise of generative AI accelerated this push, with a clear objective: faster service, lower operating costs, and greater personalisation at scale. We saw conversational interfaces that could draft responses instantly and decision engines that promised to anticipate customer needs before they were expressed.

But the early excitement is now giving way to a more measured phase. Business leaders are no longer focused only on what AI can do. They are asking a more fundamental question: can these systems be trusted in live, customer-facing environments?

What we are seeing is a clear recalibration of digital customer experience strategies across banking. The emphasis is shifting from speed to reliability, from automation to accountability. Customer experience, particularly in financial services, cannot be sustained by technology alone. It is sustained by trust.

Two converging forces drive this shift.

First, customers are sceptical. A 2024 report by Salesforce found that 68% of customers say advances in AI make it more important for companies to be trustworthy (Salesforce, 2024). In banking, trust is everything. When an AI-driven interaction provides incorrect guidance on a loan, a charge, or a disputed transaction, the impact goes beyond a technical error. It directly affects confidence in the institution.

Second, supervisory expectations are becoming clearer. Regulators globally have reinforced that automation does not dilute accountability. Supervisory guidance increasingly emphasises transparency, explainability, and clear responsibility for customer outcomes, especially where automated systems influence decisions.

Lending offers a useful illustration. In traditional systems, a declined application could be explained through visible criteria and human judgement. Early AI models, however, often produced outcomes that were statistically valid but operationally opaque. That opacity is no longer acceptable in regulated environments.

As a result, banks are investing more deliberately in explainable AI. This is not a cosmetic feature. It is an engineering requirement. If an AI system declines a credit limit increase or flags a transaction, it must be able to surface the underlying factors in a way that is consistent, traceable, and understandable to both customers and internal teams.
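As a minimal sketch of what "surfacing the underlying factors" can mean in practice, consider a decision engine that records a human-readable reason for every criterion it applies, so the outcome is traceable for customers and internal teams alike. The thresholds, factor wording, and function names below are hypothetical illustrations, not any bank's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class Decision:
    approved: bool
    factors: list[str] = field(default_factory=list)  # human-readable reasons

def assess_limit_increase(utilisation: float, missed_payments: int,
                          months_on_book: int) -> Decision:
    """Illustrative rule-based check that logs every factor it applies,
    making the outcome consistent, traceable, and explainable."""
    factors: list[str] = []
    approved = True
    if utilisation > 0.80:
        approved = False
        factors.append(f"Credit utilisation {utilisation:.0%} exceeds the 80% threshold")
    if missed_payments > 0:
        approved = False
        factors.append(f"{missed_payments} missed payment(s) in the review period")
    if months_on_book < 6:
        approved = False
        factors.append("Account age below the 6-month minimum")
    if approved:
        factors.append("All eligibility criteria met")
    return Decision(approved, factors)

# A declined request comes back with the exact factors behind the outcome.
decision = assess_limit_increase(utilisation=0.85, missed_payments=1, months_on_book=12)
```

Real credit models are statistical rather than rule-based, but the engineering requirement is the same: the system must return the factors alongside the outcome, not the outcome alone.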

We are also seeing a pullback in how complaints are handled. The dream of fully automated grievance redressal is fading. Banks are realising that AI lacks empathy. A machine can process a fraud alert in seconds, but it cannot calm a panicked senior citizen who thinks they have lost their savings.

The trend is shifting toward a “human-in-the-loop” model, where AI handles triage and data preparation but humans make the final decisions on sensitive matters. Capgemini’s World Retail Banking Report 2025 notes that 70% of bank executives favour copilots, yet many customers still demand human interaction in high-stakes moments (Capgemini, 2025).

Building these safety checks is costly and slow, and it risks ceding ground to competitors that rely solely on AI. Yet trust is not a regulatory hurdle; it is a core product attribute. Banks that build accountability and clarity into their technology will sustain customer confidence. The lesson is that AI should enhance, not replace, customer relationships.
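The human-in-the-loop routing described above can be sketched as a simple escalation rule: sensitive categories and low-confidence cases go to a human queue, while routine, high-confidence cases resolve automatically. The category names, confidence threshold, and queue labels here are assumptions for illustration, not a production design:

```python
from dataclasses import dataclass

# Hypothetical categories where empathy and accountability require a person.
SENSITIVE_CATEGORIES = {"fraud", "bereavement", "disputed_transaction"}

@dataclass
class Complaint:
    category: str         # e.g. "fraud", "billing", "general"
    ai_confidence: float  # triage model's confidence in its suggested resolution

def route(complaint: Complaint) -> str:
    """AI handles sorting and data preparation; humans make the final call
    on sensitive or uncertain cases. Returns the destination queue."""
    if complaint.category in SENSITIVE_CATEGORIES:
        return "human_review"   # always escalate sensitive matters
    if complaint.ai_confidence < 0.90:
        return "human_review"   # model unsure: a person decides
    return "auto_resolve"       # routine, high-confidence case

# A fraud complaint is escalated even when the model is highly confident.
queue = route(Complaint("fraud", 0.99))
```

The key design choice is that escalation is the default: automation has to earn the right to resolve a case, rather than a human having to intervene to stop it.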