Opinion

Opportunities and considerations for AI adoption in banking

 
Revolutionised by the rapid growth and ubiquitous nature of the digital economy, the banking sector is embracing a new era of generative artificial intelligence (AI) systems built around large language models (LLMs). Their application in financial services is transformative, redefining customer interactions and streamlining operational processes.

LLMs are algorithmic systems that process natural language inputs – as with platforms such as ChatGPT or Google’s Bard. Trained on trillions of word combinations, LLM systems can learn from and generate text. They can also hold two-way conversations with humans, enabling banks to deploy generative AI solutions that answer questions, recommend services and advise customers on managing their financial affairs.

Redefining the banking experience

The ability of LLM systems to engage directly with customers in two-way dialogue represents a paradigm shift in the customer experience. The core outcome is hyper-personalisation. By merging customer data with generative AI systems, banks can create a bespoke experience in real time that reflects each customer’s unique needs, preferences, and behaviours. For corporate and commercial banking clients, this means access to tailored financial services that meet their specific requirements.

Despite this overwhelmingly positive potential, generative AI presents banks, businesses, and policymakers with significant challenges. With very little effort, cybercriminals can use generative AI to create highly convincing scams – including credible text, images, and deepfake audio that mimics a customer’s voice. The scale of the threat is vast: since the fourth quarter of 2022, malicious phishing emails have increased by 1,265%.

Security and governance

Generative AI can also be used to damage an organisation’s reputation or operations, or for nefarious purposes such as entrenching unjust bias or infringing copyrights and intellectual property. For banks, generative AI also heightens the risk of data leaks, because LLM systems are designed to ingest, learn from, repurpose, and regurgitate data. This raises questions about data handling and about compliance with the regulations that govern the storage and processing of data.

Banks will also be cognisant of the risks attached to using public AI platforms such as ChatGPT. Trained on publicly available data, such platforms are inherently more vulnerable to inaccuracies, and information entered into them could be used to train models available to competitors – further raising the potential for data exposure.

To mitigate the risks associated with LLM systems, banks must take a multi-pronged approach that ensures maximum security, regulatory compliance, and good governance. Implementing a ‘human-in-the-loop’ approach to outcome validation introduces human oversight, allowing banks to safeguard regulatory compliance, ethical standards, and data accuracy. It also makes it possible to ensure that generative AI outcomes align with the organisation’s values, ethics, and sustainability principles. Managers should also incorporate LLM risk into their overall environmental, social and governance (ESG) standards, risk management frameworks and reporting protocols.

Striking a balance

A bank’s approach to generative AI risk mitigation should also include industry collaboration with peers and thought leaders, as well as strategic partnerships with fintech innovators and proactive participation across the fintech ecosystem. By working with peers, banks can share best practices for data handling and security, contribute to the development of standards, and pool their respective resources. Furthermore, all banks should embrace regulatory engagement as a fundamental basis for AI compliance and risk management.

Internal processes should also integrate best practices so that employees are trained and informed on how to use and navigate AI. At Standard Chartered, responsible AI (RAI) governance principles are followed to ensure that AI-related processes are reviewed and approved by the bank’s RAI Council. Standard Chartered also ensures that all new technologies follow the bank’s technology onboarding processes and comply with relevant standards and controls.

Openness, dialogue, and the sharing of experiences are critical for banks as they strive to strike a balance between innovation and responsibility. All players in the banking sector must work to usher in a secure and client-focused future within the evolving realm of generative AI – a goal that Standard Chartered is committed to and proud to champion.