The Commonwealth Bank of Australia is developing its own generative AI-powered chatbots to test how different types of customers might respond to new products or "messaging".
The work, unveiled at South by Southwest Sydney 2023 last week, is in a "preliminary" phase, aimed at determining how well the AI can mimic customer behaviour, and how effective the chatbots are as an early-stage experimentation tool for rapid testing.
“What we're doing at CBA and the way we're thinking about this, is as a way to create simulations to help deliver innovation at scale," CBA chief decision scientist Dan Jermyn told iTnews.
“Imagine you create a new persona through a generative AI by saying, 'You are a person who is so-and-so years old and you work in this type of industry', and then you create multiple versions of those.
“What you're able to do then is create interactions between them that create a simulation about how real people might interact.”
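In practice, a persona of this kind is little more than a carefully written system prompt. The sketch below shows one way such a simulation might be wired up, assuming an OpenAI-style chat-completions API; CBA has not published its tooling, and the persona descriptions, model name and helper function here are illustrative assumptions, not the bank's method.

```python
# Illustrative sketch only: persona-based customer simulation using the
# OpenAI chat-completions API. Personas, model and product pitch are
# invented for the example; CBA's actual implementation is not public.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Each synthetic customer is just a system prompt describing who the
# model should role-play.
PERSONAS = [
    "You are a 34-year-old nurse renting in suburban Brisbane. "
    "React as this customer would: raise concerns, ask questions and "
    "flag anything that seems unclear or unfair.",
    "You are a 67-year-old retiree in regional New South Wales who is "
    "wary of online banking. React as this customer would.",
]

PRODUCT_PITCH = (
    "We're introducing a savings account with a bonus rate that only "
    "applies in months where you make no withdrawals."
)

def simulate_reaction(persona: str, pitch: str, model: str = "gpt-4o") -> str:
    """Ask one synthetic persona to react to a product message."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": pitch},
        ],
    )
    return response.choices[0].message.content

for persona in PERSONAS:
    print(simulate_reaction(persona, PRODUCT_PITCH))
    print("---")
```

Interactions between personas, of the kind Jermyn describes, would then amount to feeding one persona's reply in as the next persona's user message and alternating turns.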
The AI-based personas aren't intended to replace the “huge amounts of market research” the bank already does with customers directly, Jermyn said.
“Where we think this capability is really exciting is that it provides us with an earlier stage way of thinking about the ways to create innovation that will eventually get in front of customers.
“It's a way to use generative AI to create scenarios to try out new ways of thinking about the products, services or events that might happen and really to experiment with how that might unfold in a safe and scalable way using generative AI."
This will then shape the live market research, he said.
CBA said in a statement that, through the preliminary study, the bank has so far built "various customer personas that can raise concerns, ask questions and identify issues, just like regular humans."
The bank sees the capability as being particularly useful for simulating customer responses in "challenging situations where [face-to-face] customer research is typically more difficult."
“We’re looking at harnessing generative AI to understand what products and services may be most needed during different types of natural disasters by simulating the actions and needs of customers during these difficult times,” Jermyn said.
“We are also looking at how we can use generative AI to better understand what messaging would be most effective for helping customers in vulnerable situations – such as when customers are potentially being scammed, or when they experience a loss in the family.”
Jermyn noted that "in some instances, large language models are quite closely reflecting what you would expect humans to say or do or how they would behave based on decades of research."
“But in other small ways, when we change the question slightly, you see very different behaviours," he said.
Jermyn said the bank's initial findings show “the actual underlying large language models are very important in terms of the way that you unpick them, understand them, and create the capability to be transparent about how they're used”.
“Otherwise, you start to see divergence between the foundational models and the way humans really behave.”
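That kind of divergence is straightforward to probe: pose slightly reworded versions of the same question to a persona and compare the answers. The snippet below is a minimal sketch of such a sensitivity check under the same assumed OpenAI-style API; the persona and question wordings are invented for illustration.

```python
# Illustrative prompt-sensitivity check: ask one synthetic persona the
# same question in three slightly different wordings and compare replies.
from openai import OpenAI

client = OpenAI()

PERSONA = ("You are a 45-year-old small-business owner in Adelaide. "
           "Answer as this customer would.")

VARIANTS = [
    "Would you switch banks for a 0.5 percent better savings rate?",
    "A rival bank offers 0.5 percent more interest on savings. Would you move?",
    "Is half a percent extra interest enough to make you change banks?",
]

for question in VARIANTS:
    reply = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": question},
        ],
    )
    print(f"Q: {question}\nA: {reply.choices[0].message.content}\n")
```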
Australian representation in LLMs
He added that an interesting part of the experiment was observing how the way average Australians are represented in large language models can differ from models built on the global population.
This provides a way to understand “how well Australians are effectively represented in the AI that's being developed”, and highlights the importance for CBA of research that ensures “AI is brought safely to scale across Australia specifically”.
Its work on generative AI is housed within the CommBank Gen.ai studio, which launched in May this year and allows CBA to “adapt any of the models that are out there on the market and use them in a way that is bespoke to our own requirements”.
CBA is not limiting itself to one form of generative AI or LLM.
“That's an important part of the process, because one of the hypotheses here is that we may see different results depending on which language model we use. Some of them may be great at certain things, but less good at other things," he said.
“We may want to use one from a particular supplier for a particular type of use case or not.
“It's been a critical part of the way that we've thought about AI, really for a long time: making sure that we're able to adapt and use it in a way that's safe, scalable and transparent.”
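The multi-model hypothesis lends itself to the same experimental framing: run one persona prompt across several models and compare the responses. The sketch below uses two OpenAI-hosted models purely for brevity; in a genuinely model-agnostic setup like the one Jermyn describes, each supplier's model would sit behind its own client, and every name here is an assumption for illustration.

```python
# Illustrative "same prompt, different model" comparison.
from openai import OpenAI

client = OpenAI()

PERSONA = ("You are a 29-year-old first-home buyer in Melbourne. "
           "Answer as this customer would.")
QUESTION = "How do you feel about a home loan that needs only a 2% deposit?"

for model in ["gpt-4o", "gpt-4o-mini"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(f"[{model}] {reply.choices[0].message.content}")
```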