For AI to be effective, it must be trusted and meaningful

A fuller picture is emerging about why AI projects fail, and how to remedy that.

For all the excitement about artificial intelligence (AI), it has had mixed success in business and government – so much so that Gartner previously predicted only 20 percent of projects would deliver business outcomes through 2022.

Yet, Australian organisations continue to make use of AI in response to rising consumer expectations and to help enable faster and bigger digital projects.

How can they use AI more successfully?

Much has been said about the importance of earning users’ trust in AI systems. This is one aspect of a broader story, according to Jon Stone, a Partner with KPMG’s Digital Delta practice in Australia.

He argues organisations often focus more on the technical aspects of AI than on developing AI in a trusted and meaningful way. He says the AI development process should start with organisations identifying the value they want from AI. They should also take human factors and how the AI will be used into account when they begin designing algorithms, Stone says. These are not topics he typically hears organisations discuss when they first look at using AI.

“The first thing we often see organisations say when starting out on this journey is, ‘we need an AI strategy’, or ‘we need to appoint someone to be responsible for AI’, as though AI is an outcome in itself,” Stone says.

“But it’s not. AI only creates value when it is used – to change a decision, improve a process, take action or shape an experience. You should be asking ‘how do we design the processes, policies and capabilities we need to use the outputs of the algorithms in a meaningful, trusted, effective way?’.”

Foundations of trust

A 2020 survey by KPMG found that 45 percent of Australian respondents said they would not willingly share their data with an AI system. And 40 percent said they wouldn’t trust the output of an AI system.

The reasons for this may be more nuanced than some might expect.

More light was shed on them in January 2021, when the University of Queensland published a paper on AI trust challenges. It points to a range of issues, from transparency, accuracy and reliability, to whether making AI too human-like could lead people to over-estimate its abilities, and the risks and benefits of different levels of AI automation.

Understanding of these issues is evolving.

For example, the way an AI agent acts can influence trust. The University of Queensland paper mentions a study which found participants were more willing to share private data with an AI agent and were more confident that the agent would respect their privacy when it could move around “naturally” and speak, compared with a static agent that could speak.

Another study found that financial investors trusted fully automated artificial advisors more than human advisors. Other research indicates that over-reliance on AI systems is more common among novices.

Theories have also developed about the level of transparency required.

“For certain applications where the risk is low, explainability may not be that difficult,” says Nicole Gillespie, professor of management and leader of the University of Queensland team responsible for the AI trust paper.

“But where AI is being used to make decisions that have consequential outcomes, there is a moral obligation to understand this before putting them in a situation where, if they’re wrong, it would harm stakeholders,” she says.

Gillespie’s team has examined how organisations can deal with these issues.

“We have mapped out what needs to happen, but it’s easier to map it out than to do it in practice,” Gillespie says. “You need to make sure that all of this is happening in an integrated way in the organisation, and that the right people are engaged.”

This may require cross-functional teams, AI ethics boards and new structures and roles to ensure the various risks and vulnerabilities are understood, according to Gillespie.

She sees these steps giving organisations a competitive advantage and helping them to avoid “trust failure”.

Low tolerance for ambiguity

Healthcare organisations around the world are among those grappling with these issues.

“You can’t just set up software in front of somebody and say ‘trust me’,” Christina Silcox, a policy fellow for digital health with the Duke-Margolis Center for Health Policy, noted during a CES 2021 panel session, “particularly if the result may be to make decisions based on that information that are going to be critical to the patient.”

“Patients are relying on the healthcare workforce to make sure that these tools are meeting their needs, and it comes down to an issue of transparency,” Jesse Ehrenfeld, chair of the American Medical Association board of trustees, agreed.

“Understanding how these technologies are developed, what patients and populations we know they work in, and where they may not work or haven’t been demonstrated to work, is at the root of that. It becomes fuzzy really quickly when you’re talking about AI.”

These issues are hardly unique to healthcare: organisations in every sector grapple with challenges around trust in technology and the trust of their customers.

Solving them is crucial to long-term acceptance of AI and to unlocking its value, noted Pat Baird, a Philips senior regulatory specialist.

He argues that most companies already have the tools to do this, including processes for quality assurance, auditing against international regulations, managing data privacy concerns, and other risk-management activities.

“We just haven’t thought about how to customise them for AI,” he says.

Finding meaning

Stone believes that meaningful AI is best achieved with a ‘consumption-based approach’ – first identifying a business problem, then considering “is this a process or activity that is better suited to AI or ML than a human, as opposed to just building some algorithms hoping they may be relevant or useful.”

“Our approach is to start with outcome and value and work back to the algorithm and the data,” he says. “You need to embed the outcomes of AI, ML or analytics into your process, because that’s where the value is created – so it’s important to design how you will do this.”

This approach can provide much more context for the use of AI, enabling micro-decisions about how AI is used. For example, credit card owners may be happy to receive automated notifications about suspicious transactions at any time of the day. But they could be far choosier about how often and at what time of the day they are willing to accept automated marketing messages.
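To illustrate the kind of micro-decision described here, the sketch below shows one way such a contextual policy could be expressed. It is a hypothetical illustration only, not KPMG’s approach; the function, preference fields and thresholds are all assumptions made for the example, which simply lets fraud alerts through at any hour while gating marketing messages on opt-in, time of day and a daily cap.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class UserPreferences:
    """Hypothetical per-customer settings for automated messages."""
    marketing_ok: bool = True          # has the customer opted in to marketing?
    marketing_start_hour: int = 9      # earliest hour they will accept marketing
    marketing_end_hour: int = 20       # latest hour they will accept marketing
    max_marketing_per_day: int = 1     # frequency cap

def should_send(message_type: str, prefs: UserPreferences,
                sent_today: int, now: datetime) -> bool:
    """Decide whether an automated message should go out right now.

    Fraud alerts are assumed to be welcome at any hour; marketing
    messages are gated by opt-in, time of day and a daily cap.
    """
    if message_type == "fraud_alert":
        return True  # suspicious-transaction alerts go out immediately
    if message_type == "marketing":
        return (prefs.marketing_ok
                and prefs.marketing_start_hour <= now.hour < prefs.marketing_end_hour
                and sent_today < prefs.max_marketing_per_day)
    return False

# Example: at 3am a fraud alert is sent, but a marketing message is held back.
prefs = UserPreferences()
print(should_send("fraud_alert", prefs, sent_today=0, now=datetime(2021, 6, 1, 3)))  # True
print(should_send("marketing", prefs, sent_today=0, now=datetime(2021, 6, 1, 3)))    # False
```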

Efforts to design AI systems to take more contextual factors into account are increasing, Stone says. “But if you start with the algorithm, rather than the usage, the risk is they become misaligned, and less effective,” he says.

Companies can also gain a degree of control by choosing to deploy AI for discrete applications with narrower scope – such as within a bank reconciliation or expense process.

It’s also important to set customers’ expectations about AI, particularly for more expansive applications where more AI-learning will be required.

“We need to educate people to understand that, when we deploy AI, it is going to make mistakes. And in fact, that’s the way it gets better. AI has to learn, it has to be taught. And I think that’s not well understood.

“Designing a way to enable safe learning, within legal and regulatory boundaries, is a challenge – rather than trying to get all these AI algorithms or bots or new technologies to be perfect from day one,” Stone says.

A small step in this direction might be designing chatbots to tell users they are learning and to “please be patient with me”. “That’s just a little thing, but I think it’s the sort of mindset we’ve got to permeate in other areas – that these things won’t be perfect from day one,” Stone says.

“Ultimately, when people start to feel outcomes from AI are adding value to their lives, consistently and repeatedly, it builds trust and becomes more meaningful,” he says.
