The Australian public have low levels of trust in Artificial Intelligence (AI), driven largely by a perceived lack of appropriate institutional safeguards, according to a new report from KPMG and the University of Queensland (UQ).
KPMG and UQ have been researching global levels of trust in AI for three years. The third edition of their Trust in AI report calls for more regulation, in particular an independent AI regulator in Australia.
Partner in charge of KPMG Futures, James Mabbott, told iTnews that many Australians would like to see such a regulatory body established for AI.
“The expectation is actually that there is an independent AI regulator or regulatory body set up in Australia. And I think that's where the big point of difference is, that in general, people expect that we would have that framework in place,” said Mabbott.
KPMG and UQ’s report reveals that institutional safeguards and regulatory frameworks have the biggest impact on the general public’s attitude towards AI.
According to the report, “Given our results show most people are unconvinced that current governance and regulations are adequate to protect people from problems associated with AI, a critical first step towards strengthening trust and the trustworthiness of AI is ensuring it is governed by an appropriate regulatory and legal framework.”
Mabbott also pointed out that there are no Australian laws specifically regulating AI, big data or algorithmic decision making, although a range of existing laws do apply to these systems and technologies.
“Some of the laws that are highly relevant in this space are laws in relation to things such as privacy and data security. Corporate law is important, so too are corporate governance and risk management responsibilities,” he said.
“Things such as intellectual property laws, competition law and anti-discrimination laws may also apply in the context of AI.”
Trust levels based on application
Applications for AI are another factor impacting levels of trust, according to the report.
Lower levels of trust are associated with human resources use cases, such as hiring or promotion decisions, while greater levels of trust are associated with AI applications in healthcare.
“Trust in AI in the workplace tends to be higher where that AI application is being used for our benefit. So, if it's augmenting my work, if it's making me more productive, if it's making my day easier, if it's delivering a better outcome to customers, then I tend to have a higher level of trust in that application,” said Mabbott.
“You've got a higher degree of trust in the application, because the benefit that you're seeking to drive is clear to you.”
Differences in trust levels also exist between managers and non-managers.
The report finds that the majority of managers are supportive of AI at work and believe it will create more jobs than it eliminates.
“This reflects a broader trend of managers being more comfortable, trusting and supportive of AI use at work than other employees, with manual workers the least comfortable and trusting of AI at work. Given managers are typically the drivers of AI adoption in organisations, these differing views may cause tensions in the implementation of AI at work,” the report states.
Younger generations and the university-educated also show greater trust in AI than older generations and those without a university education, which may come down to a better understanding of the technology.
According to the report, “They are more likely to believe AI will create jobs, but also more aware that AI can perform key aspects of their work. They are more confident in entities to develop, use and govern AI, and more likely to believe that current safeguards are sufficient to make AI use safe. It is noteworthy that we see very few meaningful differences across gender in trust and attitudes towards AI.”