ChatGPT used in peer reviews of Australian Research Council grant applications

Could be a breach of confidentiality, ARC warns.

Academics have alleged that some peer reviews of grant applications are being written with ChatGPT, prompting the Australian Research Council (ARC) to warn academics that feeding their peers’ work into generative AI models could be a breach of confidentiality.

ARC_Tracker tweeted on Friday that it had received reports "that some ARC discovery projects assessor reports have been produced with ChatGPT."

Discovery Projects are multi-year research programs vying for up to $500,000 in government grants. Typically only 15 to 20 percent of applications are funded, and the review process is rigorous.

But ARC_Tracker said some research teams had received assessor reports that were just a "generic regurgitation" of their applications, with at least one containing the text "regenerate response" - a telltale sign that it was generated by ChatGPT.

The Twitter user behind the ARC_Tracker account - who is a researcher at an Australian university - told iTnews that it appeared some assessment reports were AI-generated.

“Researchers who’ve led proposals have sent me (all or parts of) some expert assessments they received and they read exactly like a simple ChatGPT summary of a proposal without any critique, opinion, insight or assessment at all,” the ARC_Tracker account owner told iTnews.

“The researchers tell me the phrases used in these assessments are very simple rearrangements of the phrases already in their grant proposals.

“They have also commented that they put their own grant proposals into ChatGPT and got very similar summaries as they see in the assessment…

"One assessment, from one researcher, had the tell-tale “regenerate response” text at the bottom of (one section of) the assessment. This is what ChatGPT shows at the bottom of its response page.

"It’s smoking gun evidence that ChatGPT was used to generate their assessment text.” 

ARC has since released a statement advising peer reviewers not to use AI as part of their assessments.

“Release of material that is not your own outside of the closed research management system, including into generative AI tools, may constitute a breach of confidentiality,” the council said.

It added that it would update guidance on AI use “in the near future”.

An ARC spokesperson told iTnews that although "generative artificial intelligence (AI) tools such as ChatGPT are not explicitly named" in policies that apply to peer reviews, "common principles of confidentiality apply across both existing and emerging channels through which confidential information may be inappropriately disclosed.

"The ARC is not alone in considering a range of issues regarding the use of generative AI that uses algorithms to create new content and that may present confidentiality and security challenges for research and for grant program administration...Any concerns raised by applicants will be considered and responded to as per our policies," the spokesperson said.

The spokesperson did not comment on how widespread the use of ChatGPT in peer reviews is, or whether the ARC is considering using AI detection tools to identify use of ChatGPT by academics, as some universities, such as the University of Melbourne, have done to detect use by students.

ARC_Tracker said the factors underlying ChatGPT-assisted peer reviews of grant applications included academics’ unmanageable workloads and the ARC taking too long to release clear policies on the use of AI.

“There’s been a long-running problem of ARC grant proposals being extremely long," the account owner said.

“If there’s just three or four investigators on the grant, it can easily run to 100-plus pages. I’ve assessed ones that are 150 pages long.

“Assessors often aren’t given time to review anything in their academic workload model at universities (which have many problems themselves) and so the peer review process is generally under a lot of pressure.”

ARC_Tracker said that another reason peer reviewers may have resorted to ChatGPT is that “there’s nothing in any of the ARC’s policies explicitly about AI generative text engines” that prohibits or restricts their use.

ARC_Tracker added that the statement the ARC released did not explicitly address the use of generative AI.

“Sure, the ARC’s statement…says there are general requirements, under their confidentiality policy, not to upload other people’s grant text to external websites,” the account owner said.

“But take a read of that policy. Is it clear and simple, and does it definitely - without any doubt - warn people not to use ChatGPT or similar AI services? I don’t think so. I’ve read it - it’s pretty difficult to parse.

“The ARC do this all the time: when something goes wrong they rely on some worn old policy that no one can read easily, instead of pre-empting problems the community warns them about well in advance.”
