Generative AI is a powerful tool for advancing research and is increasingly being used by a wide range of stakeholders across research fields. A study carried out by Nature (Van Noorden & Perkel, 2023), which surveyed more than 1,600 international scientists, suggests that the technology is being utilised for purposes as diverse as editing or translating research texts, data processing and coding, and generating new research hypotheses.
As the same survey suggests, however, Generative AI can also pose unique challenges to research integrity, with scientists reporting a number of concerns including potential data bias and fraud. It is important, therefore, for researchers to understand the risks involved in the use of AI technology and to find ways of addressing these responsibly and ethically.
Van Noorden, R., & Perkel, J. M. (2023, September 27). AI and science: What 1,600 researchers think. Nature. https://www.nature.com/articles/d41586-023-02980-0
An expert working group of researchers at RMIT produced a white paper (RMIT Research Integrity and Generative AI (REGAI) Working Group, 2023) that identifies seven key areas of risk that Generative AI poses for research integrity.
According to the paper, Generative AI systems increase the risk of false research outputs and the generation of misinformation, as they can produce research text or images that seem authoritative but may be based on inaccuracies or fabrications. This risk is exacerbated by the opaque nature of AI models, which rely on complex statistical parameters that are not easily interpretable, making it difficult to understand or verify their outputs. Posing further challenges is the lack of reproducibility in AI-generated results. The complex and unpredictable nature of AI systems means they rarely produce the same output twice, complicating efforts to replicate findings—a fundamental aspect of scientific research.
Yet another significant concern is bias: AI models trained on existing human texts can perpetuate societal prejudices, reflecting the demographic and ideological biases of their data sources. Additionally, while convenient, extending AI to a wide range of academic tasks (e.g. providing peer review or making editorial decisions) introduces further risks and requires researchers to remain vigilant about maintaining integrity and verifying the accuracy of AI-generated content. Finally, the sharing of private data with AI systems, which operate outside the secure ecosystems of research institutions, poses risks of data breaches and privacy violations.
RMIT Research Integrity and Generative AI (REGAI) Working Group. (2023). Research integrity and Generative AI at RMIT. RMIT University.
One of the key ways for researchers in Australian universities to address AI research integrity risks lies in upholding the underpinning principles for carrying out research as articulated in The Australian Code for the Responsible Conduct of Research (National Health and Medical Research Council, Australian Research Council & Universities Australia, 2018).
These principles consist of:
Honesty in the development, undertaking and reporting of research
Rigour in the development, undertaking and reporting of research
Transparency in declaring interests and reporting research methodology, data and findings
Fairness in the treatment of others
Respect for research participants, the wider community, animals and the environment
Recognition of the right of Aboriginal and Torres Strait Islander peoples to be engaged in research that affects or is of particular significance to them
Accountability for the development, undertaking and reporting of research
Promotion of responsible research practices.
As in traditional research, closely applying these general principles when using AI tools ensures the production of high-quality, credible and trustworthy research, grounded in an honest, ethical and conscientious research culture.
Researchers are also advised to check and closely follow any policies on the use of Generative AI adopted by their institutions, funding bodies and publishers.
RMIT Higher Degree by Research (HDR) students additionally need to observe the HDR Submission and Examination Procedure, which addresses the use of artificial intelligence in point 21 as follows:
(21) Any use of Artificial Intelligence (AI) in the examination submission must be ethical, responsible and in keeping with principles of academic and research integrity including honesty, transparency, fairness and accountability. Candidates will appropriately declare, attribute and acknowledge use of AI, in keeping with research integrity principles, policy and procedures for responsible authorship and publication of research outputs.
"HDR Submission and Examination Procedure" by RMIT Policy Register
For more information on the appropriate declaration, attribution and acknowledgement of the use of AI, go to the Acknowledging the use of AI section of the Artificial Intelligence referencing guidelines.
National Health and Medical Research Council, Australian Research Council & Universities Australia. (2018). The Australian Code for the Responsible Conduct of Research (R41). National Health and Medical Research Council.
RMIT University offers a range of services and resources to support researchers with research integrity and the responsible conduct of research.
External resources on research integrity and the use of AI are also available.
This Library guide by RMIT University Library is licensed under a CC BY-NC 4.0 licence, except where otherwise noted. All reasonable efforts have been made to clearly label material where the copyright is owned by a third party and ensure that the copyright owner has consented to this material being presented in this library guide. The RMIT University logo is ‘all rights reserved’.