As has been suggested throughout this Library guide, AI can be a valuable tool for streamlining research processes and generating ideas. However, researchers should consider the following guidelines to ensure the appropriate and responsible use of AI in their work.
- Understanding AI tools: Before using generative AI, researchers should understand how these tools function, including their data sources, capabilities and limitations. While AI can assist with tasks such as searching and summarising literature, structuring research papers and generating outlines, its outputs tend to be predictable and may lack accuracy, originality or depth. AI should therefore be viewed as an assistant, not an expert. A thorough understanding of AI tools is essential for making informed decisions and critically assessing AI-generated outputs.
- Checking for accuracy and bias: AI systems can reproduce the inaccuracies, gaps and biases inherent in their training data, leading to partial, nonsensical or misleading results. To mitigate these risks, it is important to cross-check AI-generated outputs against reliable sources and apply a rigorous, analytical approach to ensure accuracy, fairness and reliability.
- Upholding responsible and ethical practice: Researchers are cautioned against using AI to replace critical thinking or decision-making. It is the researcher’s responsibility to ensure the accuracy, relevance and ethical use of AI-generated results. Ethical practice also involves transparency in disclosing AI’s role in the research process. Researchers must carefully justify their use of AI, ensuring that their work remains grounded in human expertise.
- Continuous learning: The rapid evolution of AI demands continuous learning and professional development. Researchers are encouraged to strengthen their information literacy and critical analysis skills so they can use AI effectively and ethically. Engaging with AI specialists and data scientists can further enhance the quality and rigour of AI-assisted research.