Responsible use of GenAI: risks, precautions, and ethical considerations

To ensure the appropriate and ethical use of Generative AI (GenAI) in education, students and staff must be aware of its risks, limitations, and potential consequences. These considerations extend beyond educational practices to ethics, privacy, copyright, intellectual property, environmental impact, and equity. As GenAI tools and AI assistants powered by Large Language Models (LLMs), such as ChatGPT, become more integrated into academic settings, understanding their challenges is crucial.


Academic integrity, transparency, and accuracy

  • Plagiarism and fraud: Using GenAI to generate content and present it as one’s own constitutes academic dishonesty, akin to plagiarism or fraud.
  • Lack of transparency: LLMs are often described as "black-box" models due to their complexity. Their internal mechanisms are not fully interpretable, making it difficult to trace biases, errors, or problematic outputs. Advancements in multimodal AI (integrating text and visual inputs) add further complexity.
  • Hallucinations: GenAI can fabricate information, producing responses that are misleading, nonsensical, or completely false, particularly in specialised or less common areas. Since LLMs function as language models, not knowledge models, users must verify AI-generated content, especially in fields requiring high factual accuracy.
  • Bias in AI outputs: AI systems reflect biases present in their training data, which may include historical texts, social media, and other sources. Despite efforts to mitigate bias, LLMs can still generate discriminatory or prejudiced outputs, potentially reinforcing stereotypes. Critical evaluation of AI-generated content is essential.

Privacy and data protection

  • Data collection risks: GenAI models are trained on vast datasets scraped from the internet, potentially including personal and private information without consent.
  • User input risks: Personal or sensitive data entered into AI tools may be retained for further model training. Some tools, such as ChatGPT, store user interactions to refine performance, raising concerns about compliance with privacy regulations such as the EU's General Data Protection Regulation (GDPR) and the US Family Educational Rights and Privacy Act (FERPA).
  • Institutional compliance: Users must ensure that AI tools comply with institutional and legal privacy policies before using them for academic or research purposes.
  • Security breaches: Privacy leaks have occurred in AI systems. For example, in March 2023, OpenAI temporarily shut down ChatGPT due to a bug that exposed users’ private chat titles to other users. It is good practice not to share personal, confidential, or sensitive information with AI tools.
  • Mitigating risks:
    • Understand data retention policies – some AI tools store inputs for optimisation.
    • Manage privacy settings – certain tools allow users to disable data sharing.
    • Educate yourself on what data can and cannot be input into AI tools to safeguard privacy.

Copyright and intellectual property

  • AI and copyright risks: GenAI models may inadvertently reproduce copyrighted content from their training data, raising potential copyright infringement issues. If users unknowingly submit or publish such content, they may violate copyright laws.
  • Ownership of user data: When users enter text into GenAI tools, they often grant the provider broad rights to store and reuse that content, typically under terms of service accepted without close reading. Developers may then use this content for model training without further consent or compensation.
  • Impact on academic work: Submitting student work to AI-powered plagiarism detection tools could result in unauthorised inclusion of that content in AI training datasets, potentially compromising intellectual property.

Environmental impact of AI

  • Energy consumption: GenAI and LLMs require significant computing power, leading to high energy consumption and carbon emissions.
  • E-waste and resource use: The increasing demand for AI-driven systems fuels the need for more powerful hardware, leading to greater electronic waste (e-waste) and the depletion of natural resources used in semiconductor manufacturing.
  • Water usage: AI data centres consume vast amounts of water for cooling, further adding to the environmental footprint. As the demand for AI tools grows, their sustainability must be considered.

Equity and accessibility concerns

  • Unequal access: Not all students have equal access to AI tools. Premium versions often provide superior output, giving students who can afford them an academic advantage over those who cannot.
  • Digital divide: Some students may be less familiar with AI tools, leading to disparities in their ability to leverage these technologies effectively. 

Conclusion

While GenAI and LLMs present powerful opportunities for education, they also introduce significant risks related to accuracy, privacy, intellectual property, environmental sustainability, and equity. Students and educators should approach AI-generated content critically, ensuring responsible and ethical use while avoiding potential harms. Refer to the UM-wide as well as faculty-specific policies to support the safe and fair integration of AI tools in academic settings.