Artificial Intelligence, Generative AI, and Large Language Models in education
Artificial Intelligence (AI) refers to advanced systems designed to perform tasks that typically require human intelligence, such as problem-solving, decision-making, and language comprehension. In the context of education, AI-powered technologies offer opportunities to innovate teaching and learning practices, addressing some of the most significant challenges in modern education. However, the rapid development of AI also brings risks and ethical concerns that require careful consideration.
At Maastricht University, the integration of AI aligns with the core principles of Problem-Based Learning (PBL): Contextual, Collaborative, Constructive, and Self-Directed (CCCS). This ensures that AI technologies support meaningful learning while fostering critical thinking, digital literacy, and ethical awareness.
Go directly to:
- Generative AI: a subset of AI
- Large Language Models (LLMs): the core of GenAI
- Options for educational design and teaching & learning activities
- GenAI for students
- Assessing AI-influenced assignments: strategies and considerations
- GenAI tools for educational uses
- GenAI and safeguarding UM's culture of integrity
- Deterrence: tools for detecting AI-generated text
- Disclaimer
Generative AI (GenAI): a subset of AI
Generative AI (GenAI) encompasses advanced machine learning models capable of creating content – text, images, audio, and even code – based on user input. By analysing extensive datasets, GenAI tools predict and generate outputs that are contextually relevant and tailored to specific tasks.
Applications of GenAI in education:
- For students: personalised tutoring, content summaries, and study aids
- For educators: automated lesson plans, quiz generation, and feedback systems
- For institutions: enhanced accessibility with translations and adaptive learning tools
While GenAI simplifies complex tasks and fosters creativity, its use requires responsible practices to address concerns such as data privacy, biases, and academic integrity.
Large Language Models (LLMs): the core of GenAI
Large Language Models (LLMs) are advanced deep-learning algorithms designed to process and generate human-like text. Trained on massive datasets, including web pages, books, and academic literature, LLMs can perform tasks like summarising, translating, and editing text.
How LLMs work
LLMs operate probabilistically, predicting the most likely sequence of words based on user input. This process enables them to generate coherent and contextually appropriate responses. However, these models do not "understand" language like humans; their outputs are based on patterns rather than genuine comprehension.
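The next-word prediction described above can be illustrated with a deliberately tiny sketch. This is a bigram frequency model, not an actual neural network, and the corpus and function names are invented for illustration; real LLMs learn far richer patterns over huge datasets, but the core idea – choosing the statistically most likely next word – is the same:

```python
from collections import Counter, defaultdict

# Toy "training data": a real LLM would be trained on billions of words.
corpus = "the model predicts the next word and the model generates text".split()

# Count which word follows which word in the corpus.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def most_likely_next(word):
    """Return the word that most frequently followed `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(most_likely_next("the"))  # prints "model" – its most frequent follower
```

Note that the model has no notion of meaning: it simply reproduces the statistical patterns it has seen, which is why LLM output can be fluent yet factually wrong.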
Examples of LLMs
- ChatGPT: a prominent example of an LLM-based tool, ChatGPT has gained global recognition for its versatility. It can summarise articles, draft essays, generate code, and provide conversational support. GPT-4, launched in 2023, introduced expanded text processing, multimodal capabilities (handling both text and images), and improved reasoning.
- Other LLMs: competing models like Google’s Gemini (formerly Bard), Baidu’s ERNIE, and Meta’s Llama offer similar capabilities, each with unique strengths and applications.
Options for educational design and teaching & learning activities
Educational design in PBL can meaningfully integrate AI tools to support digital/AI literacy and the development of critical thinking competences. Maastricht University's Problem-Based Learning model is based on four key learning principles: constructive, contextual, collaborative and self-directed learning. PBL hence offers a flexible basis for diverse and creative learning formats.
Broadly put, you can have students learn with and about LLMs by integrating them into the learning design and activities as a critical friend, or by having students reflect on and criticise LLM output. LLMs can also be used to stimulate group learning, for instance as a brainstorming partner, an idea generator, a moderator, or a member of the discussion (taking different perspectives).
Resource to help teachers and students explore Large Language Models in PBL
The EDLAB innovation project The Impact of LLMs on PBL collected inspirational use cases from UM teaching staff who experimented with LLMs in their courses. The resulting document is designed to encourage others to explore these tools, offering ideas to engage students in new ways and enhance teaching practices.
Tips for integrating ChatGPT into your educational design
- Provide students with ChatGPT-produced written content and ask students to improve it both in terms of content and academic standards.
- Have students critically relate different sources instead of merely summarising them (which can be done by ChatGPT).
- Have students provide key quotes from their readings and explain why they capture the gist of what they read.
- Educate students on the use of AI tools like ChatGPT: what is it? What can it produce? What are the pitfalls and disadvantages?
- Have ChatGPT answer the learning goals, then have students discuss these answers based on their readings.
Assessing AI-influenced assignments: strategies and considerations
The rise of Large Language Models (LLMs) like ChatGPT has introduced new challenges in assessing students’ mastery of learning outcomes, particularly for written assignments completed in uncontrolled environments. These models can generate human-like text, making it harder to verify authorship and evaluate the originality of submitted work. To address these challenges, educators must rethink assessment designs to ensure integrity while fostering critical thinking and learning.
Strategies for AI-resilient assessment design
- Integrate non-written components: complement written assignments with presentations, debates, or creative outputs like mind maps, diagrams, or vlogs to tie student work to their personal understanding and creativity.
- Embed course-specific content: require students to incorporate module content into their work, referencing specific cases, sources, or class discussions to ground their analysis in course material.
- Focus on higher-order thinking: assign tasks that require nuanced argumentation, detailed analysis of multimedia (images, audio, video), or evaluations of recent events outside AI training data.
- Promote peer reviews: encourage students to review and critique each other’s assignments based on module content, with their review quality assessed as part of the grade.
- Onsite controlled assessments: include in-person exams or oral components to directly assess students’ comprehension and originality.
- Extended text assignments: require longer outputs that exceed typical AI prompt-response windows to deter reliance on automated tools.
Best practices for incorporating AI tools into assessment
- Critical evaluation: allow students to use tools like ChatGPT to generate answers but require them to critically assess the output, highlighting strengths, weaknesses, and inaccuracies with references to course materials.
- Transparency in AI use: ask students to clearly label any AI-generated content in their assignments to ensure accountability.
- Focus on learning outcomes: design assessments that evaluate students’ understanding and critical thinking rather than relying solely on the quality of AI-generated responses.
Tips for identifying and managing AI-generated content
- Recognise patterns: look for signs of AI use, such as unusual language, repetitive phrasing, or inconsistent style. AI-generated content may lack depth or originality in reasoning.
- Use detection tools cautiously: tools like Originality.ai can help identify AI-generated content, but they provide probabilistic results and should only supplement broader evaluation methods.
- Check for plagiarism: AI tools may paraphrase or replicate existing content without attribution. Use plagiarism detection software to identify overlaps with external sources.
Addressing AI misuse in assessment
In cases of suspected AI misuse, follow institutional guidelines:
- Investigate suspicions: forward cases to the Board of Examiners for review.
- Sanctioning: if confirmed, AI misuse may be considered plagiarism or fraud, as it hinders the ability to accurately evaluate a student’s knowledge or skills.
GenAI tools for educational uses
Category | Action | Tools | Strengths | Weaknesses |
---|---|---|---|---|
Idea generation | Brainstorming, overcoming writer’s block | ChatGPT, Claude, Perplexity, Gemini | Highly creative and flexible responses, great for ideation. | May generate irrelevant ideas or require refinement. |
Research assistance | Searching for online or scholarly sources | Perplexity, Consensus, Research Rabbit, SciSpace, Scite, Semantic Scholar, Atlas.ai | Provides curated results and summarises academic resources. | May overlook niche sources or prioritise general information. |
Writing assistance | Correcting grammar, paraphrasing, rephrasing, or adjusting tone/register | ChatGPT, QuillBot, Grammarly | Exceptional language fluency, grammar accuracy, and tone adjustment. | Limited understanding of specialised terminology. |
Visual content creation | Designing graphics, diagrams, and infographics | Canva, Leonardo.ai, Adobe Firefly, MidJourney | Intuitive tools for creating visually appealing educational resources. | Advanced features may require paid versions; high reliance on prompts for specific outputs. |
Coding assistance | Generating or debugging code | GitHub Copilot, ChatGPT, Llama | Excellent at automating code snippets and debugging logic. | Struggles with highly complex algorithms or specific frameworks. |
Language translation | Translating texts to/from various languages | DeepL, Google Translate | Accurate and context-sensitive translations for academic purposes. | Struggles with idiomatic expressions or highly technical jargon. |
Feedback generation | Providing feedback on ideas or student work | ChatGPT, Claude | Can offer constructive suggestions to refine work. | May lack depth in specialised fields. |
Data analysis | Analysing quantitative/qualitative data | ChatGPT, GitHub Copilot, Atlas.ai | Efficient at identifying patterns and summarising findings. | Limited statistical accuracy compared to dedicated software. |
Transcription tools | Converting spoken content into text | Whisper AI, Otter.ai | High accuracy for converting lectures and discussions into text. | Accuracy depends on audio quality and accent variations. |
Visualisation tools | Creating charts, mind maps, or tables | Canva, Biorender | Enables the creation of professional-quality educational visuals. | Some tools have steep learning curves for beginners. |
How to sign up for GenAI tools
- ChatGPT: visit OpenAI’s website. Free version available; paid subscriptions unlock additional features.
- Claude: access via Anthropic's platform. Free trials often available.
- Leonardo.ai: sign up at Leonardo’s website. Popular for image creation.
- Canva: visit Canva. Education-specific plans available.
- DeepL: access at DeepL. Free and paid versions for translations.
- Whisper AI: available via OpenAI.
- Otter.ai: sign up at Otter for transcription services.
GenAI and safeguarding UM's culture of integrity
Maastricht University highly values its culture of academic integrity, with attention in every educational programme to ethical behaviour, social etiquette and professional attitude. Given the rapid development of AI tools and their impact on education, it is worth (re-)emphasising expectations of students regarding academic integrity, as well as the rules and regulations concerning fraud in the academic context.
Tips to foster a culture of integrity and ethical behaviour
- Trust your students and trust your culture of academic integrity
- Accept an inevitable loss of control over how students learn
- Make students aware of (academic) integrity policies
- Encourage intrinsic motivation
- Provide information on rules and regulations in course books and Canvas
- In tutorials, talk with students about instructions, rules and expectations
- Ask students to affirm that their submissions are their own work
Deterrence: tools for detecting AI-generated text
At the time of writing (January 2025), there is no reliable AI text detector, and detecting a rewritten AI-generated text (hybrid text) is even more difficult. Still, a few promising stand-alone tools can give a semi-reliable indication of whether a text is AI-generated.
Turnitin Originality is Maastricht University’s plagiarism tool integrated with Canvas; it is only available to UM employees (teachers and tutors). Furthermore, OpenAI is developing automated watermarking of LLM-generated output, which would allow stand-alone tools and plagiarism applications to detect such text more readily. This feature is not yet bulletproof.
Disclaimer
This webpage presents information about and examples of GenAI usage for educational design, delivery and assessment practices. Certain GenAI practices in education may not be encouraged or allowed within your faculty’s policy framework and/or rules and regulations. When in doubt, please check your faculty’s most recent rules on the use of GenAI in education.
UM does not recommend using GenAI for the assessment of students, since the EU AI Act flags such practices as high-risk. More specifically, the following assessment practices involving GenAI should be avoided:
- Determining access or admission of students to educational institutions or programmes
- Evaluating students' learning outcomes
- Assessing the appropriate educational level for a student
- Monitoring and detecting unauthorised behaviour by students during exams