Generative AI in assessment
This article discusses the implications of generative AI tools such as ChatGPT for assessment in higher education. It notes that there are currently no reliable ways to detect AI-generated content, making it challenging to prevent academic misconduct.
Assessment types such as essays, lab reports, literature reviews, and presentations are at risk of being fabricated with AI. Strategies are suggested to make assessments more robust, including focusing on process over product, oral exams, novel/authentic assessments, collaborations, competency-based tests, and portfolios. Fostering a culture of ethical AI use is also important. In the longer term, there may be a shift towards programme-level synoptic assessments.
The article recommends further reading on developing sustainable assessments and using AI ethically in education.