AI cheating at all-time high in Russell Group Universities
Universities have seen a rise in academic misconduct reports since the launch of artificial intelligence (AI) chatbots, such as ChatGPT.
ChatGPT was initially released for public use at the end of 2022, and students have been quick to use it to aid their studies. However, the use of such programmes for assessed work has presented challenges to universities.
UK universities responded by devising “guiding principles” for the ethical use of AI within higher education. Initially, some institutions moved towards banning AI use altogether. However, the guiding principles instead permit appropriate use of AI while ensuring students and staff are conscious of the dangers of plagiarism and misinformation it poses.
Universities like Oxford have attempted to incorporate AI use into their assessment methods, advising students to use it to help draft essays. Professor Steve New of Oxford University told The Telegraph that AI “should help you produce a much better essay than you would unaided” as long as it was used “thoughtfully and critically.”
Professor New instructs his students to fact-check both their own and their peers’ AI-aided work, make significant additions and edits to their essays, and submit an “AI statement” alongside their final assignment. This clarifies how the software has been used. He also emphasises the importance of writing essays with “compelling, tightly-argued, evidence-based prose that you believe in” — something AI cannot assist with.
Regarding the “guiding principles”, the 24 Russell Group universities say: “These policies make it clear to students and staff where the use of generative AI is inappropriate, and are intended to support them in making informed decisions and to empower them to use these tools appropriately and to acknowledge their use where necessary.”
Despite efforts to address the challenges of AI from the outset, Times Higher Education (THE) has reported an increase in both AI use for assessments and the number of students penalised for AI-related academic misconduct. After investigating the University of Sheffield, Queen Mary University of London, and the University of Glasgow, THE found that suspected cases at these institutions rose from between six and 36 in the 2022-23 academic year to between 89 and 130 in 2023-24.
Cambridge University has warned that while the use of generative AI software “has not been banned,” investigations for academic misconduct remain a risk for students who are not “the authors of their own work.”