Thousands of students caught using AI amid mounting academic integrity epidemic
The number of UK university students caught cheating using AI for assignments has rapidly increased in 2025, with apps like ChatGPT growing in popularity.
Generative AI, while still a relatively new concept in academia, has been found useful as a guide or a grammar checking tool, and for helping students with planning essays or practising questions before exams.
However, new figures have shown that students are increasingly turning to the tool as a cheating method, posing a serious threat to the academic integrity of their work.
A Guardian survey recorded 7,000 proven cases of cheating through AI misuse in 2023-24, equating to 5.1 cases per 1,000 students.
Current estimates suggest this figure will rise to around 7.5 cases per 1,000 students by the end of 2025.
Additionally, a survey by the Higher Education Policy Institute in February found that 88% of students are now using generative AI for assessments, up from 53% in 2024.
A BBC interview with a student caught cheating with AI in 2024 shed light on some of the reasons behind the growing academic integrity epidemic.
The student regarded the “incredible stress” of university assignments and “enormous pressure to do well” as motivations for using AI, as well as illness and the struggle to keep up with tight deadlines.
The proliferation of AI usage has consequently raised questions about student wellbeing and the effectiveness of support for struggling students.
Universities are adapting strategies to ensure the authenticity of students’ work, as the abuse of AI in assignments continues to threaten academic integrity.
Measures have included plagiarism-checking software such as Turnitin, as well as a growing return to traditional assessment methods such as handwritten exams under supervised conditions.
GenAI usage remains difficult to detect, however: researchers at the University of Reading last year were able to fool their own markers by submitting AI-generated work, which went undetected 94% of the time.
A government spokesperson said: “Universities must determine how to harness the benefits and mitigate the risks [of AI] to prepare students for the jobs of the future.”
Adapting to the rise of AI, universities across the country are enforcing different rules and codes of conduct to minimise the likelihood of students cheating.
While some universities have banned the use of AI entirely and clarified the penalties for students suspected of academic misconduct, others have allowed it under strict guidelines requiring full references and citations.
However, in line with intentions to embrace the benefits of AI, universities continue to experiment with the technology’s wide-ranging capabilities while combating its threats to academic integrity.