Universities struggle to detect flagrant AI usage amongst students, leaving few punished
The vast increase in students relying on AI for university work and assessments has led to concerns about the disproportionately small number of students sanctioned for breach of rules.
According to a survey by the Higher Education Policy Institute, more than 90% of undergraduates at Russell Group universities use AI large language models, and almost a fifth admit to copying directly from chatbots.
The survey also found that the use of AI tools for assessments has increased by 53% since the previous year.
Despite this, fewer than one in 400 students were penalised for misusing AI, according to The Times.
Josh Freeman, policy manager at Hepi, said: “Given that almost all students use AI to help with their assessments, this data suggests a strikingly low proportion of students are being penalised for AI misuse”. He added that it “is very likely that great numbers of students are slipping under the radar.”
A report by Freeman found that students who use AI generally do so to save time and to improve the quality of their work. By contrast, the main reasons students give for not using AI include the fear of sanctions from their universities, as well as “the fear of getting false or biased results”.
Universities often permit limited use of AI, such as allowing students to use chatbots to generate lecture notes and other study resources for personal use. However, 47% of university students last year admitted that AI made cheating “easier”, according to the publisher Wiley.
Despite the commitment of all vice-chancellors in the group two years ago to “academic rigour and integrity” and “consistency”, nine out of the 24 Russell Group universities claim that they did not record data about disciplining the use of AI by students.
Among those that did record this data, an average of 74 students were investigated and 51 punished, out of undergraduate populations of more than 20,000. Most of these institutions said further cases might have been pursued at departmental level.
Durham University, King’s College London, Leeds University, and Queen Mary University of London all confirmed that the misuse of AI had led to the expulsion of undergraduates.
Freeman outlined the difficulties universities face in detecting AI use, since “using multiple chatbots or prompting AI to write in a student’s own style” can help to mask it. This means survey research is unlikely to capture the full extent of AI misuse.
The Russell Group said: “The rise of generative AI tools presents a shared challenge for the sector as it makes a profound impact on the way we teach and learn.”
They added that “universities are interrogating and adapting their own teaching and practices and will continue to do so to ensure integrity, ethical use and equality of access”, as they continually aim to “develop policies that help staff and students become AI literate”.