Universities struggle to detect flagrant AI usage amongst students, leaving few punished

The sharp rise in students relying on AI for university work and assessments has raised concerns about the disproportionately small number of students sanctioned for breaching the rules.

According to a survey by the Higher Education Policy Institute, over 90% of undergraduate students at Russell Group universities use AI large language models, and almost a fifth confess to copying directly from chatbots.

Despite this, fewer than one in 400 of these students were penalised for misusing AI, according to The Times.

A report by Josh Freeman revealed that students using AI generally do so to save time and improve the quality of their work, with the main reasons for students not using AI being fear of university sanctions and “the fear of getting false or biased results”.

Often, universities permit the limited use of AI, such as allowing students to harness chatbots to generate lecture notes. However, 47% of university students last year admitted that AI made cheating “easier”, according to the publisher Wiley.

Despite the commitment of all vice-chancellors in the group two years ago to “academic rigour and integrity” and “consistency”, nine of the 24 Russell Group universities claim that they did not record data about disciplining the use of AI. 

Of those that did record this data, an average of 74 students were investigated and 51 punished, out of a population of more than 20,000 undergraduates. The majority of these institutions said further cases might have been pursued at departmental level.

Durham University, King’s College London, Leeds University and Queen Mary University of London all confirmed that AI misuse had led them to expel undergraduates.

Freeman outlines the difficulties of detecting AI use, since “using multiple chatbots or prompting AI to write in a student’s own style” can mask it. This means that survey research probably does not capture the full extent of AI misuse.

The Russell Group stated: “The rise of generative AI tools presents a shared challenge for the sector as it makes a profound impact on the way we teach and learn.”  

They added that “universities are interrogating and adapting their own teaching and practices and will continue to do so to ensure integrity, ethical use and equality of access”, as they aim to “develop policies that help staff and students become AI literate”. 
