Image: Chokniti Khongchum / Pexels

Rise in poor-quality research papers driven by AI use threatens the scientific field, new study reveals

A study by the University of Surrey warns that Artificial Intelligence (AI) threatens the integrity of scientific research, as a flood of new papers that may be “superficial and oversimplified” risks diluting academic publications.

The researchers found that lower-quality papers had become far more prevalent, driven by poor research practices including focussing on single variables and arbitrarily choosing data subsets.

Matt Spick, a lecturer in Health and Biomedical Data Analytics at the University of Surrey, stated: “We’ve seen a surge of papers that look scientific, but don’t hold up to scrutiny”.

He added that the research conducted with assistance from AI was “science fiction”.

Critics have argued that AI-supported papers fail to consider a full range of cases in theorising diagnoses, limiting the usefulness of the research. This is of particular concern within medical journals.

Easy access to data and language models allows AI to produce work quickly, but not always to the appropriate standards. AI use in key industries such as medicine and banking could lead to unreliable decisions that affect millions.

AI struggles to apply a multi-level approach to analysing data and often fails to consider real-world factors. AI can benefit scientific research, but critics argue it must support research, not drive it.

However, some consider AI a valuable tool. A report published on ScienceDirect, examining 24 studies across six domains, found that AI tools such as ChatGPT showed significant potential for improving content structure, data management, and outreach and communication.

The University of Surrey study calls for greater transparency in how models work, so that authors better understand how their data is used and human intervention can cover the gaps AI cannot, such as how data interacts with real-world issues.

Top universities, including the University of Singapore and Oxford, have begun to devise “philosophically-grounded ethical guidelines for using Large Language Models in academic writing”.

Rather than banning AI, which would be difficult to enforce, academics have argued for greater transparency in how models work and for researchers to be able to see how AI uses their data.
