Warwick study shows that dependence on AI can lead to ‘less accurate cancer diagnosis’
A study by psychologists from the University of Warwick has suggested that over-reliance on artificial intelligence (AI) can lead to less accurate cancer diagnosis and treatment.
The study was published in Psychonomic Bulletin & Review on 24 October to mark the end of Breast Cancer Awareness Month.
It examined how the perceived accuracy of AI prompts affected medical professionals’ identification of cancer. Four different types of cancer were used in the study to simulate a clinical setting.
It found that participants who were told how accurate the AI prompts were fared far worse at identifying cancer than those who were not made aware of the AI’s accuracy.
Participants were prepared to accept the recommendation of the AI even if it went against their judgment
The psychologists suggested that participants were prepared to accept the recommendation of the AI even if it went against their judgment.
The research also found that AI brought only a limited overall improvement in cancer diagnosis, regardless of how transparent the researchers were about the AI’s accuracy.
Dr Melina Kunar, a Turing Fellow in the University of Warwick’s Department of Psychology, led the study and suggested that people may “easily veer into overreliance” on AI.
Kunar recommended implementing safeguards to monitor the AI’s accuracy.
She warned: “If we’re not careful, this incredible tool could result in over-reliance on AI against the expertise of our own medical professionals.”
The study comes amid rising interest in using AI to treat cancer patients, reduce doctors’ workloads, and cut waiting lists.
In May this year, £15.5 million in government funding was announced to roll out AI in radiotherapy departments across the country.
Rishi Sunak stated that this funding would “help cut waiting lists and make the UK the number one place for AI innovation”.
AI should be implemented with careful oversight
The University of Warwick study backs up the idea that AI should be implemented with careful oversight.
Dr Caroline Green, Early Career Research Fellow at the University of Oxford’s Institute for Ethics in AI, previously told the BBC: “It is important that people using these tools are properly trained in doing so, meaning they understand and know how to mitigate risks from technological limitations […] such as the possibility for wrong information being given.”