Racist robots: AI systems learn to exhibit prejudice
Artificial Intelligence is booming: experts in the field predict a ‘machine learning revolution’ in which AI systems will grow their capabilities exponentially, beyond those of humans. However, this technological boom comes with many uncertainties, one of which is whether AI can exhibit racism and other prejudices.
A few articles, as well as content on YouTube, have exemplified this issue. One YouTube video from KSI showed a chatbot that “likes the KKK”, and another that said “9/11 was fairly stimulating to the brain”. Similarly, Microsoft’s chatbot Tay turned pro-Hitler within 24 hours of its launch. One of the main causes of these issues is that some programs pick up the cultural biases (which may contain overt and more subtle forms of racism) embedded in our language. With larger amounts of text and data, and better machine learning algorithms, these programs become better at absorbing those biases, and can end up treating prejudiced and racist associations as though they were acceptable, or even good.
More specifically, some of these AI systems began to associate words relating to white Europeans and/or men with ‘good’, and words relating to Africans and/or women with ‘bad’. This is a terrifying concept. Unfortunately, it reflects the often derogatory way we use language, and how we can be sexist and racist without even realising it.
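To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of association test researchers have applied to word embeddings (the Word Embedding Association Test of Caliskan and colleagues is the best-known example). The tiny hand-made vectors are illustrative stand-ins for real pre-trained embeddings such as word2vec or GloVe, so the numbers, the word labels, and the `association` helper are all hypothetical.

```python
# Sketch of an embedding association test. Words become vectors learned
# from large text corpora; cosine similarity then reveals which concepts
# the training data has pushed close together.
import numpy as np

# Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions
# and are learned from billions of words of text.
vectors = {
    "european_name": np.array([0.9, 0.1, 0.2]),
    "african_name":  np.array([0.1, 0.9, 0.2]),
    "pleasant":      np.array([0.8, 0.2, 0.3]),
    "unpleasant":    np.array([0.2, 0.8, 0.3]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word):
    """Positive score: the word sits closer to 'pleasant' than 'unpleasant'."""
    return cosine(vectors[word], vectors["pleasant"]) - \
           cosine(vectors[word], vectors["unpleasant"])

for w in ("european_name", "african_name"):
    print(w, round(association(w), 3))
# With vectors learned from biased text, scores like these come out skewed:
# names from one group score systematically closer to "pleasant".
```

The system never decides to be prejudiced; the skew is simply geometry inherited from the text it was trained on.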
There is an opportunity to correct this behaviour, however. It is arguably easier to detect bias in algorithms, which do not tend to lie or deceive, than in humans, who do. Additionally, the biases that occur in these systems tend to stem from faults in their programming or in the data they learn from. Prejudice and racism contribute little (if any) benefit to society, so it would not be rational for an AI to treat these ideas as good. These observations of prejudice suggest AI systems are still in their infancy, and other errors in their output, such as poor grammar and spelling, reinforce this view. Over time, therefore, it would be wise to develop new machine learning algorithms that ensure AI systems make the most rational decisions (which would also need to take emotional and human-centric factors into account). This is no easy feat: some experts believe that eliminating these biases may be computationally challenging, though not impossible. It is also uncertain whether AI will develop the capability to lie, and whether such systems could come to treat lying as sometimes beneficial.
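Researchers have already proposed correction techniques along these lines. One published approach, the “hard debiasing” of Bolukbasi and colleagues (2016), identifies a bias direction in the embedding space and removes each word’s component along it. The snippet below is a simplified sketch of that core projection step, using made-up vectors; the full method also equalises pairs of words, and later work suggests residual bias can survive the projection, which is one reason eliminating these biases is considered hard.

```python
# Simplified sketch of the projection step in hard debiasing:
# remove a word vector's component along an identified bias direction.
import numpy as np

def debias(vector, bias_direction):
    """Subtract the component of `vector` that lies along `bias_direction`."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vector - np.dot(vector, b) * b

# Hypothetical bias direction, e.g. derived from word pairs like (he - she).
bias_dir = np.array([1.0, -1.0, 0.0])
word_vec = np.array([0.7, 0.1, 0.4])   # an occupation word that skews "male"

print(debias(word_vec, bias_dir))     # -> [0.4, 0.4, 0.4]
# After projection the word is neutral along the bias axis: it no longer
# leans towards either end of the hypothetical he/she direction.
```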
At the moment, it is clear that AI systems do carry language biases that can make them racist, and our own issues with language are at the heart of this. As a whole, however, the benefits of AI certainly outweigh the costs, and machine learning has greatly benefited a wide variety of fields. All this will likely lead to a ‘machine learning revolution’ that is not far around the corner, making it important to ensure AI is safely maintained, both by humans and by the machines themselves, so that society benefits as a whole.