Image: Mohamed Mahmoud Hassan / PublicDomainPictures

Warwick Research: Responsible AI

Over the last couple of years, there has been a renewed emphasis on online safety. Recently, the debate has centred on content moderation and the question of who is responsible for monitoring the internet.

These issues have been particularly prominent this week as the US Supreme Court wrestles with Section 230 and the wider question of whether social media companies should be regarded as free-speech platforms or as publishers. The distinction determines where ultimate responsibility for content lies: with individual users or with the platforms themselves. And how does this apply when algorithms designed to promote popular content mistakenly promote harmful content instead?

Across the pond here in the UK, the Government has been grappling with similar questions as it looks to deliver on its Online Safety Bill, which aims to make the UK the ‘safest’ place to be on the internet. Amid this, Warwick University researchers like Shweta Singh, Assistant Professor of Information Systems and Management, have sought innovative solutions: ‘Responsible AI’.

What is Responsible AI and what are its uses?

Responsible AI focuses on creating unbiased algorithms that can sort and promote data in its various forms, with the goal of producing results that more fairly reflect reality.

Algorithms are sets of rules and processes used to sort inputs such as data. In modern computing, they are used in a multitude of ways, such as targeted advertising, ordering internet search results, and promoting content on social media platforms.
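To make this concrete, here is a minimal sketch, in Python, of the kind of engagement-based ranking rule a social media feed might apply. All of the field names and weights are invented for illustration and do not describe any real platform:

```python
# A minimal, hypothetical sketch of engagement-based feed ranking.
# The field names ("likes", "shares", "age_hours") and the weights are
# illustrative assumptions, not any real platform's algorithm.

def engagement_score(post: dict) -> float:
    """Score a post by raw engagement, decayed by age."""
    raw = post["likes"] + 3 * post["shares"]  # shares weighted more heavily
    return raw / (1 + post["age_hours"])      # newer posts score higher

def rank_feed(posts: list[dict]) -> list[dict]:
    """Order posts by score, highest first."""
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": 1, "likes": 120, "shares": 4, "age_hours": 6},
    {"id": 2, "likes": 40, "shares": 30, "age_hours": 2},
])
print([p["id"] for p in feed])  # [2, 1]: post 2 outranks post 1
```

A rule like this is content-blind: it amplifies whatever attracts engagement, which is exactly how harmful material can end up being promoted by mistake.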

These algorithms are also increasingly used in other applications, such as screening job applications, allowing businesses to process large numbers of candidates more quickly.

Recently, many criticisms of existing AI have focused on the sorts of content it promotes and the negative influence this can have, particularly on younger audiences. However, while such algorithms can seem problematic, Shweta Singh and other researchers at WBS highlight how, if used responsibly, they could help promote a fairer and more just society.

In an interview with The Boar, Singh put particular emphasis on this idea, highlighting that many of these algorithms currently reflect the existing biases of their makers, which in turn leads to bias in the results these systems produce. This, Singh pointed out, is often unintentional and arises from the ways we are socialised and brought up.

At Warwick, this has led researchers like Singh to pay particularly close attention to the role language plays in shaping discrimination and, through it, the outcomes of these algorithms. Singh commented that this starts with understanding how humans discriminate and, from there, how we can design Responsible AI that is far more ethical and unbiased. Researchers like those at Warwick thus argue that, if AI systems are created with these considerations in mind, they could be a powerful tool for building a more equitable society.

In doing so, Singh points to the real possibility of using Responsible AI to help counter human prejudices. One scenario in which this could be applied, Singh suggested, is combating employment discrimination: there is evidence that applicants’ names, and the stereotypes or prejudices associated with them, influence the outcome of the application process. Done correctly, AI could help remove this influence, as sketched below.
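As a hedged illustration of what removing that influence might look like, here is a minimal sketch of name-blind screening, in which identity fields are stripped before an application is scored. The field names and the toy scoring rule are assumptions made for this example, not Singh’s actual method:

```python
# Hypothetical sketch of name-blind application screening: identity
# fields are stripped before scoring, so the scorer never sees
# information that could trigger name-based prejudice.

IDENTITY_FIELDS = {"name", "email", "photo_url"}  # illustrative list

def redact(application: dict) -> dict:
    """Return a copy of the application without identity fields."""
    return {k: v for k, v in application.items() if k not in IDENTITY_FIELDS}

def score(application: dict) -> int:
    """Toy scoring rule (invented): count how many required skills match."""
    required = {"python", "sql"}
    return len(required & set(application.get("skills", [])))

applicant = {"name": "A. Example", "email": "a@example.com",
             "skills": ["python", "sql", "excel"]}
print(score(redact(applicant)))  # 2 -- scored without ever seeing the name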

Similarly, Singh pointed to the possibility of Responsible AI being used to determine more consistently whether individuals should be granted bail in countries like the US, an area often criticised for discriminatory practices.

Part of this vision also focuses on examining and improving the wider context of the information produced and analysed or, in the case of social media platforms, promoted to users. Responsible use of AI could, for example, help surface a wider range of posts to users, countering issues like the pressure to conform to certain beauty standards; one possible approach is sketched below.
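As one possible, and purely illustrative, approach, a feed could be re-ranked so that no single category of content monopolises the top slots. The category labels below are invented for the sketch:

```python
# Hypothetical sketch: interleave content categories so that one kind
# of post (e.g. a single beauty aesthetic) cannot monopolise a feed.
from collections import defaultdict
from itertools import zip_longest

def diversify(posts: list[dict]) -> list[dict]:
    """Round-robin across categories, preserving each category's order."""
    by_category = defaultdict(list)
    for post in posts:                 # posts assumed already sorted by score
        by_category[post["category"]].append(post)
    interleaved = zip_longest(*by_category.values())
    return [p for group in interleaved for p in group if p is not None]

feed = [{"id": i, "category": c}
        for i, c in enumerate(["fitness"] * 3 + ["art", "news"])]
print([p["category"] for p in diversify(feed)])
# ['fitness', 'art', 'news', 'fitness', 'fitness']
```

Interleaving like this trades a little raw engagement for variety, deliberately breaking the feedback loop in which one kind of content crowds out everything else.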

At Warwick, this research into the development of Responsible AI has taken a particular focus on five key harms: suicide, anorexia, bullying, pornography, and child violence.

Where does Responsible AI stand today?

When asked why Responsible AI hadn’t already been implemented, Singh highlighted that part of the issue lies in the fact that, until fairly recently, it was unclear what could be achieved by using AI, particularly from the perspective of policymakers. In a prior Warwick press statement, Singh stated:

“Here in the UK the government’s flagship internet regulation – the online safety bill, nearly four years in the making – is treading a difficult path between free speech and protection as it enters what many hope is the final stage. 

“But legislators need a better understanding of the technology they are seeking to govern. If tech businesses know that an extra level of Responsible AI could better govern reams of content, they nonetheless have little incentive to impose it.”

“Whistle-blowers in recent years have spoken out against Meta’s algorithms and moderation methods and their harmful impact on individuals, accusing them of putting profits over people. 

“These are commercial platforms – fewer users mean less cash. But if regulators understood what is possible – ‘intelligent’ technology that reads between the lines and sifts benign communication from the sinister – they could demand its presence in the laws they’re seeking to pass.”

Therefore, much of the power of this research seems to lie in its ability to show policymakers what is possible, giving them broader horizons on the question of online safety and who is responsible for it.

A big thanks to all those involved in helping with the creation of the above article, including Shweta Singh, who agreed to an interview with The Boar.
