Behavioural scientists reimagine the online world
image credit: Unsplash

‘Fake news’ is one of the buzzwords of our politics. With conspiracy theories and false information increasingly spread through social media networks, we should ask how those networks operate – which algorithms determine what we see and, in some cases, believe? Their lack of transparency is a worrying development in a highly polarised world. A team of researchers from the Max Planck Institute for Human Development in Berlin (MPI), the University of Bristol and Harvard Law School have used behavioural science to suggest how we could promote a more democratic Internet.

If a news story doesn’t gain social media traction, it may as well not exist

According to the Reuters Institute Digital News Report 2019, 55% of the world’s Internet users use social media or search engines to keep up with the news. This means the structure of these platforms helps mediate how we interact with the world, and it gives them huge power. If a news story doesn’t gain social media traction, it may as well not exist.

Stephan Lewandowsky, co-author of the study and professor of cognitive psychology at the University of Bristol, explained how social networks work. “The aim is to keep users happy for as long as possible so that they stay on the platform. That’s achieved by providing entertainment and establishing a feeling of well-being – which probably explains why many platforms don’t have ‘dislike’ buttons that allow users to down-vote content. The feeling being conveyed is: you’re right. That may be harmless when we share private content such as holiday photos, but it distorts the picture when what’s being spread are radical opinions and untruths.”

Social media is designed to amplify voices and sources we’re likely to agree with

Social media algorithms also learn what you like, and they serve up things you’re likely to enjoy or agree with. Say you like content linked to a particular political party a few times: the algorithm learns that you align with that viewpoint and builds around you a network of like-minded users and similarly oriented content. This is how we wind up in echo chambers – social media is designed to amplify voices and sources we’re likely to agree with.
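
To make that feedback loop concrete, here is a minimal, hypothetical Python sketch – not any platform’s actual ranking code – in which every ‘like’ strengthens a topic weight in the user’s profile, and the feed is then re-ranked by overlap with that profile, so agreeable content rises:

```python
# Hypothetical sketch of an engagement-driven feedback loop, not any
# platform's real ranking code. Topics and posts are invented.
from collections import Counter

def update_profile(profile: Counter, liked_topics: list) -> None:
    """Every like nudges the profile weights towards those topics."""
    for topic in liked_topics:
        profile[topic] += 1

def rank_feed(profile: Counter, posts: list) -> list:
    """Score each post by overlap with the learned profile, highest first."""
    return sorted(posts,
                  key=lambda p: sum(profile[t] for t in p["topics"]),
                  reverse=True)

profile = Counter()
posts = [
    {"id": "party_a_story", "topics": ["party_a"]},
    {"id": "party_b_story", "topics": ["party_b"]},
    {"id": "party_a_economy", "topics": ["party_a", "economy"]},
]

# Like party-A content twice...
update_profile(profile, ["party_a"])
update_profile(profile, ["party_a"])

# ...and party-A posts now crowd the top of the feed.
print([p["id"] for p in rank_feed(profile, posts)])
# ['party_a_story', 'party_a_economy', 'party_b_story']
```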

So, with these online structures influencing our thinking, how do we tackle them? The research team set out to help users evaluate the quality of Internet content without curtailing online freedom. Central to their approach is empowering the individual: instead of operating a restrictive model, the team hoped to encourage people with interventions grounded in behavioural science and facilitated by the design of online spaces.

We need to understand human behaviour and take that behaviour into account in design choices

As Philipp Lorenz-Spreen, lead author of the study, said: “The interventions we propose are aimed at empowering individual users to make informed and autonomous decisions in online environments – ideally, without having to rely on independent fact checkers. The architecture of online environments influences users’ behaviour. To change that environment for the better, we need to understand human behaviour and take that behaviour into account in design choices.”

Their suggested reshaping of the Internet thus draws on two main techniques: ‘nudging’ and ‘boosting’. Nudging highlights important information to help steer behaviour, but without imposing rules or bans. Examples include indicating whether content comes from a trustworthy source, or offering hyperlinks that help verify and contextualise information. Simply making this extra information available encourages web users to draw on it.
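
As an illustration only – the study does not prescribe an implementation – a nudge of this kind might look like the following sketch, where the trusted-source list and the label wording are invented for the example:

```python
# Illustrative nudge: label each post with source context before display.
# TRUSTED_SOURCES, the label texts and the lookup URL are assumptions.
TRUSTED_SOURCES = {"reuters.com", "apnews.com"}

def nudge(post: dict) -> dict:
    """Attach a trust label and context links; never hide the post itself."""
    if post["source"] in TRUSTED_SOURCES:
        post["label"] = "Widely cited source"
        post["context_links"] = []
    else:
        post["label"] = "Unverified source - see context links"
        post["context_links"] = [
            f"https://search.example/verify?q={post['claim']}"
        ]
    return post

shown = nudge({"source": "some-blog.net", "claim": "miracle cure"})
print(shown["label"], shown["context_links"])
```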

As well as building on existing structures, the team propose new measures to nudge users. One proposal is to display data on how other users engaged with a piece of information – whether they liked it, and whether they read it to the end. The team argue that this approach could counter the formation of echo chambers, as people would be able to gauge how widely shared their opinion is. Similarly, making it slightly harder to share unverified information – for example, by asking people to click through a warning first – could encourage further reflection.
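
A sketch of that sharing friction, under similarly invented assumptions (the ‘verified’ flag and the prompt wording are hypothetical):

```python
# Sketch of share friction: unverified posts trigger one extra
# confirmation click before anything is published.
def publish(post: dict) -> None:
    print(f"shared: {post['url']}")

def share(post: dict, confirm) -> bool:
    """confirm is a callable returning True if the user clicks through."""
    if not post.get("verified", False) and not confirm(
        "This story hasn't been verified. Share anyway?"
    ):
        return False  # the nudge worked: the user paused and declined
    publish(post)
    return True

# A user who heeds the warning never spreads the unverified post.
share({"url": "https://example.net/story", "verified": False},
      confirm=lambda prompt: False)
```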

The other proposed technique is boosting, which aims to enhance user competence in the long term. Web users could adjust how their newsfeeds are sorted, and social networks could be encouraged to explain how content is weighted. Another boost would be showing users the route through which a piece of information reached them. Able to check the origin and context of content, users would essentially be given the tools to fact-check for themselves, and would learn to recognise patterns of manipulation.
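
One way to picture such boosts – with invented field names, not the study’s specification – is a feed whose sort order is chosen by the user and whose items expose the chain through which they arrived:

```python
# Illustrative boost: the user, not the platform, picks the ranking
# rule, and every item carries its sharing chain for inspection.
from datetime import datetime

posts = [
    {"headline": "A", "first_posted": datetime(2020, 6, 1), "likes": 900,
     "shared_by": ["original_outlet", "friend_1", "friend_2"]},
    {"headline": "B", "first_posted": datetime(2020, 6, 3), "likes": 40,
     "shared_by": ["original_outlet"]},
]

# User-selectable ranking rules, in place of one opaque default.
SORTS = {
    "newest": lambda p: p["first_posted"],
    "most_liked": lambda p: p["likes"],
}

def sorted_feed(posts: list, mode: str) -> list:
    """Sort the feed by whichever rule the user chose."""
    return sorted(posts, key=SORTS[mode], reverse=True)

for p in sorted_feed(posts, "newest"):
    print(p["headline"], "via", " -> ".join(p["shared_by"]))
```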

Ralph Hertwig, director of the Centre for Adaptive Rationality at the MPI, said: “The interactive nature of social media could be harnessed to promote diverse democratic dialogue and foster collective intelligence. Our aim is to find ways to strengthen the Internet’s potential to inform decision-making processes in democratic societies, bolstering them rather than undermining them. Global problems like climate change and the coronavirus pandemic require coordinated, collective solutions. And that makes a democratically interconnected online world crucial.”

Social media is not going anywhere, but questions are already being raised about regulation and about how platforms must adapt to an increasingly polarised landscape. Empowering Internet users to engage in a proactive and positive manner may be just the way to help these networks thrive – and it would be hugely beneficial to society too.
