
Social media bias and the science of search algorithms

Social media is undergoing a public relations crisis at the moment, with all platforms coming under fire for helping to exacerbate political tensions in a supposedly more polarised climate. However, this increased polarisation and the idea of the echo chamber are, in effect, logical endpoints of how the platforms actually work. The science behind these platforms is incredibly intricate, but it affects everybody who uses them, and its foundation lies in search algorithms.

At its most basic, an algorithm is simply a list of rules or a process to be followed – the individual steps are normally very simple by design, and obeying them exactly leads to some desired outcome. Algorithms do not have to be run by computers, but they usually are, because of the dull repetition many of these tasks require.
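To make the idea concrete, here is a minimal sketch (in Python, with names of our own choosing) of an algorithm as a short list of rules that, followed exactly, always produces the desired outcome – in this case, finding the largest number in a list:

```python
# A minimal illustration of an algorithm: a fixed list of rules,
# followed exactly, that always produces the desired outcome.
# Here the rules find the largest number in a list.

def largest(numbers):
    """Return the largest value in a non-empty list of numbers."""
    best = numbers[0]          # Rule 1: start by remembering the first number
    for n in numbers[1:]:      # Rule 2: look at each remaining number in turn
        if n > best:           # Rule 3: if it is bigger, remember it instead
            best = n
    return best                # Rule 4: report the number you remembered

print(largest([3, 41, 7, 29]))  # 41
```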


Social media companies and search engines work to create a more personalised experience for you, and the news and information they offer comes from a calculation based on two factors – content quality (the standard of the content available) and your previous history (if you’ve repeatedly clicked on a particular news site, say, the algorithm is more likely to show that site or sites like it).
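No platform publishes its actual formula, but the two-factor idea can be sketched roughly as follows. The weights, field names and numbers here are assumptions made purely for illustration, not any real ranking system:

```python
# Illustrative sketch only: a toy ranking score combining the two factors
# described above. The weights and names are invented, not a platform's
# real formula.

def rank_score(item_quality, clicks_on_source, total_clicks):
    """Blend content quality with the user's click history for a source."""
    # Share of the user's past clicks that went to this item's source
    history_affinity = clicks_on_source / total_clicks if total_clicks else 0.0
    # Weighted blend: quality matters, but repeated clicks pull a source upward
    return 0.5 * item_quality + 0.5 * history_affinity

# A frequently clicked site edges out a slightly higher-quality unfamiliar one
print(rank_score(item_quality=0.7, clicks_on_source=40, total_clicks=50))  # 0.75
print(rank_score(item_quality=0.8, clicks_on_source=2,  total_clicks=50))  # 0.42
```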

We’re being very general here but, essentially, say you’ve just joined a social media site. If you’re a 20-year-old student, it will begin by showing you news content that people in similar circumstances have liked (or, if you’ve liked particular content or pages, it will use those to help build a picture of your personality), and then the algorithm will use your response to this content to find further content that appeals to you. The idea is to avoid presenting news you won’t like, in order to keep you using the platform. This became an issue in recent political campaigns, with voters who searched for positive Trump or Clinton stories likely to be exposed to further conservative or liberal news respectively, thus creating the echo chamber.
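The feedback loop behind the echo chamber can be sketched in a few lines. Everything below is a deliberate simplification of ours – a toy model, not how any platform actually works – but it shows how repeatedly clicking one kind of story can drag a feed towards it:

```python
# Toy feedback loop illustrating how an echo chamber can emerge.
# All names, weights and probabilities are invented for illustration.
import random

profile = {"conservative": 0.5, "liberal": 0.5}  # a brand-new, neutral user

def pick_story(profile):
    """Show a story with probability proportional to the user's profile."""
    leanings = list(profile)
    weights = [profile[l] for l in leanings]
    return random.choices(leanings, weights)[0]

def record_click(profile, leaning, learning_rate=0.1):
    """Clicking a story nudges the profile towards that leaning."""
    profile[leaning] += learning_rate
    total = sum(profile.values())
    for key in profile:                 # renormalise back to probabilities
        profile[key] /= total

# Simulate a user who clicks every conservative story they are shown
for _ in range(50):
    if pick_story(profile) == "conservative":
        record_click(profile, "conservative")

print(profile)  # the feed has drifted heavily towards one leaning
```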


Part of the issue stems from the fact that these algorithms are not completely independent, and that leads to the websites that use them being biased in a multitude of ways. Pornographic websites, for example, are adept at optimising their content for search, so a human hand is required to help filter them out – and there is nothing to stop that same person suppressing other sites they disagree with.

Testifying before the US Congress, Mark Zuckerberg told the representatives that he wouldn’t be surprised if there was a left-leaning bias in Silicon Valley. Facebook came under fire in May 2016 when a group of former employees told the technology blog Gizmodo that they routinely suppressed news about prominent conservative figures (most notably in the ‘trending’ section of the website). They also claimed stories by outlets like Breitbart or Newsmax were dismissed unless The New York Times or CNN covered the same story, in which case the more left-leaning publications were promoted. Similarly, YouTube has been criticised for using the radically left-wing Southern Poverty Law Center to influence its decisions as to what content is too offensive to be placed on its site – its arbitration has effectively hit only conservative videos.


It’s more than just news coverage, however: this bias also manifests in other fields. Proprietary algorithms are routinely used, for example, to decide who gets a job interview or a loan – they use statistics and other pre-existing information to make these calls, and this can disproportionately affect minority groups and poorer communities. To give an extremely overt example, studies find that BAME (black, Asian and minority ethnic) youth and adults are more likely to re-offend than their white counterparts, so an algorithm deciding whether to release people on bail is likely to pick up on that fact and be statistically biased in favour of white people. There are thousands of factors, obvious and implicit, that influence such an algorithm.
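A deliberately simplified sketch shows how a risk score built on historical group-level statistics reproduces that disparity. All numbers, group labels and field names below are invented for illustration; this is not any real bail or credit system:

```python
# Deliberately simplified sketch: a risk score that leans on historical
# group-level statistics reproduces the disparity baked into those statistics.
# All numbers and names are invented; this is not a real system.

# Suppose historical records show different recorded re-offence rates by group
# (the recorded rates themselves reflect past policing and charging practices).
historical_reoffence_rate = {"group_a": 0.30, "group_b": 0.20}

def bail_risk_score(group, prior_offences):
    """Toy score: the group's base rate plus a penalty per prior offence."""
    return historical_reoffence_rate[group] + 0.05 * prior_offences

# Two people with identical records get different scores purely because
# the algorithm relies on the group-level statistic.
print(bail_risk_score("group_a", prior_offences=1))  # 0.35
print(bail_risk_score("group_b", prior_offences=1))  # 0.25
```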

The difficulty in examining these algorithms is that most companies are incredibly secretive about how they work, and so we can’t realistically offer solutions to the problems they pose. The issue the platforms must contend with is how to bridge the divide between offering the content the user wants to see and ensuring that they aren’t confined to an echo chamber, and that is a question that could very well shape their future.
