[Image: The US flag (Pixabay)]

How Twitter bots interfered with the US election

Twitter was oddly quiet on US election news on 3 November. After four years of access to Donald Trump’s presidential ambitions via 280 characters, often over and above traditional news outlets, I found the silence on election night jarring, to say the least.

Unlike in 2016, Twitter did not appear to have information that nobody else had. Further, Trump’s combined self-congratulatory and ‘discredit-opponents’ platform was not making headlines this time around. It appears those who listen to Trump continue to do so, whilst the rest of us are no longer surprised by anything he says. His Twitter feed is now littered with “this claim about election fraud is disputed”. This is a stark contrast to earlier in Trump’s presidency, when he questioned Obama’s citizenship, suggested Covid-19 could be cured with bleach and informed North Korea that his nuclear button “works”.

The US Senate report concluded that, four years ago, social media platforms were used to sow discord in the US presidential election. It is estimated that almost 19% of all election-related tweets were auto-generated. Anxious to avoid repeated accusations of permitting Russian meddling, Twitter now has a “civic integrity policy”, intended to prevent the platform being used to “purposefully manipulate or interfere in elections or other civic processes”. It labels content it believes contains manipulated media, applies US election labels to candidates’ accounts, and labels government- and state-affiliated media.


One election-influencing tool used in 2016 was the automated account created to share content, known as a ‘bot’. Bots use algorithms (essentially a set of instructions which a computer program follows) to publish tweets that look as though they were produced by a real user. Twitter bots have become more intelligent since then. They are able to make greater use of AI, producing more human-like language in their tweets. Bots which evade detection and survive on social media platforms can form botnets: networks of bots which push similar tweets and messages. Identifying these means spotting accounts that post the same hashtags and near-identical tweets at the same time, as in the rough sketch below.
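To make that concrete, here is a minimal sketch of such a co-occurrence heuristic in Python, assuming tweets are available as simple (account, hashtag, timestamp) records. The Tweet class, the 60-second window, the co-occurrence threshold and the example hashtag are all illustrative assumptions, not any platform’s actual detection logic.

```python
from collections import defaultdict
from dataclasses import dataclass
from itertools import combinations


@dataclass
class Tweet:
    account: str
    hashtag: str
    timestamp: int  # seconds since the epoch


def candidate_botnet_pairs(tweets, window=60, min_cooccurrences=3):
    """Flag pairs of accounts that post the same hashtag inside the same
    short time window unusually often - a crude coordination signal."""
    # Bucket accounts by (hashtag, time window).
    buckets = defaultdict(set)
    for t in tweets:
        buckets[(t.hashtag, t.timestamp // window)].add(t.account)

    # Count how often each pair of accounts shares a bucket.
    pair_counts = defaultdict(int)
    for accounts in buckets.values():
        for a, b in combinations(sorted(accounts), 2):
            pair_counts[(a, b)] += 1

    # Keep only pairs that co-occur often enough to look coordinated.
    return {pair: n for pair, n in pair_counts.items() if n >= min_cooccurrences}


if __name__ == "__main__":
    # Two accounts pushing the same hashtag in four synchronised bursts,
    # plus one account that happens to use the hashtag once.
    tweets = [Tweet(acct, "#stopthecount", 1604444400 + burst * 300)
              for burst in range(4)
              for acct in ("bot_a", "bot_b")]
    tweets.append(Tweet("human_c", "#stopthecount", 1604444405))
    print(candidate_botnet_pairs(tweets))
    # {('bot_a', 'bot_b'): 4} - human_c only co-occurs once, below the threshold
```

Real detection systems combine many more signals (account age, posting cadence, follower graphs, language models scoring the text itself), but the core idea of looking for improbably synchronised behaviour is the same.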

There is some hope, according to Emilio Ferrara, a data scientist at the University of Southern California in Los Angeles who studies social-media bots and how they change people’s behaviour. The evidence suggests, he says, that the number of people retweeting bot content has diminished since 2016.

Ferrara’s team recently published a paper on tweets related to the 2020 election. While human activity outweighs bot activity on average, bot activity appears concentrated around specific events. For example, one in four accounts relating to the QAnon conspiracy and using its hashtags is a bot account. Bots were also used extensively in the campaign to label Covid-19 a liberal scam.

Beyond the election, Facebook has come under recent scrutiny for how it manages and spreads content, especially in the wake of George Floyd’s death, hate speech, and the Black Lives Matter campaigns. Perhaps these events, combined with the increasing amount of time we have all spent online since Covid-19, prompted a realisation that big tech companies are accountable for what is shared on their platforms. The CEOs of Facebook, Twitter, and Google faced a grilling from Capitol Hill in October, with US senators questioning the tech companies’ responsibilities to moderate their users’ content.

It was quiet on election night. This was not the social media blackout that some were calling for, but it was significantly more intervention than had ever been attempted before. A Research Briefing for the House of Commons on social media regulation highlights the concern about harmful content and activity on social media, and suggests that the current self-regulation by companies is not enough. The truth, though, is that sometimes we like the speed of information even when we can’t be sure where it came from. Take the regulations for lockdown 2.0: I read those on Twitter first. Our relationship with the media and how we get information has changed, and it might be the case that it needs to change back.

We need to recognise the size and power of big tech firms. Any talk of censorship comes hand in hand with worries over freedom of expression. China chooses not to allow these social media platforms and opts for its own state-run versions, but this removes the right to free speech. There is a halfway point to be found: whilst freedom of speech should be protected, the freedom to lie should not. It is right that tech companies are under pressure to do more to limit the spread of “non”-information, including that created by bots.
