The Campaign to Stop Killer Robots: autonomous weapons systems and UK universities

In September 2022, Warwick University featured in a report by the Campaign to Stop Killer Robots entitled ‘An Investigation Into the Role of UK Universities In the Development of Autonomous Weapons Systems’. Operating at every level from individual universities up to the UN, the Campaign to Stop Killer Robots was publicly launched in 2013 and is a coalition of more than 180 member organisations focused on combating the emerging threat of autonomous weapons systems (AWS). Its ultimate aim is a new international law to prohibit and regulate autonomous weapons and ensure meaningful human control over the use of force.

Currently, weapons systems like the Reaper drone are piloted by a human, and any decision to fire is also taken by that human; in AWS, neither function would be under human control. AWS are weapons systems that would be able to select and engage targets, including humans, without meaningful human control. Guided by sensor data, these machines would use technologies such as facial recognition to identify targets before engaging automatically.

Autonomous weapons systems have become a hot topic internationally in the years since the Campaign’s formation, with the issue being discussed at the United Nations Convention on Certain Conventional Weapons (CCW), which aims to ban or restrict the use of weapons considered excessively injurious or indiscriminate in their effects.

Autonomy is increasingly being pursued and incorporated into military technology.

While there is some debate over whether weapons that can be described as ‘fully autonomous’ currently exist, there is also clear evidence that autonomy is increasingly being pursued and incorporated into military technology, and specifically into weapons systems like unmanned aircraft. The Campaign notes that these are being developed in at least 12 countries, including the UK. One example is the STM Kargu-2, a small quadcopter categorised as a ‘loitering munition’ that flies into its target, detonating its charge on impact. According to a 2021 UN report, these were deployed in Libya, although it is unclear whether they killed any humans. Regardless, their deployment is evidence of this evolving technology making its way to the battlefield.

Other examples include the Jaeger-C and the MQ-9 Reaper. The former is an unmanned ground vehicle developed by GaardTech which is designed to ambush either people or vehicles. In ‘Chariot mode’ it will identify and engage humans with an undisclosed weapon. In ‘Goliath mode’ it will launch a kamikaze attack on a vehicle by manoeuvring into position and then detonating its armour-piercing charge. The latter already exists as a remotely piloted drone and has seen extensive use in the Middle East, but in 2020 the Pentagon announced that it had awarded a $93.3 million contract to the MQ-9’s manufacturer, GA-ASI, to equip the drones with AI technology. This technology would allow the MQ-9 Reaper to fly and identify targets autonomously.

Some advocates of these weapons believe that they will make war more humane by allowing machines to replace humans in combat missions, thereby reducing the number of potential casualties. Others argue that autonomous weapons, when used in conjunction with humans, will act as a force multiplier for boots on the ground and may even be more accurate than humans, who can act irrationally and with prejudice. Additionally, some argue that in the long run robots are cheaper to maintain than humans, so replacing even a small number of military personnel could result in significant savings.

Autonomous weapons could have either intentional or unintentional bias ingrained in their systems.

The notion that autonomous weapons would be able to act more rationally is fraught with issues, according to the Campaign. Ultimately, the weapons system would need to learn what constitutes a target and, in doing so, would be at the mercy of the training material supplied by the programmer. For instance, in a process called machine learning, facial recognition systems are given data sets (collections of images of faces) from which they learn the parameters of a face. The problem, as has been noted by researchers like Joy Buolamwini, is that these data sets are vulnerable to bias: Buolamwini demonstrated that facial recognition systems are far less accurate when identifying non-white faces and female faces. There is great concern within the Campaign that autonomous weapons could have either intentional or unintentional bias ingrained in their systems, which leads us to question their supposed accuracy.
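To make the bias mechanism concrete, the sketch below uses made-up numbers rather than data from any real system: it shows how a classifier trained mostly on one demographic group can report a reassuring overall accuracy while performing far worse on an under-represented group. The per-group breakdown is the kind of audit Buolamwini’s work popularised.

```python
# Illustrative sketch only: hypothetical numbers, no real face-recognition
# system is modelled here. Shows how overall accuracy can hide group-level bias.
from collections import defaultdict

# Each record: (demographic_group, prediction_was_correct) for one test image.
# The training set was dominated by group "A", so the model does well there
# and poorly on the under-represented group "B".
results = (
    [("A", True)] * 950 + [("A", False)] * 50 +   # 95% accuracy on group A
    [("B", True)] * 65 + [("B", False)] * 35      # 65% accuracy on group B
)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / len(results)
print(f"Overall accuracy: {overall:.0%}")   # ~92% -- looks fine in isolation
for group in sorted(totals):
    print(f"Group {group}: {correct[group] / totals[group]:.0%}")
```

In a weapons context, the same skew means a headline accuracy figure says little about how reliably the system identifies the people it is most likely to misclassify.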

Moreover, while proponents of killer robots argue that autonomous weapons wouldn’t succumb to the irrationality caused by tiredness, hunger, fear or anger, the Campaign argues that they would also lack important qualities like empathy, conscience, and emotion, as well as an understanding of human rights and human dignity. Consequently, AWS could be programmed with structural biases while also lacking any human qualities that could provide a counterbalance to that bias.

In light of this, the Campaign is deeply concerned by the increasing potential for mass atrocities that a biased AI system would present. It argues that with AWS there will be a lack of clear accountability for unlawful civilian deaths which will make both justice and reparations for a victim’s family much harder to attain.

The Campaign further argues that AWS are incompatible with international human rights law. It specifically references the Right to Life (‘no one shall be arbitrarily deprived of life’), the Right to a Remedy and Reparation, and the principle of human dignity. Aside from the Campaign, the UN Secretary-General has also taken a stance on the issue, declaring that machines with the power and discretion to take lives without human involvement are “politically unacceptable, morally repugnant and should be prohibited by international law.”

The Campaign believes universities are vulnerable to being unwitting accomplices in the birth of a deeply flawed weapon.

Recently, the Campaign released its university report. Based upon student-led research, the report details the findings of an investigation into 13 UK universities, reaching a number of conclusions about the involvement of these institutions in AWS development. It looked at current and previous projects linked to the MoD and private arms companies, finding 65 research projects that it believes are at risk of either being incorporated into AWS or furthering their development, even indirectly. That is to say, some projects were deemed to be furthering the development of enabling technology like sensors or facial recognition software. While these projects might not have autonomous weapons systems as their intended outcome, the Campaign is concerned that their findings are being funnelled back to developers at the funding companies whose aim is to create such weapons.

The report highlights how vulnerable the Campaign believes universities are to being unwitting accomplices in the birth of a deeply flawed weapon. It states that there is ‘a disturbingly close relationship between the defence sector and UK universities’, and that there are ‘inadequacies and contradictions in universities’ ethics frameworks, a lack of safeguards for the end-user and dual-use risks associated with AWS-relevant technology and a concerning absence of transparency, accountability, or ethical consideration around research with potential military applications’.

Subsequently, the Campaign is working with student campaigners at the targeted universities. Their goal is to have universities follow the example of UCL, Google DeepMind, Elon Musk, the European Association for AI, and a number of other institutions, researchers and politicians in signing the Future of Life Pledge. This would commit universities to neither supporting nor developing autonomous weapons systems. Additionally, campaigners are asking universities to produce a clear plan detailing how they will adhere to these commitments. As the Campaign notes: ‘The overall aim is to ensure that academic institutions establish a clear policy detailing procedures to certify that their research and innovation will not contribute to the development or production of autonomous weapons systems, including by implementing tangible safeguards such as ethical guidelines and committees, risk assessment protocols, and contract clauses to protect against reverse engineering and unintended harmful uses.’

With a campaign at Warwick underway, only time will tell how responsive the university is to such requests.
