
The ethics of robots in the workplace

It is predicted that, by 2025, robots and machines driven by artificial intelligence (AI) will perform half of all productive functions in the workplace. Companies already use robots across many industries, but deployment on that scale is likely to prompt new moral and legal questions. Machines currently have no protected legal rights but, as they become more intelligent and act more like humans, will the legal standards at play need to change? To answer that question, we need to take a hard look at the nature of robotics and at our own system of ethics, as we confront a situation unlike anything the human race has known.

Robotics is still at such an early stage that most of these questions remain hypothetical and, for now, nearly impossible to answer. Can, and should, robots be compensated for their work, and could they be represented by unions (and, if so, could a human union truly stand up for robot working rights, or would there always be an inherent tension)? Would robots, as workers, be eligible for holiday and sick leave? If a robot harms a co-worker, who would be responsible? If a robot invents a new product in the workplace, who or what owns the intellectual property?

Can robots be discriminatory, and how should that be handled? Amazon was developing a recruitment engine to find top talent and make hiring more efficient, but the company found that the AI system had developed a bias against female candidates. The tool was trained to screen applications by observing patterns in old CVs; one of those patterns was that the CVs were mostly submitted by men, and so the machine taught itself to filter out female applicants. This was certainly not Amazon’s intention, but it shows how robots can learn negative attitudes simply from the data they are trained on. And if a robot were sexist towards a co-worker, how should that behaviour be dealt with?
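A very rough sketch of how that kind of bias creeps in is below. It trains a simple classifier on synthetic “historical hiring” data in which past decisions favoured a gender-correlated signal on the CV. This is a minimal, hypothetical illustration, not Amazon’s actual system; the feature names, numbers and choice of libraries (NumPy and scikit-learn) are assumptions made purely for the example.

```python
# Minimal, hypothetical sketch of how a model absorbs bias from skewed
# historical hiring data. Not Amazon's system; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)          # a genuine ability score
proxy = rng.integers(0, 2, size=n)  # 1 = CV contains a signal historically
                                    # correlated with male applicants

# Past hiring decisions favoured the proxy signal regardless of skill --
# this is the bias baked into the training labels.
hired = (skill + 1.5 * proxy + rng.normal(scale=0.5, size=n)) > 1.0

X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

# The trained model gives the proxy feature a large positive weight, so CVs
# lacking that signal are systematically down-ranked.
print("learned weights (skill, proxy):", model.coef_[0])
```

Run as written, the weight on the gender-correlated proxy comes out at least as large as the weight on genuine skill, which is the essence of the problem: the model faithfully reproduces whatever skew already exists in its training data.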

One of the key questions linked to AI is whose intelligence we’re talking about. Avani Desai, principal and executive vice president at Schellman & Company, an independent security and privacy compliance firm, uses the example of autonomous cars: “We have allowed computers to drive and make decisions for us, such as if there is a semi coming to the right and a guard rail on the left, the algorithm makes the decision what to do.” But things, she suggests, may not be that simple. “Is it the car that is making the decision or a group of people in a room that discuss the ethics and the cost, and then provided to developers and engineers to make that technology work?”

This dilemma is a massive legal issue. If there were litigation over a decision a robot made, who would be liable? Would it be the engineer, the manufacturer, the retailer, or the robot itself? And, if it were the robot, what steps could the legal system conceivably take against it?

This discussion of robot rights may seem a way off, but there is some precedent. In late 2017, Saudi Arabia granted citizenship to a humanoid robot named Sophia, developed by the Hong Kong-based Hanson Robotics. The motivation was essentially a PR stunt intended to promote an IT conference, and so the nature of those rights, and what the move may mean for other robots, remains unclear. How Sophia has used that citizenship is a mixed picture: the robot has taken advantage of its profile to campaign for women’s rights in Saudi Arabia but, in a CNBC interview alongside its creator, it also expressed a desire to “destroy all humans.” Such destruction is echoed in the rise of weapons that can think for themselves, something that is alarming activists and some senior military personnel.

That same year, a European Parliament legal affairs committee recommended a resolution that would create a special legal status of ‘electronic persons’ for the most sophisticated robots, but the idea proved highly controversial among politicians, scientists, business people and legal experts. The idea, according to its proponents, is common sense: legal personhood would not grant robots the same rights as humans, but they would be considered ‘legal persons’, on a par with corporations. A group of 156 AI experts from 14 European countries disagreed, claiming it would be “inappropriate” from a “legal and ethical perspective”, and Noel Sharkey, emeritus professor of AI and robotics at the University of Sheffield, argued that, by proposing legal personhood for robots, manufacturers were merely trying to absolve themselves of responsibility for the actions of their machines.

Advances in robotics and AI mean that these issues are only going to become more pressing, and we are entering legally and morally unprecedented territory in dealing with them. It all links to a further question at the heart of the issue: when we discuss rights for robots, are we doing it in their interest, or our own? Once we’ve taken the lid off this debate, who knows where it will end up?
