Facial Recognition AI: how it works, and why it fails

Facial recognition went mainstream in 2017 with the introduction of the iPhone X, which replaced Apple’s signature Touch ID fingerprint system with advanced facial recognition biometrics. For the most part, Face ID works well for its users, but even there mistakes occur. For one, my face unlocks my identical twin’s phone. This is funny, but by and large, flaws in facial recognition algorithms are unfortunately nothing to laugh about.

Phone users know they are using facial recognition to unlock their phones, but many people forget that the technology now serves a variety of other purposes. CCTV cameras monitor public spaces, and facial recognition is increasingly used for surveillance, security, and law enforcement. Naturally, the technology used to identify faces in dark, grainy CCTV footage differs from the iPhone system, but the core mechanism remains the same.

AI is also used by the military. Famously, in 2011 the US military used facial recognition technology to identify the slain al Qaeda leader Osama bin Laden. INTERPOL’s Face Recognition System was launched at the end of 2016 and has since helped to identify more than 650 criminals, fugitives, persons of interest, and missing persons. These success stories show how AI can support various organisations.

However, because of this wide use, algorithmic mistakes can have grave implications. Robert Julian-Borchak Williams’s arrest in January 2020 is believed to be the first case of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm. It is a warning sign of how facial recognition AI may be misused by law enforcement.

The case began with a shoplifting incident in Detroit, after which the store’s surveillance video was sent to the police in the hope of identifying a suspect. A digital image examiner for the Michigan State Police uploaded a still from the video to the state’s facial recognition database and had the system search for a potential match in a collection of 49 million photos, using technology supplied for $5.5 million by a company called DataWorks Plus. The face recognition algorithm pointed to Mr. Williams’s driver’s license photo. Except the system got it wrong, and an innocent man was detained for 30 hours.
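
To make that one-to-many search concrete, here is a minimal sketch of how a probe image can be ranked against a gallery of stored faceprints. Everything here is invented for illustration (random vectors stand in for real faceprints, and the gallery is tiny compared to 49 million photos); it is not DataWorks Plus’s actual system.

```python
# A toy one-to-many search: rank a gallery of stored faceprints by their
# distance to a probe print taken from CCTV footage. All vectors here are
# random stand-ins; real systems use learned embeddings.
import numpy as np

rng = np.random.default_rng(1)
gallery_ids = [f"license_{i}" for i in range(10_000)]
gallery = rng.normal(size=(10_000, 128))   # one 128-dim faceprint per photo

# A noisy version of entry 4242 plays the role of the grainy CCTV still.
probe = gallery[4242] + rng.normal(scale=0.2, size=128)

# Rank every gallery entry by distance to the probe; smallest is "best".
distances = np.linalg.norm(gallery - probe, axis=1)
for idx in np.argsort(distances)[:3]:
    print(gallery_ids[idx], round(float(distances[idx]), 2))

# Note: the search always returns the most similar photos in the gallery,
# even when the person in the probe image is not in the database at all.
```

The caveat in the last comment matters: a one-to-many search always produces a “best” candidate, which is exactly why its output should be treated as an investigative lead rather than an identification.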

To understand why AI might struggle to correctly identify faces, we need to remember that this is a very complex task for a machine, one that is easy to underestimate because it comes so naturally to most people. In fact, our brain is so well trained at it that artificial intelligence has found it very challenging to match our accuracy, which averages around 97.53%. The part of the human brain specialised in facial recognition is called the fusiform face area, and it has been critical to the survival of our species: it allows us to gather information about a person within a fraction of a second of seeing a face.

The main factors that make reliable facial recognition difficult to engineer have been termed the A-PIE problem: Ageing, Pose, Illumination, Emotions. Because of these factors, the human face is perpetually changing, which can confuse an algorithm. To overcome this, systems map the face into nodal points, such as the eye sockets, the distance between the eyes, or the width of the nose. These measurements form a unique code: the person’s own faceprint. This goes beyond facial detection technology (used, for example, for Snapchat filters), which can spot a human face but does not match it to any identity.
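
As a rough illustration of this idea (not any vendor’s actual algorithm), the sketch below turns a handful of hypothetical nodal points into a normalised measurement vector and compares two such faceprints:

```python
# A minimal sketch of the "faceprint" idea: nodal-point measurements turned
# into a vector, then compared by distance. The landmark names, coordinates,
# and threshold below are illustrative, not taken from a real system.
import numpy as np

def faceprint(landmarks: dict[str, tuple[float, float]]) -> np.ndarray:
    """Build a crude faceprint: pairwise distances between nodal points,
    normalised by the distance between the eyes so the print is
    scale-invariant (a partial answer to the 'Pose' part of A-PIE)."""
    names = sorted(landmarks)
    pts = np.array([landmarks[n] for n in names])
    # All pairwise distances between nodal points.
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    vec = d[np.triu_indices(len(names), k=1)]
    eye_dist = np.linalg.norm(
        np.array(landmarks["left_eye"]) - np.array(landmarks["right_eye"]))
    return vec / eye_dist

def same_person(a, b, threshold=0.1):
    """Declare a match if the faceprints are close enough. Real systems
    learn both the features and the threshold from data."""
    return np.linalg.norm(faceprint(a) - faceprint(b)) < threshold

# Hypothetical nodal points for two images of (possibly) the same face.
img1 = {"left_eye": (30.0, 40.0), "right_eye": (70.0, 40.0),
        "nose_tip": (50.0, 60.0), "mouth_center": (50.0, 80.0)}
img2 = {"left_eye": (31.0, 41.0), "right_eye": (71.0, 40.0),
        "nose_tip": (50.0, 61.0), "mouth_center": (50.0, 81.0)}
print(same_person(img1, img2))
```

Real systems measure dozens of nodal points and, increasingly, let a neural network learn which measurements matter, but the principle of comparing compact numerical prints is the same.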

There is one thing algorithms do inherently better than most people: learn from past mistakes. Each time an algorithm matches two faces, correctly or incorrectly, it remembers the outcome and adjusts its internal connections, building an ever richer map. This is how deep-learning algorithms work: the machine picks up past patterns and repeats them. It is key to artificial intelligence. When a research group at Facebook created DeepFace, a learning facial recognition system, it trained the algorithm on four million images uploaded by Facebook users, and the algorithm became markedly more effective, reaching 97.35% accuracy, just shy of the human benchmark mentioned above.
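
As a toy illustration of that learning loop (synthetic data rather than real face embeddings, and a deliberately simple model rather than DeepFace’s deep network), the sketch below trains a classifier on labelled pairs; every mistake shifts the weights so the next prediction is better:

```python
# "Learning from mistakes" in miniature: a classifier takes the element-wise
# difference of two face vectors and predicts same/different. Each training
# error nudges the weights, so repeated exposure to labelled pairs (as with
# DeepFace's four million images) improves future matches. The data here is
# random; a real system would use learned embeddings.
import numpy as np

rng = np.random.default_rng(0)

def make_pair(same: bool):
    """Generate a synthetic pair of 8-dim 'faceprints'."""
    a = rng.normal(size=8)
    b = a + rng.normal(scale=0.1, size=8) if same else rng.normal(size=8)
    return np.abs(a - b), 1.0 if same else 0.0

w, bias, lr = np.zeros(8), 0.0, 0.1
for step in range(2000):
    x, y = make_pair(same=bool(rng.integers(2)))
    p = 1.0 / (1.0 + np.exp(-(w @ x + bias)))   # predicted P(same person)
    # Gradient step: the bigger the mistake (p - y), the bigger the update.
    w -= lr * (p - y) * x
    bias -= lr * (p - y)

# After training, the model should assign high probability to a true match.
x, _ = make_pair(same=True)
print(1.0 / (1.0 + np.exp(-(w @ x + bias))))
```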

This is why the data an algorithm is trained on is crucial: it directly shapes how the algorithm functions. In her book Weapons of Math Destruction, data scientist Cathy O’Neil dispels the myth that algorithms are objective and fair by nature. For example, an AI system can be markedly less accurate for certain demographic groups, which amounts to bias.

Ethnic minorities and women are especially likely to be underrepresented in the data an algorithm is trained on, and the resulting technology therefore works less accurately for these groups. In 2019, a US federal study of over 100 facial recognition systems found that they falsely identified African-American and Asian faces 10 to 100 times more often than Caucasian faces. This is a huge difference. Such bias might not come from malicious intent, but it is ingrained in the system.
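
Measuring this kind of bias is straightforward once results are broken down by group. The sketch below, run on a few fabricated records, computes a false match rate per demographic group instead of one overall accuracy figure:

```python
# Disaggregated evaluation: instead of a single accuracy number, compute the
# false match rate separately per group. The records below are fabricated for
# illustration; a real audit (like the 2019 federal study) would run millions
# of image pairs per system.
from collections import defaultdict

# Each record: (demographic group, truly same person?, system said match?)
results = [
    ("group_a", False, False), ("group_a", False, True),  ("group_a", True, True),
    ("group_b", False, False), ("group_b", False, False), ("group_b", True, True),
]

false_matches = defaultdict(int)
non_matches = defaultdict(int)
for group, same_person, predicted_match in results:
    if not same_person:            # only non-matching pairs can produce
        non_matches[group] += 1    # a false match
        if predicted_match:
            false_matches[group] += 1

for group in non_matches:
    fmr = false_matches[group] / non_matches[group]
    print(f"{group}: false match rate = {fmr:.0%}")
```

Disaggregated evaluation like this is how the 2019 federal study surfaced the 10- to 100-fold gaps between groups.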

At the end of the day, algorithms don’t make decisions; humans do. AI should be thought of as a tool, not a silver-bullet solution to our problems. Had the police been more cautious in how they used it, Mr. Williams would not have been wrongfully arrested. The file that incorrectly matched him with the shoplifter stated at the top: “this document is not a positive identification. It is an investigative lead only and is not probable cause for arrest.” The police should at least have checked whether he had an alibi before jumping to conclusions.

Facial recognition technology might look like science-fiction magic, but we should be aware of its limits. These limits could be reduced by sharing and implementing best practices for using AI correctly. When that happens, the technology could help us increase accuracy and create a safer world.
