
Discriminatory facial recognition: I see what you are not

In many areas of our everyday lives, we are judged on the basis of our data traces. Algorithms compute results about us without our knowing how they come about or where they will ultimately be used. Scoring, micro-targeting, facial recognition - everywhere, opaque systems pass judgment on us. You don't have to put up with that, say Frederike Kaltheuner and Nele Obermüller. In their new book “Data Justice” they explain how to defend yourself. Here you can read an excerpt from the book, published with the kind permission of the authors and the publisher.

Data can never be 'raw' and data analysis systems cannot be 'objective'. Ultimately, it is always people who develop, control and use systems such as risk scores for very specific purposes. These tools can have prejudices built into them that are sometimes hard to pin down, because the systems are kept secret, protected by copyright or proprietary rights, or obscured by their inherent opacity. But sometimes, when a particular technology is dissected, the distortions it contains come to light. This is exactly what happened with facial recognition.

In 2015, Jacky Alciné tweeted that Google's new photo app had automatically labeled pictures of him and a friend - both of them Black - as “Gorillas”. A public outcry followed, and Google immediately declared itself appalled and said it sincerely regretted the incident.

But Google Photos isn't the only facial recognition system that recognizes white faces better than others. Joy Buolamwini of the MIT Media Lab, for example, tested facial recognition software from Microsoft, IBM and the Chinese company Face++ and found that all three systems were especially good at recognizing the gender of light-skinned men (taken together, their average error rate for this group was 0.3 percent). Dark-skinned men, however, were misclassified by the three systems in 6 percent of cases, and women with darker skin in as many as 30.3 percent of cases (PDF).
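The kind of disaggregated audit described here boils down to a simple idea: measure the error rate separately for each demographic group instead of reporting a single average that hides the disparities. Below is a minimal sketch in Python with an invented toy dataset and classifier, not Buolamwini's actual benchmark or code:

```python
# Minimal sketch of a disaggregated accuracy audit. The classifier, samples
# and group labels are hypothetical placeholders for illustration only.
from collections import defaultdict

def error_rates_by_group(samples, classify):
    """samples: iterable of (image, true_gender, group) tuples.
    classify: function mapping an image to a predicted gender."""
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_gender, group in samples:
        totals[group] += 1
        if classify(image) != true_gender:
            errors[group] += 1
    # Report the error rate per demographic group, not one overall average.
    return {group: errors[group] / totals[group] for group in totals}

# Toy example with a "classifier" that only ever predicts "male":
toy_samples = [("img1", "male", "lighter-skinned men"),
               ("img2", "female", "darker-skinned women"),
               ("img3", "female", "darker-skinned women")]
print(error_rates_by_group(toy_samples, lambda img: "male"))
# {'lighter-skinned men': 0.0, 'darker-skinned women': 1.0}
```

Even this toy example shows how a system can look flawless for one group while failing another almost completely.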

Lack of diversity in the training images

People are incredibly good at recognizing faces - so good, in fact, that we discover them everywhere: in everyday objects, in clouds, or in the contours of a slice of bread. Teaching a computer to recognize faces takes far more effort - the process is extremely complicated, but it has made significant strides over the past twenty years thanks to advances in machine learning. Facial recognition systems first detect that something is a face and then convert each captured face into what is known as biometric data: they measure facial features and transform them into a digital pattern.
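As a rough illustration of those two steps - detecting a face and turning it into a measurable digital pattern - here is a minimal sketch using the open-source face_recognition library. The image path is a placeholder, and the proprietary systems discussed in this text of course differ in their details:

```python
# Sketch of a basic facial recognition pipeline with the open-source
# face_recognition library; "photo.jpg" is a placeholder path.
import face_recognition

image = face_recognition.load_image_file("photo.jpg")

# Step 1: detect where the faces are in the image.
locations = face_recognition.face_locations(image)

# Step 2: turn each detected face into a biometric template,
# here a 128-dimensional vector of measured facial features.
encodings = face_recognition.face_encodings(image, known_face_locations=locations)

for (top, right, bottom, left), encoding in zip(locations, encodings):
    print(f"Face at ({top}, {left}): template with {len(encoding)} measurements")
```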

There are different methods, but the main reason for the immense advances in the technology in recent years are programs that rely on machine learning: fed with tens of thousands of images of different people, they teach themselves to recognize certain recurring patterns in faces. One of the reasons facial recognition software often recognizes white faces better than darker-skinned ones is the lack of diversity in the imagery used to train the systems.
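Whether a training set is skewed in this way can, in principle, be checked before any training happens. A hypothetical sketch, assuming the invented convenience of a metadata file with demographic annotations (real training sets rarely ship with such labels):

```python
# Hypothetical audit of how balanced a training set is. The metadata file
# and its "skin_type" column are invented for illustration.
import csv
from collections import Counter

def composition(metadata_csv, column="skin_type"):
    """Return the share of training images per annotated group."""
    with open(metadata_csv, newline="") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# A result like {'lighter': 0.86, 'darker': 0.14} would signal exactly the
# kind of imbalance that leads to the skewed error rates described above.
```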

Jacky Alciné is a software developer himself. When he found himself and his friend in an automatically created photo album entitled “Gorillas”, he tweeted: “I know HOW this happened; but the problem also lies in the WHY.” Actually solving the problem is extremely complex, as Meredith Whittaker, co-director of the AI Now Institute, has emphasized: “There is simply no ready-made manual that describes how such problems can really be solved.” IBM, Microsoft and other companies have since begun actively trying to remove learned distortions and prejudices from their facial recognition systems. Google Photos is also trying to set things right. As Wired reported, two years after Alciné's discovery the company sidestepped the problem by simply removing the search term “gorilla” from the photo app.

Lack of human diversity

The mere fact that it took years to track down such serious problems - and that the person who did so was not even a Google employee - suggests that much more is missing than representative image databases on which machine facial recognition can be trained. The technical departments of Western digital corporations, like the IT sector as a whole, lack human diversity. That, too, has a history: the racial bias in facial recognition finds a disturbing echo in color photography, which was originally optimized for lighter skin tones and therefore rendered people with darker skin less visible.

Machine facial recognition currently works best at recognizing white men. In practice this means that anyone who is not white and male is far more likely to be confused with someone else or to go completely undetected. In sensitive contexts like law enforcement, this can implicate people in crimes they never committed. Even in seemingly everyday settings - from international sporting events to music concerts - an automated failure to recognize someone shifts the burden of proof onto those who were not recognized, since it is now they who have to identify and justify themselves. They have to prove that they really are who they say they are - and not who the system believes them to be.

Cameras that recognize everyone and keep them in view at all times sound like an idea out of George Orwell's novel 1984, but data-driven surveillance is only reminiscent of Orwell when it is accurate. When it fails, it resembles Kafka's The Trial far more. Both dimensions can cause considerable harm - all the more so when the surveillance is carried out by complex and often unaccountable socio-technical systems.

Social justice through fairer systems?

While it is important to uncover such (often unintentionally) programmed prejudices, even fairer systems would not necessarily lead to more social justice. Black people in America and other marginalized groups know that they have always been the target of surveillance systems and discriminatory practices. In New York City, for example, so-called lantern laws in the 18th century required Black, indigenous or enslaved people to carry lanterns when they walked through the city after sunset unaccompanied by a white person. As mentioned earlier, the FBI under J. Edgar Hoover kept extensive dossiers on social movements and political dissidents - and African Americans were placed under almost blanket suspicion and surveillance.

Against this background, the software developer Nabil Hassein asks whether we should even rid facial recognition of its learned prejudices if such an improvement only means that Black people are recognized more reliably by institutions such as the police and the intelligence services. In his essay “Against Black Inclusion in Facial Recognition”, published on the Decolonized Tech blog, Hassein writes: “I see no reason to support the development or use of technologies that make it easier for the state to categorize and surveil members of my community.” The point is that the (in)accuracy and systematic bias of technologies like facial recognition is only one of the ways marginalized communities experience new, automated forms of discrimination. The purposes for which facial recognition is developed and the ways in which it is mostly used are closely tied to systemic injustices and their institutionalization.

Since 2018, travelers flying from Dubai with Emirates no longer have to queue at passport control or at e-gates. They can simply and conveniently walk through a tunnel that displays high-resolution images of an aquarium and is equipped with 80 cameras that scan both the face and the iris. The aquarium design encourages travelers to look around so that the technology can capture their faces more precisely. Whoever reaches the other end of the tunnel is welcomed by a friendly, glowing green message if they are allowed to pass. If the system does not recognize a person or flags them as suspicious, a red message appears instead. The example shows that the promise of facial recognition is an unequal one: for white, privileged and wealthy people, it means a smooth airport experience and the convenience of paying with their own face. For those who are already marginalized, it could mean exactly the opposite - invisible exclusion and automated discrimination.

A pleasant convenience in the background

Essentially, facial recognition is biometric identification at a distance. When a person's fingerprints or a blood sample are taken, they have usually consented to sharing their biometric data - unless the authorities have reason to believe they have committed a crime. Facial recognition, by contrast, often scans people's faces without their consent or knowledge. I live in London, where CCTV cameras are ubiquitous. In the summer of 2018, a construction-site sign next to the Privacy International office in Clerkenwell announced that the new Elizabeth line station would have “free WiFi, air conditioning and CCTV”. The British public seems to have internalized the idea of constant observation to such an extent that it perceives surveillance much as it does temperature regulation: as a pleasant convenience in the background.

Anyone who activates Apple Pay on an iPhone X in London can now pay for almost everything with their face: subway rides, pizza delivered to the door, or the weekend shop at the supermarket. But even in a place as heavily surveilled as the United Kingdom, blanket facial recognition is meeting growing resistance. The human rights organization Liberty, for example, is currently suing over the police's use of automated facial recognition. In the United States, Amazon and Microsoft employees drafted protest letters in the summer of 2018 to stop the sale of facial recognition software to US authorities. In New York, the artist and engineer Adam Harvey developed a make-up technique - consisting of cubist shapes laid over characteristic facial features - that prevents facial recognition algorithms from extracting a biometric profile.

A comprehensive roll-out of functioning, automated facial recognition would mean nothing less than the end of anonymity in public space. The associated dangers are obvious: FindFace, a facial recognition app launched in Russia at the beginning of 2016, allows its users to photograph people in a crowd and compare those images with profile pictures from the popular Russian social network VKontakte. According to the company, the app can identify people's online profiles with a reliability of 70 percent. The product was allegedly designed as a “tool to make new friends”, but there have been numerous cases in which FindFace was used to “out” porn actors and sex workers and to expose them in front of their friends or families and harass them. Here, too, the pattern is always the same: the same product with which some people make friends puts others in serious danger.
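The underlying idea - comparing a photo taken in a crowd against a gallery of profile pictures - can be sketched in a few lines. This is only an illustration with the open-source face_recognition library and placeholder file names, not FindFace's actual system or matching algorithm:

```python
# Sketch of one-to-many face matching against a small gallery of profile
# pictures. File names are placeholders; each image is assumed to contain
# exactly one face.
import face_recognition

profile_files = {"alice": "alice_profile.jpg", "bob": "bob_profile.jpg"}
profiles = {
    name: face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for name, path in profile_files.items()
}

# Encode the face photographed in the crowd.
query = face_recognition.face_encodings(
    face_recognition.load_image_file("crowd_photo.jpg"))[0]

# Smaller distance = more similar face; the threshold decides who counts as "identified".
distances = face_recognition.face_distance(list(profiles.values()), query)
for name, distance in zip(profiles, distances):
    if distance < 0.6:
        print(f"Possible match: {name} (distance {distance:.2f})")
```

The threshold is the whole point: set it loosely and strangers get "identified"; set it strictly and the system still misses people unevenly, depending on who it was trained on.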

Dangerous pseudoscience

Meanwhile, in Tel Aviv, a start-up called Faception claims that it can read a person's character from their face. The company says it has trained a program on facial images from a variety of sources - including photos from the internet, live video streams and mugshots - to generate proprietary classifiers, among them “high IQ”, “academic researcher”, “terrorist”, “pedophile”, “white-collar criminal”, “poker player” and “brand promoter”. There is, of course, no evidence that even a single one of these categories can be derived from facial features. As a result, the company's sales pitch has been repeatedly criticized as dangerous pseudoscience.

Faception may be a particularly dubious company, but there is a certain similarity between such automated predictions and the idea of CCTV cameras that use artificial intelligence to automatically classify certain behavior as "suspicious", "dangerous" or "violent". Both rest on a deterministic worldview, and companies around the world are currently working to develop both.

Frederike Kaltheuner and Nele Obermüller (authors), translated from the English by Felix Maschewski and Anna-Verena Nosthoff: Daten Gerechtigkeit, Nicolai Publishing & Intelligence GmbH, Berlin, 112 pages, ISBN 978-3964760111, 20 euros. Publication date: October 23, 2018. Also available as an e-book.


About the author

Guest Post

Guest contributions are articles by people who are not part of the netzpolitik.org editorial team. Sometimes we approach authors and publishers to ask for guest contributions; sometimes they approach us. Guest contributions do not necessarily reflect the opinion of the editors.
Published 11/10/2018 at 9:00 AM