By David Beck, SWR
It works for white men, but not for black women: facial recognition using AI – a now classic problem in AI research. In her TED Talk, Joy Buolamwini presents the results of a self-experiment: facial recognition software simply does not detect her face. Only when she puts on a white mask, reduced to the most basic features of a face – mouth, nose, eyes – does the software recognize a face.
And when it does work, it often works poorly. Buolamwini examined various facial recognition systems, including those from IBM and Microsoft. The result: the software detected all faces, but made significantly more errors with women and non-white people when determining a person's gender. For white men, the error rate was about one percent; for black women, it was 35 percent.
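The kind of evaluation Buolamwini performed can be sketched in a few lines: instead of reporting one overall error rate, the classifier's mistakes are broken down by demographic group. The data below is invented purely for illustration and does not reproduce her study.

```python
# Disaggregated evaluation: compute the error rate per demographic group
# instead of one overall number. All records here are made up.

def error_rate(records):
    """Fraction of records where the predicted gender is wrong."""
    errors = sum(1 for r in records if r["predicted"] != r["actual"])
    return errors / len(records)

# Hypothetical classifier outputs, each labelled with a demographic group.
results = [
    {"group": "white male",   "actual": "m", "predicted": "m"},
    {"group": "white male",   "actual": "m", "predicted": "m"},
    {"group": "black female", "actual": "f", "predicted": "m"},
    {"group": "black female", "actual": "f", "predicted": "f"},
]

# Bucket the results by group, then report each group's error rate.
groups = {}
for r in results:
    groups.setdefault(r["group"], []).append(r)

for group, records in groups.items():
    print(f"{group}: {error_rate(records):.0%} error rate")
```

A single aggregate accuracy figure would hide exactly the disparity this breakdown makes visible.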
Development teams not diverse enough
The problem is not the technology itself, but who develops it and how it is trained. Artificial intelligence is developed disproportionately by white men: the proportion of women in the German IT industry was just 18 percent in 2021.
Although 25 percent of the students in this field are women, young female programmers entering the workforce often report an unpleasant, male-dominated work environment, and some leave the profession again. Migrants are also underrepresented. In other countries the situation is somewhat better, but not decisively so.
Other groups are forgotten
As a result, AI developers often simply don't think of these groups. When building facial recognition, for example, they may start from their own faces and ask themselves, "How do I know this is a face?" In doing so, they can overlook features that matter for recognizing a female or black face. The resulting facial recognition software therefore recognizes white, male faces best.
According to Ksenia Keplinger, group leader at the Max Planck Institute for Intelligent Systems, this is not necessarily because developers want to exclude women or minorities, but because they simply do not think to develop their AIs for other groups as well.
Data sets not diverse enough
According to Keplinger, another source of bias in artificial intelligence is the data sets used for training. To train a facial recognition AI, for example, it is shown thousands or tens of thousands of images of faces; this is how it learns which features make up a face. Such data sets can be bought ready-made.
The problem: these data sets are usually not diverse enough. White men are often massively overrepresented, so the AI learns to recognize precisely these faces best.
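Such overrepresentation is easy to make visible by simply tallying a data set's group labels. The numbers below are invented for illustration and do not describe any real data set.

```python
# Audit a (hypothetical) face data set for group balance by counting
# how large a share of the images each demographic group makes up.
from collections import Counter

# Invented group labels for 1,000 training images.
training_labels = (
    ["white male"] * 700
    + ["white female"] * 150
    + ["black male"] * 100
    + ["black female"] * 50
)

counts = Counter(training_labels)
total = sum(counts.values())

for group, n in counts.most_common():
    print(f"{group}: {n / total:.0%} of training images")
```

A model trained on such a skewed set sees far more examples of one group and, all else being equal, will fit that group's faces best.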
Learning from the past is problematic
If an AI is trained on historical data, it adopts the prejudices contained in that data. An algorithm meant to screen job applications, for example, would be trained on data about which applications were successful in the past; in this way it would learn what matters to the company. But in almost every industry, men have historically been favored. For the AI, men would then be the better applicants per se, while migrants who were disadvantaged in the past would count as "worse" applicants and be disadvantaged further.
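This feedback loop can be illustrated with a deliberately naive "model" that does nothing but learn the historical acceptance rate per gender. The figures are invented; the point is only that a model fit to biased decisions reproduces them.

```python
# A naive screening "model" trained on (invented) historical hiring data:
# it learns each gender's past acceptance rate and scores applicants by it.

past_applications = (
    [("m", "hired")] * 80 + [("m", "rejected")] * 20   # men: 80% hired
    + [("f", "hired")] * 40 + [("f", "rejected")] * 60  # women: 40% hired
)

def learned_score(gender):
    """Acceptance rate the model has 'learned' for this gender."""
    outcomes = [outcome for g, outcome in past_applications if g == gender]
    return outcomes.count("hired") / len(outcomes)

print("score for men:", learned_score("m"))    # reproduces the old 80%
print("score for women:", learned_score("f"))  # reproduces the old 40%
```

Real screening systems are far more complex, but the mechanism is the same: whatever pattern sits in the historical decisions becomes the model's notion of a "good" applicant.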
A four-eyes principle between human and machine
With better data sets and more diverse development teams, a more neutral AI could be created. But a completely unbiased AI is not possible, Keplinger believes: since humans will always have prejudices, AIs will always have them too.
Therefore, says Keplinger, we must not blindly trust an artificial intelligence. Wherever possible, people should check the results of artificial intelligence – a kind of four-eyes principle between human and machine.
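One common way to implement such a four-eyes principle in practice is a confidence-based routing rule: the machine's verdict is accepted automatically only above some threshold, and everything else goes to a human reviewer. The threshold and function names below are assumptions for illustration, not something described in the article.

```python
# Sketch of a four-eyes workflow: low-confidence AI decisions are routed
# to a human reviewer instead of being accepted automatically.

CONFIDENCE_THRESHOLD = 0.95  # assumed cut-off, chosen for illustration

def route(prediction, confidence):
    """Return who handles this decision: the machine or a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("face detected", 0.99))  # confident enough: accepted as-is
print(route("face detected", 0.60))  # uncertain: sent to a human
```

Where the threshold sits is a policy decision; the essential point is that a person, not the model alone, signs off on the doubtful cases.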