This 10 April 2018 video says about itself:
We’re Training Machines to be Racist. The Fight Against Bias is On
Face recognition software didn’t recognise Joy Buolamwini until she placed a white mask over her face. Now she’s leading the fight against lazily-coded [Amazon] algorithms that work for white males but struggle to recognise the faces and voices of women and people with non-white skin tones.
Translated from Dutch NOS TV today:
Can artificial intelligence be racist or sexist?
Governments use algorithms on an increasingly large scale to predict who will do something wrong, but also to determine what citizens need, the NOS reported yesterday. That carries a risk of discrimination.
But the risk also exists outside government. Almost all self-driving cars, for example, recognize people with darker skin colour less reliably than people with lighter skin colour. That is the striking conclusion of a recent study. And a dangerous one, because those cars then fail to properly anticipate pedestrians of colour.
Striking, but not new, says Enaam Ahmed Ali, tech watcher for the radio programme Nieuws en Co. “We can no longer do without artificial intelligence, but it is full of prejudices.” …
Artificial intelligence in courts
But it goes further, says Enaam Ahmed Ali. Banks, for example, use it to assess whether you qualify for a loan. And in the US, experiments are already being conducted with artificial intelligence in courts. There, those prejudices become really problematic.
Who is the doctor?
A well-known example where you see those prejudices come back is the translation function of Google. You can do the following experiment yourself: type “she is a doctor, he is a nurse”, translate it into a grammatically gender-neutral language such as Turkish or Persian, then translate it back. Suddenly he is the doctor and she is the nurse.
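A minimal sketch of the mechanism behind this round-trip experiment. This is not Google’s actual system: the pronoun “o” stands in for Turkish’s gender-neutral third person, and the corpus counts are invented for illustration. The point is that once gender is collapsed into a neutral form, a statistical back-translation must guess a pronoun, and it guesses the one most frequent in its training data.

```python
# Hypothetical pronoun/profession co-occurrence counts; the numbers
# are invented to mimic a skewed training corpus.
corpus_counts = {
    "doctor": {"he": 900, "she": 100},
    "nurse":  {"he": 50,  "she": 950},
}

def to_turkish(sentence):
    """Collapse gendered pronouns into the neutral 'o' (lossy step)."""
    return sentence.replace("she", "o").replace("he", "o")

def from_turkish(sentence):
    """Back-translate 'o' by picking the pronoun most frequent for the
    profession mentioned - this is exactly where the bias enters."""
    for profession, counts in corpus_counts.items():
        if profession in sentence:
            pronoun = max(counts, key=counts.get)
            return sentence.replace("o", pronoun, 1)
    return sentence

original = "she is a doctor"
neutral = to_turkish(original)       # "o is a doctor" - gender is lost
round_trip = from_turkish(neutral)   # "he is a doctor" - guessed back
print(original, "->", neutral, "->", round_trip)
```

The model is not “deciding” anything about doctors; it is simply returning the majority vote of its skewed data, which is why the fix has to happen in the data, not in the decoding step.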
Black women not recognized
Ahmed Ali: “You recently saw that very clearly with Amazon’s facial recognition tool. It turned out that black women were not recognized as women in 37 per cent of cases. This was due to a lack of diversity in the data.”
And while the consequences of face recognition apps are small,
No, dear NOS, these consequences are not small. Eg, Amazon sells its facial recognition software to police. And in England, London police facial recognition software ‘recognizes’ innocent people as criminals virtually 100% of the time. Err … maybe it is not quite as bad as 100%: British daily The Independent says it is ‘only’ 98% misidentifications. And a BBC report looks at this through even rosier glasses: ‘only’ 92% of innocent people ‘recognized’ as criminals …
they are big in the case of self-driving cars. Ahmed Ali: “Those cars see black people, but do not always recognize them as human. To the car, a person may just as well be a tree or a pole. The danger is that the car does not expect a tree or a pole to suddenly cross the road. In that case, the car can make a wrong and dangerous decision.”
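The failure mode Ahmed Ali describes can be sketched in a few lines: if one group supplies 95% of the training examples, a learned “pedestrian” template ends up fitted to that group, and recognition of the underrepresented group suffers. This is a toy simulation, not any real detector; every number below is invented, and a single brightness-like feature stands in for what is really a high-dimensional image.

```python
import random

random.seed(0)

def sample(group):
    """Draw one synthetic 'pedestrian' feature; the two groups have
    different feature distributions (invented centres)."""
    centre = 0.7 if group == "A" else 0.3
    return centre + random.uniform(-0.25, 0.25)

# Imbalanced training set: 95 samples of group A, only 5 of group B,
# so the learned prototype is dominated by group A.
train = [sample("A") for _ in range(95)] + [sample("B") for _ in range(5)]
template = sum(train) / len(train)  # a crude learned prototype

def detected(x, tolerance=0.2):
    """A pedestrian is 'recognised' if its feature is near the template."""
    return abs(x - template) <= tolerance

def recall(group, n=1000):
    """Fraction of fresh samples from a group that the detector catches."""
    hits = sum(detected(sample(group)) for _ in range(n))
    return hits / n

print("recall on group A:", recall("A"))  # high: template fits group A
print("recall on group B:", recall("B"))  # much lower: underrepresented
```

Nothing in the detector mentions the groups at all; the gap comes purely from the 95-to-5 imbalance in the training data, which is the same diagnosis Ahmed Ali gives for the Amazon case above.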
And this also works with words. If the bulk of the data speaks of male doctors, then Google’s algorithm automatically learns that association too.
Edward Snowden: With Technology, Institutions Have Made ‘Most Effective Means of Social Control in the History of Our Species’. NSA whistleblower says “new platforms and algorithms” can have direct effect on human behavior: here.