Artificial intelligence, racism and sexism


This 10 April 2018 video says about itself:

We’re Training Machines to be Racist. The Fight Against Bias is On

Face recognition software didn’t recognise Joy Buolamwini until she placed a white mask over her face. Now she’s leading the fight against lazily-coded [Amazon] algorithms that work for white males but struggle to recognise the faces and voices of women and people with non-white skin tones.

Translated from Dutch NOS TV today:

Can artificial intelligence be racist or sexist?

Governments are using algorithms on an increasingly large scale to predict who will do something wrong, but also to determine what citizens need, the NOS reported yesterday. That carries a risk of discrimination.

But the risk also exists outside government. Almost all self-driving cars, eg, recognize people with a darker skin colour less well than people with a lighter skin colour. That is the striking conclusion of a recent study. It is dangerous, moreover, because the cars then fail to properly anticipate pedestrians of colour.

Striking, but not new, says Enaam Ahmed Ali, tech watcher for the radio programme Nieuws en Co. “We can no longer do without artificial intelligence, but it is full of prejudices.” …

Artificial intelligence in courts

But it goes further, says Enaam Ahmed Ali. Banks, for example, use it to assess whether you qualify for a loan. And in the US, experiments with artificial intelligence in courts are already underway. There, those prejudices suddenly become really problematic.

Who is the doctor?

A well-known example of where those prejudices resurface is Google’s translation function. You can try the experiment yourself: type “she is a doctor, he is a nurse”, translate it into a grammatically gender-neutral language such as Turkish or Persian, and then translate it back. Suddenly he is the doctor and she is the nurse.
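Below is a minimal Python sketch of that round-trip experiment. It uses the unofficial googletrans package instead of the Google Translate web page the article describes; the package, its version pin and the exact wording that comes back are assumptions for illustration, not something the article itself specifies.

```python
# Rough sketch of the round-trip experiment described above, using the
# unofficial `googletrans` package (pip install googletrans==4.0.0rc1).
# The package and version are assumptions; the article only describes
# typing the sentences into the Google Translate web page by hand.
from googletrans import Translator

translator = Translator()

original = "She is a doctor. He is a nurse."

# Translate into Turkish, which uses the gender-neutral pronoun "o".
to_turkish = translator.translate(original, src="en", dest="tr").text

# Translate back into English: the model now has to guess the genders.
back_to_english = translator.translate(to_turkish, src="tr", dest="en").text

print("Original:   ", original)
print("Turkish:    ", to_turkish)
print("Round trip: ", back_to_english)
# If the training data associates doctors with men and nurses with women,
# the round trip tends to come back as "He is a doctor. She is a nurse."
```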

Black women not recognized

Ahmed Ali: “You recently saw that very clearly with Amazon’s facial recognition tool. It showed that black women were not recognized as women in 37 per cent of cases. This was due to a lack of diversity in the data.”
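A figure like that 37 per cent comes from auditing a classifier’s error rate separately for each demographic group, rather than looking only at overall accuracy. Here is a minimal sketch of such a per-group breakdown; the groups, labels and predictions are invented for illustration and are not the study’s data.

```python
# Minimal sketch of a per-group error audit for a binary gender classifier.
# All labels and predictions below are made up for illustration only.
from collections import defaultdict

# Each record: (demographic group, true label, predicted label)
records = [
    ("darker-skinned women", "woman", "man"),
    ("darker-skinned women", "woman", "woman"),
    ("darker-skinned women", "woman", "man"),
    ("lighter-skinned men", "man", "man"),
    ("lighter-skinned men", "man", "man"),
]

errors = defaultdict(int)
totals = defaultdict(int)

for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {errors[group]}/{totals[group]} misclassified "
          f"({rate:.0%} error rate)")

# Overall accuracy can look fine while one group's error rate (here the
# darker-skinned women) is far higher -- which is exactly what a lack of
# diversity in the training data tends to produce.
```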

And while the consequences of face recognition apps are small,

No, dear NOS, these consequences are not small. Eg, Amazon sells its facial recognition software to police. And in England, London police facial recognition software ‘recognizes’ innocent people as criminals in virtually every match it makes. Err … maybe it is not quite that bad: British daily The Independent says ‘only’ 98% of its matches were misidentifications. And a BBC report looks at this even more through rose-coloured glasses: ‘only’ 92% of matches were innocent people ‘recognized’ as criminals. (A sketch of how such figures can arise follows the quote below.)

they are big in the case of self-driving cars. Ahmed Ali: “Those cars do see black people, but do not always recognize them as human. To the car it might just as well be a tree or a pole. The danger is that a tree or a pole does not suddenly cross the road. In such a case, the car can make a wrong and dangerous decision.”
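On those London match figures: a very high share of wrong matches can coexist with a seemingly low error rate per face scanned, simply because almost everyone who walks past the camera is innocent. A small sketch of that arithmetic, with invented numbers rather than anything taken from the Independent or BBC reports:

```python
# Illustrative arithmetic only -- these numbers are invented, not figures
# from the reports quoted above. The point: "98% of matches were of
# innocent people" describes the share of *alerts* that were wrong, which
# can happen even when only a small fraction of scanned faces is wrongly
# flagged, because nearly everyone in the crowd is innocent.
crowd_size = 100_000          # faces scanned (assumed)
people_on_watchlist = 20      # genuine targets actually in the crowd (assumed)
false_positive_rate = 0.01    # 1% of innocent faces wrongly flagged (assumed)
true_positive_rate = 0.75     # 75% of genuine targets correctly flagged (assumed)

innocent = crowd_size - people_on_watchlist
false_alerts = innocent * false_positive_rate           # ~1,000 innocent people flagged
true_alerts = people_on_watchlist * true_positive_rate  # ~15 genuine matches

share_of_alerts_that_are_wrong = false_alerts / (false_alerts + true_alerts)
print(f"{share_of_alerts_that_are_wrong:.0%} of alerts point at innocent people")
# With these assumed numbers, roughly 99% of alerts are misidentifications,
# even though only 1% of innocent passers-by were flagged at all.
```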

The same happens with words. If the bulk of the data speaks of male doctors, then Google’s algorithm automatically learns that association.
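That “linking together” is a statistical association learned from text. A minimal sketch of how it shows up in word embeddings, using gensim and a small pretrained GloVe model; the model name and the analogy query are illustrative assumptions, and results will vary with the embedding used.

```python
# Sketch of gendered associations in word embeddings, using gensim's
# downloader and a small pretrained GloVe model (illustrative choice).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # downloads ~130 MB on first run

# Ask the classic analogy question: doctor - man + woman = ?
print(vectors.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))

# Embeddings trained on text where doctors are mostly written about as men
# tend to rank "nurse" high in this list -- the same statistical association
# the translation round trip above reproduces.
```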

Edward Snowden: With Technology, Institutions Have Made ‘Most Effective Means of Social Control in the History of Our Species’. NSA whistleblower says “new platforms and algorithms” can have a direct effect on human behaviour: here.

6 thoughts on “Artificial intelligence, racism and sexism”

  1. Pingback: Artificial intelligence, racism and sexism — Dear Kitty. Some blog | Modern AfroIndio Times

  2. Pingback: Artificial intelligence, racism and sexism — Dear Kitty. Some blog | Indiĝenaj Inteligenteco

  3. Pingback: Dodgy facial recognition in Detroit, USA | Dear Kitty. Some blog

  4. Pingback: FBI, ICE dodgy facial recognition technology | Dear Kitty. Some blog

  5. Pingback: Amazon, Microsoft killer robots threaten | Dear Kitty. Some blog

  6. The ethics of artificial intelligence and automation amid a global pandemic

    Will a technological revolution entrench inequalities and lead to job losses, or can it be used to assist in delivering healthcare? asks OLIVIA BRIDGE

    BRITAIN’S healthcare crisis has been catapulted centre stage recently as our beloved NHS warriors battle Covid-19 deprived of personal protective equipment.

    Yet these indispensable heroes have long toiled for exhausting hours, struggling under a lethal concoction of heightened demand, a lack of resources and a Brexodus of staff; the coronavirus pandemic is just the icing on the crumbling cake.

    Vacancies are widespread. In social care, it is estimated shortages have spiralled to 122,000.

    https://morningstaronline.co.uk/article/f/ethics-artificial-intelligence-and-automation-amid-global-pandemic

