This 26 July 2019 video from the USA says about itself:
How police manipulate facial recognition
Police across the country are using facial recognition to check IDs and find suspects — but are they using it the right way? A new study from Georgetown Law’s Center on Privacy & Technology suggests even good algorithms can be put to bad uses, particularly once police start getting creative with the images.
Read more here.
FACIAL RECOGNITION FALSELY IDS LAWMAKERS AS CRIMINALS The ACLU’s Northern California branch released its findings from running photos of all 120 California state legislators against a database of 25,000 publicly available mugshots using common facial recognition software. The software identified 26 state legislators, more than one in five, as criminals. And a disproportionate number of those lawmakers were people of color. [HuffPost]
The New York Times published a lengthy profile on Sunday of a company called Clearview AI that has developed a breakthrough facial recognition app reportedly being used by more than 600 law enforcement agencies across the US: here.
A Metropolitan Police scheme to snoop on Londoners using live facial-recognition (LFR) technology is “dangerous” and a “threat to human rights,” privacy campaigners warned today. Warnings flooded in after the Met announced it would begin deploying the technology, which has failed multiple trials, across the capital’s streets and claimed the measure would help fight serious crime: here.
Internal documents leaked to The Intercept show that the European Union (EU) is creating the legislative framework for implementing an international facial recognition database that will likely be integrated with a similar system already in place in the US: here.