This 29 March 2019 video says about itself:
A global ban on so-called killer robots – lethal autonomous weapons that can hunt and kill without human involvement – has drawn attention to tech giants and their role in developing military artificial intelligence that might cross an ethical line.
Al Jazeera’s Science and Technology Editor Mereana Hond reports.
From the site of Dutch Christian peace organisation Pax, 19 August 2019:
Don’t be evil?
A survey of the tech sector’s stance on lethal autonomous weapons
This report investigates the role of the global tech sector in the development of lethal autonomous weapons, or killer robots.
Tech companies involved in making relevant technologies should put in place policies to make sure their products do not (unintentionally) contribute to the development of killer robots. Some companies, including Google, are already taking positive steps to this end, but others, such as Amazon and Microsoft, have so far refused to say no to killer robots. The report profiles 50 companies from around the world and ranks them from low- to high-risk based on their public statements, policies and whether they have relevant military contracts.
Download the report Don’t be Evil?
Read the summary.
Read the highlights.