In June, Mark Bridge reported that Google software engineers had refused to work on a security feature to isolate and protect Pentagon data, citing moral concerns about the company helping the US to wage war. According to Engadget, a dozen employees resigned in May, and 4,000 staff signed a petition against the project, which was halted.
The unnamed engineers were then joined by like-minded staff in a protest against an existing contract, Project Maven, to develop drone technology using Google’s artificial intelligence to scan military drone footage and identify people and vehicles. Although Google said the technology would be used for “non-offensive purposes” only, workers feared it would be used to identify targets for drone strikes in countries such as Afghanistan, where strikes have caused civilian casualties.
Google has since decided to end its involvement with Project Maven when the contract expires in 2019.
Earlier in October, the Washington Examiner reported Google’s decision not to compete for JEDI, a $10 billion Pentagon cloud-computing contract intended to improve the U.S. military’s use of artificial intelligence. The company said the project might conflict with its corporate limits on the use of its technologies, which include a pledge not to build weapons or other systems intended to cause harm.
The Tech Workers Coalition, an organization of industry employees concentrated in the San Francisco Bay Area and Seattle whose members have expressed concern about the ethics of certain uses of artificial intelligence, said the decision was based primarily on “sustained employee pressure”. It alleged that Google had intended to compete for the contract and had “courted” military officials extensively in the hope of winning such projects.
Google has since issued admirable new ethical standards, “Artificial Intelligence at Google: our principles”.
Speaking to The Verge, an American technology news and media network, a Google representative said that had these principles been published earlier, the company would not have become involved in Project Maven, which used AI to analyse surveillance footage. Although the application was described as being for “non-offensive purposes”, and would therefore have been permitted under the guidelines, the representative said that Google will continue to work with the military “in many other areas” but that this particular project was “too close for comfort”.
The document makes clear that the company will not develop AI for use in weaponry, and it suggests that Google will play it safe with future military contracts.