Google Should Not Be In Business Of War, Say Employees
Thousands of Google employees have signed an open letter asking the internet giant to stop working on a project for the US military.
Project Maven involves using artificial intelligence to improve the precision of military drone strikes.
Employees fear Google's involvement will "irreparably damage" its brand.
"We believe that Google should not be in the business of war," says the letter, which is addressed to Google chief executive Sundar Pichai.
"Therefore we ask that Project Maven be cancelled, and that Google draft, publicise and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."
No military projects
The letter, which was signed by 3,100 employees - including "dozens of senior engineers", according to the New York Times - says that staff have already raised concerns with senior management internally. Google has more than 88,000 employees worldwide.
In response to concerns raised, the head of Google's cloud business, Diane Greene, assured employees that the technology would not be used to launch weapons, nor would it be used to operate or fly drones.
However, the employees who signed the letter feel that the internet giant is putting users' trust at risk, as well as ignoring its "moral and ethical responsibility".
"We cannot outsource the moral responsibility of our technologies to third parties," the letter says.
"Google's stated values make this clear: every one of our users is trusting us. Never jeopardise that. Ever.
"Building this technology to assist the US government in military surveillance - and potentially lethal outcomes - is not acceptable."
'Non-offensive purposes'
Google confirmed that it was allowing the Pentagon to use some of its image recognition technologies as part of a military project, following an investigative report by tech news site Gizmodo in March.
A Google spokesperson told the BBC: "Maven is a well-publicised Department of Defense project and Google is working on one part of it - specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer.
"The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work.
"Any military use of machine learning naturally raises valid concerns. We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies."
The internet giant is working on developing policies for the use of its artificial intelligence technologies.