Google removes from its website its commitment not to use AI for weaponry.
This week, Google removed from its website its commitment not to develop artificial intelligence for weaponry or surveillance. The change was first spotted by Bloomberg.
The company's updated AI principles page no longer includes a section titled "applications we will not pursue," which was still present as of last week. Asked for comment, the company pointed to a new blog post on "responsible AI," in which it writes that it believes "companies, governments, and organizations that share these values should collaborate to create AI that protects people, promotes global growth, and supports national security."
Google's updated AI principles also state that the company will work to "minimize unintended or harmful outcomes and avoid unfair biases" and to align with "widely accepted principles of international law and human rights." In recent years, Google's contracts to provide cloud services to the U.S. and Israeli militaries have sparked internal protests from employees. The company has maintained that its AI is not used to harm humans, but the Pentagon's head of artificial intelligence recently suggested that some Google AI models are speeding up the U.S. military's attack process.