Google removes pledge not to use AI for weapons or surveillance from its ethical guidelines

In 2018, Google established guidelines for its AI operations that emphasized safe and impartial use across multiple applications. This week, however, Google appears to have relaxed significant restrictions on the use of its AI technology in areas such as weapons development, surveillance, and other potentially harmful applications.

The Washington Post reported on changes to Google's AI guidelines, earlier versions of which were preserved by the Internet Archive. A comparison shows that an entire section titled "AI applications we will not pursue" was removed from the updated webpage. That section contained Google's commitment not to use AI for purposes that could cause harm, such as weapons or other technologies whose principal purpose is to injure people. The revised page also omits the earlier reference to avoiding "technologies that collect or employ information for surveillance that violate internationally recognized standards."

When asked about the changes, Google pointed to a recently published blog post penned by James Manyika, Senior Vice President at Google Research, and Demis Hassabis, CEO of Google DeepMind. In the post, the two argue that as AI grows more central to the world, AI companies should be prepared to offer their tools to government and national security clients.

It's not clear why Google walked back its pledge to keep its technology from causing harm, especially as it continues to invest billions in AI. Given these actions, though, growth and expansion appear to be the company's main focus. Keep an eye out for more developments as we continue to monitor AI advancements.

2025-02-05 18:27