Google Now Endorses the Use of AI in Weapons and Surveillance Efforts

Google Revises AI Principles Amid Changing Ethical Landscape

Google has significantly revised its AI principles, a document first published in 2018. The change was reported by The Washington Post, which noted the removal of specific commitments Google had previously made regarding the deployment of AI technologies in weaponry and surveillance systems.

In the latest version of the principles, the earlier section titled “applications we will not pursue” has been eliminated. Instead, a new heading, “responsible development and deployment,” has emerged. Here, Google asserts an intention to ensure “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights.”

This change marks a significant departure from the more stringent commitments in place just a month prior. Previously, Google vowed not to design AI intended for "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people." The old guidelines also included a provision against developing surveillance technologies that contravene "internationally accepted norms."

Responding to inquiries regarding these changes, a Google spokesperson referred Engadget to a blog post released by the company. In this communication, DeepMind CEO Demis Hassabis and James Manyika, Google’s senior vice president for research, outlined that the evolution of AI into a “general-purpose technology” prompted the policy revision.

The executives emphasized, “We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. Companies, governments, and organizations that share these values should collaborate to develop AI that ensures safety, fosters global growth, and bolsters national security.” They underscored a commitment to align AI pursuits with established ethical guidelines while continuously weighing the potential benefits against possible risks.

When initially published in 2018, Google's AI principles emerged as a response to backlash over Project Maven, a contentious contract that would have involved the company supplying AI technology to the Department of Defense for analyzing drone footage. The project sparked considerable unrest at Google, leading to employee resignations and internal protests against the contract. CEO Sundar Pichai expressed hope at the time that the principles would stand the test of time.

However, the landscape shifted by 2021 when Google resumed interest in military contracts, actively competing for the Pentagon’s Joint Warfighting Cloud Capability project. Notably, earlier this year, The Washington Post reported instances of Google personnel working closely with Israel’s Defense Ministry, focusing on the enhancement of AI capabilities within the military sector.

Source: www.engadget.com
