Google will not use AI to build weapons: CEO Sundar Pichai

Statement comes after the tech giant faced backlash over its involvement in “Maven”, an AI-powered Pentagon project. Google has now decided not to renew the project with the US Defence Department after it expires in 2019.

Sundar Pichai. (Photo: IANS)

Google CEO Sundar Pichai has said the company will not use Artificial Intelligence — or AI — in “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”.

The statement comes after the tech giant faced backlash over its involvement in “Maven”, an AI-powered Pentagon project.

Nearly 4,000 Google employees had signed a petition demanding “a clear policy stating that neither Google nor its contractors will ever build warfare technology”.

The “Maven” AI project with the US Defence Department expires in 2019, and Google has decided not to renew it.

In a blog post published late Thursday, Pichai emphasised that Google would not design or deploy AI in “technologies that cause or are likely to cause overall harm”, adding: “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.”

He also said Google would not pursue AI in “technologies that gather or use information for surveillance violating internationally accepted norms” and “technologies whose purpose contravenes widely accepted principles of international law and human rights”.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue,” said Pichai.

Stating that how AI is developed and used will have a significant impact on society, the Indian-born Google CEO posted: “As a leader in AI, we feel a deep responsibility to get this right.”

Pichai described seven AI “principles” in his post, and said, “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”

He said Google would incorporate its privacy principles in the development and use of its AI technologies. “We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data,” Pichai said in the post.

He said Google would strive to make high-quality and accurate information readily available using AI, while “continuing to respect cultural, social, and legal norms in the countries where it operates”.

Pichai also said Google would design AI systems that would provide appropriate opportunities for feedback, relevant explanations, and appeal. “Our AI technologies will be subject to appropriate human direction and control,” he added.

“We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief,” Pichai noted.