Google publishes ethical code for AI following Project Maven fallout


The guidelines come after an internal and external backlash to the use of artificial intelligence technology in a contract Google signed last year with the Department of Defense, known as Project Maven. Staff protests forced Google to retreat from the contract last week.

Still, CEO Sundar Pichai was careful to note that Google plans to keep working with the military "in many other areas".

Google's contract for the drone-footage analysis programme that drew the controversy will not be renewed when it expires next year, following multiple staff resignations and an employee petition signed by thousands.

Google has pledged never to work on artificial intelligence weapons projects, laying down the principle after a collaboration with the US military fomented an employee revolt.

Google's goals for the programme are that its AI be socially beneficial, avoid creating or reinforcing unfair bias, be built and tested for safety, be accountable to people, incorporate privacy design principles, and uphold high standards of scientific excellence.

"I would like to be unequivocal that Google Cloud honors its contracts", Google Cloud CEO Diane Greene said in the blog post, adding that Google would fulfill the contract in a way that's "consistent" with the company's AI principles.

Only weapons whose "principal purpose" is to cause injury will be avoided, though it is unclear which weapons that covers.

Google will continue to make high-quality information available to the public using AI, with respect for cultural, social, and legal norms in the countries where it operates, and will work to limit potentially harmful or abusive applications. Pichai specified in his blog post that Google will not use AI in weaponry, surveillance, or other areas where deployment is likely to cause overall harm.

Google has a set of new rules that look good on paper.

It is notable that Google invoked international human rights law here: just recently, the United Nations' Special Rapporteur called on technology companies to build international human rights law into their products and services by default, rather than relying on their own filtering and censorship rules, or even the censorship rules of certain local governments. Google's continued military work will mainly cover cybersecurity, healthcare, and training. That sort of conviction is what led employees to argue the company should not be in the "business of war", as they put it.

However, it is clear that the company won't work directly on weapons or other technologies designed to cause injury to people, nor on technologies that enable surveillance or otherwise violate international law or human rights. Google reportedly told its staff it would not bid to renew the contract for the Pentagon's Project Maven after it expires in 2019.

Project Maven was Google's collaboration with the US Department of Defense.
