Internal Google documents have revealed that the company's lawyers and consultants raised concerns about potential human rights violations linked to its $1.2 billion cloud contract with the Israeli government, dubbed Project Nimbus. The contract, signed in 2021, has been a flashpoint, sparking protests among employees who fear it may implicate them in violence against Palestinians.
The documents, prepared for Google executives and reviewed by The New York Times, show that the company worried the contract could damage its reputation. Specifically, Google lawyers, members of the company's policy team, and outside consultants wrote that "Google Cloud services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank."
Despite these concerns, Google ultimately decided to move forward with the deal, which gives the Israeli government access to cloud services from Google and Amazon. The contract includes the use of AI tools to analyze and identify objects in images and videos, as well as videoconferencing and services to store and analyze large amounts of data.
The most lucrative part of the deal is $525 million in expected revenue from Israel's Ministry of Defense between 2021 and 2028. While significant, that sum is modest for Google, which reportedly made $258 billion in sales in 2021. It is, however, enough to give the company clout with other potential military and intelligence customers.
Google has defended the deal, stating that "the Nimbus contract is for workloads running on our commercial cloud by Israeli government ministries, who agree to comply with our Terms of Service and Acceptable Use Policy. This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services." However, separate Israeli government contract documents suggest that Project Nimbus is subject to "adjusted" terms of service rather than Google's general terms of service.
In the months leading up to the contract, Google sought input from consultants, including the firm Business for Social Responsibility (BSR). The consultants recommended that the contract bar the sale and use of Google's AI tools by the Israeli military "and other sensitive customers," and that Google incorporate into the contract its AI principles, which prohibit surveillance and weapons applications. The final contract reportedly reflected neither recommendation.
Google's concerns about the contract extended beyond human rights. The company also worried that it would be forced to accept "onerous" risks, including the possibility of conflicts with foreign or international authorities seeking Israeli data, and that it might have to "breach international legal orders" under the deal's terms.
The controversy surrounding Project Nimbus has only intensified since the Israel-Hamas war, which has killed more than 44,000 people in Gaza. Google has fired roughly 50 employees for their alleged involvement in protests against the contract. In response to the protests, Billy Van Der Laar, a Google software engineer, said, "We did not come to Google to work on technology that kills. By engaging in this contract, leadership has betrayed our trust, our AI Principles, and our humanity."
The revelations about Google's internal concerns about Project Nimbus raise important questions about the company's commitment to human rights and its willingness to prioritize profits over ethical considerations. As the tech industry continues to grapple with the implications of its work on human rights, the controversy surrounding Project Nimbus serves as a stark reminder of the need for greater transparency and accountability.