Google Updates AI Policy to Allow Automated Decisions in High-Risk Domains with Human Oversight

Elliot Kim

December 17, 2024 · 4 min read

Google has revised its Generative AI Prohibited Use Policy to explicitly allow customers to deploy its AI tools to make automated decisions in high-risk domains, such as healthcare, employment, and insurance, provided a human supervises the process. The update clarifies the company's stance on using its AI for decision-making in sensitive areas.

The revised policy, published on Tuesday, specifies that customers can use Google's generative AI to make "automated decisions" that could have a "material detrimental impact on individual rights," as long as a human supervises the process. In practice, this means Google's AI can be used to decide loan approvals, screen job candidates, and handle other high-stakes applications, so long as a human remains in the decision-making loop.

In the context of AI, an automated decision is one an AI system makes based on both factual and inferred data; awarding a loan or screening a job candidate are typical examples. Google's updated policy aims to provide more transparency and guidance for customers using its AI tools in high-risk domains.
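Neither the policy nor this article prescribes an implementation, but a minimal sketch can make the human-in-the-loop requirement concrete: the model only proposes an outcome, and a human supervisor must confirm or override it before it takes effect. Everything below, including the names model_recommendation and console_reviewer, is hypothetical and not drawn from Google's policy or APIs.

```python
# Hypothetical human-in-the-loop decision gate (illustrative only;
# not Google's API or a recommended architecture).
from dataclasses import dataclass

@dataclass
class LoanApplication:
    applicant_id: str
    credit_score: int
    income: float

def model_recommendation(app: LoanApplication) -> str:
    """Stand-in for a generative-AI model's proposed decision."""
    # A real system would call a model here; this rule is a placeholder.
    return "approve" if app.credit_score >= 680 and app.income >= 40_000 else "deny"

def console_reviewer(app: LoanApplication, proposal: str) -> str:
    """A human supervisor confirms or overrides the model's proposal."""
    print(f"Applicant {app.applicant_id}: model proposes '{proposal}'")
    choice = input("Supervisor decision [approve/deny, blank keeps proposal]: ").strip().lower()
    return choice if choice in ("approve", "deny") else proposal

def decide_with_human_oversight(app: LoanApplication, reviewer) -> str:
    """The model never finalizes a decision on its own."""
    proposal = model_recommendation(app)
    return reviewer(app, proposal)

if __name__ == "__main__":
    app = LoanApplication("A-1001", credit_score=702, income=55_000.0)
    print("Final decision:", decide_with_human_oversight(app, console_reviewer))
```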

A Google spokesperson clarified that the human supervision requirement was always part of the company's policy, and that the update is intended to provide more explicit examples and clarity for users. Even so, the update is seen as a notable clarification of Google's approach to AI governance, one that acknowledges both the potential risks and benefits of using AI in high-stakes decision-making.

In contrast, Google's top AI rivals, OpenAI and Anthropic, impose more stringent rules on the use of their AI for high-risk automated decision-making. OpenAI prohibits the use of its services for automated decisions relating to credit, employment, housing, education, social scoring, and insurance. Anthropic allows its AI to be used for automated decision-making in law, insurance, healthcare, and other high-risk areas, but only under the supervision of a "qualified professional" and subject to disclosure requirements.

The use of AI in high-risk decision-making has drawn scrutiny from regulators and advocacy groups, who have expressed concerns about the technology's potential to perpetuate bias and discrimination. Studies have shown, for example, that AI used in credit and mortgage applications can perpetuate historical discrimination. The nonprofit Human Rights Watch has called for a ban on "social scoring" systems, which it argues threaten to disrupt people's access to social security support, compromise their privacy, and profile them in prejudicial ways.

In the European Union, the AI Act imposes stricter oversight on high-risk AI systems, including those that make individual credit and employment decisions. Providers of such systems must, among other requirements, register in a database, implement quality and risk management, employ human supervisors, and report incidents to the relevant authorities. In the United States, Colorado has passed a law requiring AI developers to disclose information about "high-risk" AI systems and publish statements summarizing the systems' capabilities and limitations. New York City, meanwhile, prohibits employers from using automated tools to screen candidates for employment decisions unless the tool has undergone a bias audit within the prior year.

Google's updated policy is seen as a significant step towards establishing clearer guidelines for the use of AI in high-risk domains. As the technology continues to evolve and become more pervasive, it is essential for companies, regulators, and advocacy groups to work together to ensure that AI is developed and deployed in a responsible and equitable manner.
