Google is set to expand access to its Gemini chatbot to children under 13, but only those with parent-managed Google accounts, according to a report by The New York Times. The move marks a significant step in the tech giant's effort to reach younger users as the AI race intensifies.
Gemini will be available to kids whose parents use Family Link, a Google service that lets families opt their children into select Google products. A Google spokesperson said that Gemini has specific guardrails in place for younger users and that the company will not use children's data to train its AI. The rollout is widely seen as a strategic attempt by Google to capture a larger share of the younger audience, an increasingly important battleground in the AI landscape.
The development raises important questions about the regulation of AI in education and the protection of children's data. The UN Educational, Scientific and Cultural Organization (UNESCO) has already sounded the alarm, calling on governments to regulate the use of generative AI in education, including age limits for users and guardrails on data protection and user privacy. Its concerns are rooted in the risks associated with AI, including the perpetuation of bias, the spread of misinformation, and the exploitation of children's data.
Despite these concerns, chatbot makers are racing to capture younger audiences, often with little regard for the potential consequences. The imperfections and potential harms of chatbots are well-documented, yet companies are pushing forward with their AI-powered products, often prioritizing market share over user safety. Google's decision to open Gemini to kids under 13 may be seen as a calculated risk, but it also underscores the need for stricter regulations and more robust safeguards to protect children's data and well-being.
The implications of Google's move extend beyond the company itself, as it sets a precedent that other tech giants are likely to follow. As the AI race heats up, policymakers, regulators, and industry leaders must work together to establish clear guidelines and safeguards for the development and deployment of AI-powered products, particularly those aimed at children. Only through a collective effort can we ensure that AI is harnessed for the greater good rather than exploited for commercial gain.
In conclusion, Google's decision to open Gemini to kids under 13 is a notable moment for the industry. While the company's efforts to provide a safe, controlled environment for younger users are laudable, they also highlight the need for more robust regulation and stronger safeguards for children's data and well-being. As AI competition accelerates, user safety and privacy must take precedence over commercial interests.