OpenAI Revises Preparedness Framework, Says It May "Adjust" Safety Requirements if Rivals Release High-Risk AI
OpenAI's updated internal safety framework leaves room for relaxed requirements if a competing lab ships a high-risk system without comparable safeguards, fueling concerns that competitive pressure is eroding AI safety standards.
Max Carter
In a significant update to its Preparedness Framework, OpenAI has announced that it may "adjust" its safety requirements if a rival AI lab releases a "high-risk" system without comparable safeguards. This change reflects the increasing competitive pressures on commercial AI developers to deploy models quickly, and has sparked concerns about the potential relaxation of safety standards.
The Preparedness Framework is OpenAI's internal framework for deciding whether AI models are safe and what safeguards, if any, are needed during development and release. The update comes amid accusations that OpenAI has been prioritizing faster releases over safety standards and failing to deliver timely reports detailing its safety testing. In response, OpenAI says it would only make policy adjustments after rigorously confirming that the risk landscape has changed, and that it would still maintain safeguards at a level more protective of users.
The revised framework also places greater emphasis on automated evaluations to speed up product development. While OpenAI has not abandoned human-led testing entirely, it has built a growing suite of automated evaluations designed to keep up with a faster release cadence. However, some reports contradict this, alleging that OpenAI gave testers less than a week for safety checks on an upcoming major model, and that many safety tests are now conducted on earlier versions of models rather than the versions released to the public.
The update also introduces changes to how OpenAI categorizes models according to risk. The company will now focus on whether models meet one of two thresholds: "high" capability or "critical" capability. High-capability models are those that could "amplify existing pathways to severe harm," while critical-capability models are those that "introduce unprecedented new pathways to severe harm." OpenAI has stated that models that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before deployment, while models that reach critical capability require safeguards during development.
This is the first update OpenAI has made to the Preparedness Framework since 2023, and it raises important questions about the balance between innovation and safety in the development of AI models. As the AI landscape continues to evolve, it remains to be seen how OpenAI's revised framework will impact the industry, and whether rival labs will follow suit in adjusting their own safety standards.
Industry experts are divided on the implications of OpenAI's update. Some argue that the change is a necessary response to the increasing competitive pressures in the AI development space, while others worry that it could lead to a relaxation of safety standards and potentially catastrophic consequences. As the AI community continues to grapple with the risks and rewards of rapid innovation, one thing is clear: the development of safe and responsible AI models requires ongoing vigilance and a commitment to transparency and accountability.