Microsoft Launches Research Project to Trace Training Data's Influence on AI Outputs
Microsoft is researching how specific training examples shape what AI models generate, an effort led by Jaron Lanier that arrives amid mounting copyright lawsuits over AI training data.
Elliot Kim
Microsoft has embarked on a research project to investigate the impact of specific training examples on the text, images, and other types of media generated by AI models. The project, revealed through a job listing on LinkedIn, aims to develop a method to "efficiently and usefully estimate" the influence of particular data, such as photos and books, on AI model outputs.
The initiative comes at a time when AI-powered generators are facing numerous intellectual property lawsuits, with many companies accused of infringing on copyrights by training their models on massive amounts of data from public websites. Microsoft itself is currently facing at least two legal challenges from copyright holders, including a lawsuit from The New York Times and several software developers who claim the company's GitHub Copilot AI coding assistant was unlawfully trained using their protected works.
The research project, led by accomplished technologist and interdisciplinary scientist Jaron Lanier, focuses on "training-time provenance," which involves tracing the most unique and influential contributors to a model's output. Lanier has previously written about the concept of "data dignity," emphasizing the need to connect digital creations with their human creators and ensure fair recognition and compensation.
While Microsoft's project is not the first of its kind, it marks a significant step towards addressing the ongoing debates over fair use doctrine and the lack of transparency in AI model training practices. Companies like Bria, Adobe, and Shutterstock have already implemented programs to compensate data owners or contributors, but these efforts are often opaque and limited in scope.
Microsoft's initiative may also be seen as a response to the growing pressure from regulatory bodies and the courts to establish clearer guidelines for AI model training and data usage. The company's move could be interpreted as an attempt to "ethics wash" its AI business or head off potential regulatory disruptions, but it nonetheless represents a notable shift towards greater accountability and transparency in the AI development community.
It remains to be seen whether Microsoft's project will lead to tangible results or simply serve as a proof of concept. The company's efforts will be closely watched, particularly in light of other AI labs' recent, controversial stances on fair use. As the AI landscape continues to evolve, Microsoft's research project may play a significant role in shaping the future of AI development and the ethical considerations that come with it.
Microsoft did not immediately respond to a request for comment on the project's details and implications.
Copyright © 2024 Starfolk. All rights reserved.