Hugging Face Launches Inference Providers, Bringing Third-Party Clouds to Its Platform
Hugging Face partners with SambaNova, Fal, Replicate, and Together AI to let developers run AI models on third-party cloud infrastructure directly from its platform.
Alexis Rowe
Hugging Face, a leading AI development platform, has announced the launch of Inference Providers, a feature designed to simplify the deployment of AI models on third-party cloud infrastructure. This move marks a significant shift in the company's strategy, as it partners with prominent cloud vendors, including SambaNova, Fal, Replicate, and Together AI, to provide developers with more flexibility and choice in running their AI models.
The Inference Providers feature allows developers to seamlessly integrate with the infrastructure of their preferred cloud provider, eliminating the need for manual configuration and management of underlying hardware. This serverless inference capability enables developers to focus on building and deploying AI models, while the cloud providers handle the scaling and resource allocation. With this launch, Hugging Face is expanding its ecosystem, moving beyond its in-house solution for running AI models, and embracing a collaborative approach with other industry players.
According to Hugging Face, its partners have worked closely with the company to build access to their respective data centers, enabling developers to spin up AI models, such as DeepSeek, on SambaNova's servers from a Hugging Face project page in just a few clicks. This streamlined process is expected to accelerate the development and deployment of AI models, making model deployment accessible to a broader range of developers.
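In practice, this routing is exposed programmatically as well as through the project page; developers would typically use a client such as the one in Hugging Face's `huggingface_hub` library. The sketch below is a simplified, hypothetical stand-in that only assembles the OpenAI-style request payload the router would forward to a provider. The payload shape, provider names, and the model id "deepseek-ai/DeepSeek-R1" are assumptions for illustration, not details taken from this article:

```python
# Hypothetical sketch: assemble a chat-completion request to be routed
# through Hugging Face's Inference Providers to a partner cloud.
# The payload shape and field names below are assumptions, not an
# official API specification.

def build_chat_request(model: str, provider: str, prompt: str) -> dict:
    """Build a chat-completion payload; in the real flow, Hugging Face's
    router would forward this to the chosen provider's servers, so the
    developer never manages the underlying hardware."""
    return {
        "model": model,
        # e.g. "sambanova", "fal", "replicate", "together"
        "provider": provider,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request(
    "deepseek-ai/DeepSeek-R1",  # example model id (assumption)
    "sambanova",
    "Explain serverless inference in one sentence.",
)
```

Because the provider handles scaling and resource allocation server-side, the developer's code stays the same regardless of which partner cloud ultimately serves the request.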
Hugging Face's shift in focus towards collaboration, storage, and model distribution capabilities is a strategic move to position itself as a platform-agnostic AI development hub. By partnering with third-party cloud providers, the company is acknowledging the growing importance of serverless inference and the need for developers to have more control over their AI model deployment.
In terms of pricing, developers using third-party cloud providers through Hugging Face's platform will pay the standard provider API rates, at least initially. However, the company has hinted at potential revenue-sharing agreements with provider partners in the future. All Hugging Face users receive a small quota of credits to put towards inference, with subscribers to Hugging Face Pro, the premium tier, receiving an additional $2 of credits per month.
Hugging Face, founded in 2016 as a chatbot startup, has evolved into one of the largest AI model hosting and development platforms globally. With close to $400 million in capital raised from investors, including Salesforce, Google, Amazon, and Nvidia, the company claims to be profitable. This latest move is expected to further solidify its position in the AI development landscape, as it continues to innovate and expand its offerings.
The launch of Inference Providers marks a significant milestone in Hugging Face's journey, as it opens up new possibilities for AI model deployment and collaboration. As the AI landscape continues to evolve, this move is likely to have far-reaching implications for the industry, enabling developers to build and deploy more sophisticated AI models, and driving innovation in areas such as natural language processing, computer vision, and more.