Agentic AI May Not Be the Boon Hyperscalers Are Banking On
The distributed architecture of agentic AI favors hybrid and multiprovider infrastructure, suggesting the major public clouds may not capture the growth they expect.
Sophia Steele
The hyperscalers, including AWS, Google, and Microsoft, are reeling from the slower-than-expected growth of generative AI on their platforms, and many are now placing bets on agentic AI. However, a closer look at the architectural approach of agentic AI suggests that it may not drive massive public cloud adoption as expected, instead favoring hybrid approaches and distributed architectures.
Agentic AI enables AI systems to work independently toward goals, make decisions, and manage their resources. The distributed nature of agentic AI systems means they can operate effectively across various infrastructure types, often without needing specialized GPU clusters that cloud providers heavily invest in. This flexibility in deployment options challenges the assumption that agentic AI will drive massive public cloud adoption from the big three hyperscalers.
Unlike traditional AI approaches, agentic AI systems don't require centralized processing power. Instead, they operate more like distributed networks, often running on standard hardware and coordinating across different environments. They're clever about using resources, pulling in specialized small language models when needed, and integrating with external services on demand. The real breakthrough isn't about raw power—it's about creating more intelligent, autonomous systems that can efficiently accomplish tasks.
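The routing behavior described above can be sketched in a few lines. This is a minimal, illustrative example, not a real framework: the `Task`, `LocalSLM`, `RemoteLLM`, and `Agent` names are assumptions standing in for whatever components an actual agentic system would use. The point is only that the agent prefers a small local model and escalates to an external large-model service on demand.

```python
# Hypothetical sketch: an agent routes each subtask to the cheapest
# capable backend -- a local small language model for routine steps,
# an external LLM service only when the step demands heavier reasoning.
# All class and field names here are illustrative, not a real API.
from dataclasses import dataclass


@dataclass
class Task:
    prompt: str
    needs_reasoning: bool  # does this step require a large model?


class LocalSLM:
    """Stand-in for a small model running on commodity hardware."""
    name = "local-slm"

    def run(self, task: Task) -> str:
        return f"[{self.name}] {task.prompt}"


class RemoteLLM:
    """Stand-in for a hosted large-model API, called only on demand."""
    name = "remote-llm"

    def run(self, task: Task) -> str:
        return f"[{self.name}] {task.prompt}"


class Agent:
    def __init__(self) -> None:
        self.local = LocalSLM()
        self.remote = RemoteLLM()

    def route(self, task: Task):
        # Prefer local, distributed resources; escalate only when needed.
        return self.remote if task.needs_reasoning else self.local


agent = Agent()
print(agent.route(Task("summarize log file", needs_reasoning=False)).name)
print(agent.route(Task("plan a multi-step migration", needs_reasoning=True)).name)
```

Because the large model is reached through an interface rather than baked in, the same agent can run on-premises, at the edge, or in a regional cloud without changes.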
The big cloud providers emphasize their AI and machine learning capabilities alongside data management and hybrid cloud solutions, whereas agentic AI systems are likely to take a more distributed approach. These systems will integrate with large language models primarily as external services rather than core components. This architectural pattern favors smaller, purpose-built language models and distributed processing over centralized cloud resources.
The diverse landscape of modern IT infrastructure offers ideal platforms for deploying agentic AI systems. Regional providers, sovereign clouds, managed services, colocation facilities, and private clouds can provide more cost-effective and flexible alternatives to major public clouds. This distributed approach aligns perfectly with agentic AI's need for edge computing, local processing, and hybrid architectures.
Organizations can now build scalable AI solutions that leverage the right mix of infrastructure while maintaining control over costs, performance, and data sovereignty. The efficiency of these distributed approaches shows in how they handle data movement and processing: modern systems can operate close to the data, integrating directly with storage subsystems and avoiding unnecessary I/O rather than shipping everything to a central cloud. That efficiency often makes smaller, specialized providers more attractive than hyperscalers.
Looking ahead, the growth pattern for hyperscalers may not match their expectations. The distributed nature of agentic AI, combined with the need for cost-effective, specialized solutions, suggests that growth will be spread across a broader ecosystem of providers rather than concentrated among the major public cloud platforms. The future may resemble a bridge architecture where various components act as intermediaries between different environments.
AWS, Google Cloud, and Microsoft Azure will certainly play important roles in the agentic AI landscape, but their position may be more as components of broader, more distributed architectures rather than as central, dominant platforms. Organizations implementing agentic AI solutions will likely adopt multiprovider strategies that optimize for specific requirements, costs, and performance needs rather than consolidating with a single hyperscaler.
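A multiprovider strategy of the kind described above amounts to a placement policy: match each workload to the provider that satisfies its sovereignty and cost constraints instead of defaulting to one hyperscaler. The sketch below is a simplified illustration; the provider names, attributes, and prices are invented for the example.

```python
# Hypothetical multiprovider placement policy. Each workload is placed
# on the cheapest provider that meets its constraints, rather than
# consolidating everything with a single hyperscaler.
# Provider entries and prices are illustrative assumptions.
PROVIDERS = [
    {"name": "regional-sovereign", "in_region": True, "cost_per_hr": 1.2},
    {"name": "colo-gpu", "in_region": True, "cost_per_hr": 2.0},
    {"name": "hyperscaler", "in_region": False, "cost_per_hr": 3.5},
]


def place(requires_sovereignty: bool, max_cost: float) -> str:
    """Pick the cheapest provider meeting the workload's constraints."""
    eligible = [
        p for p in PROVIDERS
        if (p["in_region"] or not requires_sovereignty)
        and p["cost_per_hr"] <= max_cost
    ]
    if not eligible:
        raise ValueError("no provider satisfies the constraints")
    return min(eligible, key=lambda p: p["cost_per_hr"])["name"]


print(place(requires_sovereignty=True, max_cost=2.5))   # regional-sovereign
print(place(requires_sovereignty=False, max_cost=5.0))  # regional-sovereign
```

In this toy model the hyperscaler is still in the pool, but it wins a workload only when its capabilities justify its price, which mirrors the "component, not center" role the article anticipates.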
As enterprises reevaluate their AI strategies, many are reconsidering their reliance on public cloud providers. The rapidly rising costs of running AI workloads on hyperscaler infrastructure have caught businesses off guard, especially when combined with the sticker shock of generative AI systems. For organizations that moved to the cloud a decade ago, expectations of cost savings have been upended, leading many to explore alternatives.
The cost of on-premises infrastructure has fallen significantly, making it a more viable option. With the greater affordability of owned or leased hardware and the availability of modern colocation providers and managed services, enterprises no longer need to manage the daily operations of a data center. This shift gives businesses cost control and flexibility without sacrificing scalability or performance.
The hyperscalers must now rethink their position in the AI ecosystem. As the market for AI infrastructure normalizes, enterprises are looking for the most efficient blend of cloud, colocation, MSP, purpose-built clouds, and on-premises solutions. Organizations prioritize sustainability, sovereignty, and resource efficiency over legacy assumptions about public cloud dominance. For hyperscalers, that means embracing this shift and adapting their offerings to remain relevant during this transition—though some initial pain is inevitable as the industry adjusts.
Copyright © 2024 Starfolk. All rights reserved.