Alexis Rowe
Google has taken another step towards realizing its vision for augmented reality (AR) glasses with multimodal AI capabilities, unveiling a prototype device that can translate text, remember objects, and provide real-time information. The company has announced plans to release the prototype glasses to a select group of users for real-world testing, but a concrete launch timeline and pricing details remain unclear.
The prototype glasses, which run on Google's new Android XR operating system, are the culmination of Project Astra, a DeepMind-led effort to build real-time, multimodal apps and agents with AI. The device's capabilities were showcased in a demo, where it translated posters, remembered objects, and let users read texts without taking out their phone. According to DeepMind product lead Bibo Xu, the glasses' hands-free and wearable nature makes them an ideal platform for Astra.
Google's vision for AR glasses is ambitious, with the company aiming to create a seamless experience that integrates with other Android devices. The Android XR operating system is being opened up to hardware makers and developers, allowing them to build their own glasses, headsets, and experiences. However, despite the progress made, Google remains tight-lipped about the product's launch timeline, pricing, and technical details.
The AR glasses space is increasingly crowded, with companies like Meta and Snap also working on their own prototypes. However, Google's Project Astra gives it an edge over its competitors, with the multimodal AI agent set to be released as an app to beta testers soon. In a hands-on demo, the app was able to process voice and video inputs simultaneously, providing real-time information and summaries of objects and texts.
Project Astra's technology is impressive, using a combination of AI models and real-time processing to understand its surroundings. The app can remember conversations and objects for up to 10 minutes, allowing it to refer back to previous interactions. Google DeepMind has assured that it is not training its models on user data, but the technology's potential applications are vast.
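The reported ten-minute memory window suggests a short-term context store that expires old entries rather than retaining them indefinitely. As a rough illustration (not Google's actual implementation, whose details are undisclosed), such a rolling memory could be sketched as:

```python
import time
from collections import deque

class RollingMemory:
    """Illustrative sketch of a short-term context window: entries
    (e.g. recognized objects or conversation turns) expire after a
    fixed time-to-live, here defaulting to the 10 minutes described
    for Astra. Class and method names are hypothetical."""

    def __init__(self, ttl_seconds: float = 600.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable clock, useful for testing
        self._entries = deque()  # (timestamp, item) pairs, oldest first

    def add(self, item):
        """Record a new observation with the current timestamp."""
        self._entries.append((self.clock(), item))

    def recall(self):
        """Drop expired entries and return the items still in the window."""
        cutoff = self.clock() - self.ttl
        while self._entries and self._entries[0][0] < cutoff:
            self._entries.popleft()
        return [item for _, item in self._entries]
```

Expiring from the front of a deque keeps each recall cheap, since entries are stored in arrival order and only the oldest ones ever need checking.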
The AR glasses market is still in its infancy, but Google's latest developments suggest the company is making real progress toward a viable product. While the launch timeline remains unclear, the technology's potential to change how we interact with information is hard to dismiss, and competitors will have to respond to Google's latest move.
In the meantime, the tech industry will be watching closely to see how Google's AR glasses and Project Astra develop. Given the company's track record for innovation and its deep resources, further progress seems likely in the coming months. As the boundaries between technology and reality continue to blur, one thing is clear: the future of AR glasses is looking brighter than ever.