OpenAI Rolls Out Real-Time Vision for ChatGPT's Advanced Voice Mode
The long-promised video capability lets ChatGPT Plus and Pro subscribers point their cameras at objects or share their screens for near-instant responses
Elliot Kim
After nearly seven months of anticipation, OpenAI has finally rolled out real-time video capabilities for ChatGPT, a feature first demoed earlier this year. The company announced on Thursday that Advanced Voice Mode, its human-like conversational feature for ChatGPT, is now equipped with vision, allowing users to point their smartphones at objects and receive near-instant responses.
The new capability, available to ChatGPT Plus and Pro subscribers, lets users interact with ChatGPT in a more immersive way. For instance, users can share their screens and ask ChatGPT to walk them through a settings menu or offer suggestions on a math problem. Potential applications range from education to customer support and beyond.
In a recent demo on CBS's 60 Minutes, OpenAI president Greg Brockman showcased the capabilities of Advanced Voice Mode with vision. In the demo, ChatGPT was able to "understand" what Anderson Cooper was drawing on a blackboard, providing feedback on his anatomy sketches. While the demo was impressive, it also highlighted the feature's limitations: ChatGPT made a mistake on a geometry problem, suggesting it remains prone to hallucination.
The rollout of Advanced Voice Mode with vision has been a long time coming, with multiple delays and setbacks. In April, OpenAI promised that the feature would be available to users "within a few weeks," but it wasn't until early fall that Advanced Voice Mode was released, albeit without the visual analysis component. Since then, the company has focused on bringing the voice-only Advanced Voice Mode experience to additional platforms and users in the EU.
Despite the delays, the release of Advanced Voice Mode with vision marks a significant milestone for OpenAI and the development of conversational AI. As the technology continues to evolve, we can expect to see even more sophisticated and interactive features emerge, further blurring the lines between humans and machines.
As OpenAI continues to refine Advanced Voice Mode with vision, it will be interesting to see how users adapt to this new way of interacting with ChatGPT. The rollout is expected to be complete within the next week, and the feature's accuracy and capabilities should continue to improve from there.
In the broader context, the release underscores the rapid progress being made in artificial intelligence. As AI permeates more aspects of daily life, it's essential to consider the implications of these advancements and ensure they are developed and deployed responsibly. With Advanced Voice Mode with vision, OpenAI has taken another notable step in conversational AI, and it will be worth watching how the technology evolves and what impact it has on everyday use.