Apple has confirmed that its generative AI platform, Apple Intelligence, will be integrated into the Vision Pro extended reality headset as part of the upcoming visionOS 2.4 update. The beta is currently available to developers, with a public release slated for April. The move marks a significant milestone in Apple's effort to position the Vision Pro as a "spatial computing" device that blurs the line between desktop computing and extended reality.
The initial set of Apple Intelligence features on the Vision Pro will focus on generating text and images, with tools like Rewrite, Proofread, and Summarize designed to streamline writing workflows on the device. These features have already rolled out on iOS, iPadOS, and macOS, but on the Vision Pro they target some of the headset's most persistent usability complaints.
One of the primary pain points on the Vision Pro has been text composition, which currently requires users to gaze at each character on the virtual keyboard and pinch two fingers together to select it. Voice dictation has been a partial workaround, and the recent AI-powered Siri upgrade bodes well for the assistant's future on the headset. Apple is banking on the combination of voice dictation and generative AI writing tools to deliver a smoother experience and encourage more users to fold the Vision Pro into their existing workflows.
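For developers, the relevant question is how much work it takes for a visionOS app to surface these writing tools. The sketch below is a minimal SwiftUI example, assuming the Writing Tools API that shipped with Apple Intelligence on iOS 18 and macOS 15 (the writingToolsBehavior(_:) modifier and its WritingToolsBehavior options) carries over to visionOS 2.4 unchanged; the view name and text field are illustrative.

```swift
import SwiftUI

// Minimal visionOS text-editing view that opts into Writing Tools.
// Assumption: the SwiftUI writingToolsBehavior(_:) modifier introduced with
// Apple Intelligence on iOS 18 / macOS 15 is available on visionOS 2.4.
struct NotesView: View {
    @State private var draft = ""

    var body: some View {
        TextEditor(text: $draft)
            // .complete requests the full Writing Tools experience
            // (Rewrite, Proofread, Summarize); .limited or .disabled
            // scale it back for fields where that isn't appropriate.
            .writingToolsBehavior(.complete)
            .padding()
    }
}
```

On iOS and macOS, Apple's guidance has been that standard system text views pick up Writing Tools automatically once Apple Intelligence is enabled, with the modifier serving mainly to tune or opt out of that behavior; if the same holds on visionOS, most existing apps should get these features without any dedicated code.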
Image Playground is another key feature arriving with the visionOS 2.4 update, bringing image generation to the wearable display. Integrated directly into the visionOS Photos app, it lets users create images from spoken prompts, further expanding the headset's capabilities.
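Image Playground also has a developer-facing framework on iPhone, so third-party visionOS apps may be able to present the same generation sheet. The following is a rough sketch based on the ImagePlayground SwiftUI API as it shipped with iOS 18.1; the modifier name, its parameters, and the URL-based completion handler are assumptions insofar as visionOS 2.4 may expose them differently, and the view name and prompt string are purely illustrative.

```swift
import SwiftUI
import UIKit
import ImagePlayground

// Hypothetical visionOS view that presents the system Image Playground sheet.
// Assumption: the ImagePlayground SwiftUI API from iOS 18.1 (imagePlaygroundSheet
// with a text concept and a file-URL completion) is unchanged on visionOS 2.4.
struct ConceptArtView: View {
    @State private var showPlayground = false
    @State private var generated: UIImage?

    var body: some View {
        VStack(spacing: 20) {
            if let generated {
                Image(uiImage: generated)
                    .resizable()
                    .scaledToFit()
            }
            Button("Generate an image") { showPlayground = true }
        }
        .padding()
        // Presents the system generation UI seeded with a text concept;
        // the completion hands back a file URL for the finished image.
        .imagePlaygroundSheet(
            isPresented: $showPlayground,
            concept: "a watercolor mountain cabin",
            onCompletion: { url in
                if let data = try? Data(contentsOf: url) {
                    generated = UIImage(data: data)
                }
            }
        )
    }
}
```

Because Apple Intelligence features are gated by hardware, language, and region, a real app would also want a runtime availability check before offering the button; the iPhone framework exposes such a check, though its exact shape on visionOS is another assumption here.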
In addition to the visionOS 2.4 update, Apple has also launched a Vision Pro companion app for the iPhone, currently in beta with iOS 18.4. The app lets users browse visionOS content such as TV shows and movies, which can then be queued up to watch on the headset. The app appears to be a response to the practical limits of wearing the headset for long stretches, in terms of both personal comfort and battery life.
The new iPhone app also lets owners manage guest accounts: the Vision Pro notifies its owner when someone attempts to sign in as a guest, and a live stream of the guest's in-headset view can be pulled up in the app.
As Apple continues to push the boundaries of extended reality, the integration of Apple Intelligence into the Vision Pro marks a significant step forward. With the update's focus on writing workflows and image generation, the company is poised to further establish the Vision Pro as a powerful tool for professionals and consumers alike.
The implications of Apple's move will be closely watched across the tech industry. With the Vision Pro positioned as a "spatial computing" device, the possibilities for future innovation and growth are vast. One thing is certain: Apple's continued investment in extended reality will have a lasting impact on the industry.