Samsung has taken the lead with a fresh announcement, introducing a mixed reality (MR) headset built on the new Android XR platform. Codenamed “Project Moohan,” this innovative device is set to make its debut in the consumer market in 2025. I had the opportunity to try out an early version, and here’s what I discovered.
There’s a sense of déjà vu with Project Moohan—it’s as if Quest and Vision Pro collided to form a new entity. This isn’t mere speculation: one look at the headset reveals a striking resemblance to Vision Pro. From the color scheme to the button layout and even the calibration process, it seems to have taken a page out of its competitors’ playbooks.
But let’s address the elephant in the room—this isn’t about pointing fingers or throwing out plagiarism accusations. It’s common for tech companies to adopt and adapt successful ideas, striving for improvement in the process. As long as Project Moohan and Android XR harness the successful elements and sidestep the pitfalls, it’s a win-win for everyone involved, including developers and users.
And from what I can tell, the positives are indeed there.
### Hands-on With Samsung’s Project Moohan Android XR Headset
Focusing on the hardware, the Project Moohan headset is impressive, boasting a sleek design akin to Vision Pro’s goggles. Unlike Vision Pro, with its often uncomfortable soft strap, Samsung has opted for a sturdier strap with a tightening dial, resembling the Quest Pro’s ergonomic design. This design, with an open-peripheral view, shines for augmented reality (AR) applications. Additionally, just like Quest Pro, there are snap-on blinders for immersive experiences.
Despite sharing some features with Vision Pro, like button placements, one noticeable absence is an external display for showing the user’s eyes—something I find valuable in Vision Pro, despite mixed reviews. There’s a peculiar disconnect when communicating with someone wearing Project Moohan because you miss that eye contact, which Vision Pro offers with its EyeSight feature.
While Samsung remains silent on the specifics of the headset, which is still in its prototype phase, we did confirm it runs on Qualcomm’s Snapdragon XR2+ Gen 2 processor, a step up from the chips found in Quest 3 and Quest 3S.
During my hands-on, some key features stood out. The headset uses pancake lenses with automatic IPD adjustment, driven by its integrated eye-tracking. However, the field-of-view appeared somewhat narrower than that of Quest 3 or Vision Pro. This impression could change with different forehead pad configurations, which might widen the field-of-view by bringing the eyes closer to the lenses.
For now, though, Project Moohan’s field-of-view is slightly narrower than Quest 3’s, with Vision Pro sitting between the two. There’s also noticeable brightness fall-off toward the edges of the display, but this too could improve with lens-distance adjustments.
Samsung assures us that Project Moohan will come with its own controllers, though I didn’t get the chance to try them. The decision on whether these controllers will be bundled with the headset or sold separately is still up in the air.
In my trial, the input was limited to hand-tracking and eye-tracking—a familiar blend of Horizon OS and VisionOS features. You can select items using raycast cursors or the eye-and-pinch method. The inclusion of downward-facing cameras makes it easy to register pinches even when your hands are resting on your lap.
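The look-and-pinch model described above boils down to two inputs: a gaze ray from the eye tracker picks the target, and a pinch gesture confirms it. The sketch below is purely illustrative (the `Panel`, `gaze_target`, and `on_pinch` names are invented, and real hit-testing would use panel geometry rather than spheres), but it captures the basic selection logic:

```python
from dataclasses import dataclass

# Hypothetical sketch of "look-and-pinch" selection: eye tracking
# supplies a gaze ray, and a pinch (detected even with hands in the
# lap, thanks to downward-facing cameras) confirms the gazed target.

@dataclass
class Panel:
    name: str
    center: tuple   # (x, y, z) position in meters
    radius: float   # simplified spherical hit volume around the panel

def gaze_target(origin, direction, panels):
    """Return the nearest panel whose hit sphere the gaze ray passes through."""
    hit, best_t = None, float("inf")
    ox, oy, oz = origin
    dx, dy, dz = direction  # assumed normalized
    for p in panels:
        cx, cy, cz = p.center
        # Distance along the ray to the point closest to the panel center.
        t = (cx - ox) * dx + (cy - oy) * dy + (cz - oz) * dz
        if t < 0:
            continue  # panel is behind the viewer
        px, py, pz = ox + t * dx, oy + t * dy, oz + t * dz
        dist2 = (px - cx) ** 2 + (py - cy) ** 2 + (pz - cz) ** 2
        if dist2 <= p.radius ** 2 and t < best_t:
            hit, best_t = p, t
    return hit

def on_pinch(origin, direction, panels):
    """A pinch confirms whatever the user is currently gazing at."""
    target = gaze_target(origin, direction, panels)
    return target.name if target else None

panels = [Panel("YouTube", (0.0, 0.0, -2.0), 0.3),
          Panel("Maps", (1.0, 0.0, -2.0), 0.3)]
print(on_pinch((0, 0, 0), (0, 0, -1), panels))  # gazing straight ahead → YouTube
```

The key design point is the separation of concerns: gaze is a continuous, passive signal that never triggers anything on its own, while the pinch is the explicit commit, which is why resting hands in your lap still works.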
When I finally donned the headset, I immediately noticed how clearly I could see my hands in the display. The passthrough cameras deliver a sharper image than Quest 3’s, with less motion blur than Vision Pro’s, though this was under optimal lighting conditions. The cameras appeared to be focused at roughly arm’s length, rendering nearby objects more clearly than those further away.
### Inside Android XR
Now, let’s dive into Android XR. It feels like a clever fusion of Horizon OS and VisionOS right from the start. The home screen is strikingly similar to Vision Pro’s, with app icons displayed on a transparent backdrop. Select one via the look-and-pinch gesture, and it opens a floating app panel. Even gesture controls, like opening the home screen, mimic Vision Pro by using a look-at-your-palm-and-pinch method.
The system windows in Android XR are more reminiscent of Horizon OS than VisionOS, thanks to their opaque backgrounds and flexibility to move anywhere on an invisible frame surrounding each panel.
Besides flat apps, Android XR also embraces the immersive world. I had the chance to experience a VR version of Google Maps, reminiscent of Google Earth VR. It allows you to explore global locations, view detailed 3D models of major cities, and delve into Street View imagery. Interestingly, they’ve introduced volumetric captures of internal spaces, providing a new dimension to the virtual experience.
While Street View offers monoscopic 360 images, the volumetric captures are rendered in real-time and thoroughly explorable. Google described it as a Gaussian splat solution but didn’t clarify whether the captures are based on existing or new photography. Though not as sharp as photogrammetry, these captures are set to improve and currently run on-device, not streamed.
Google Photos has been optimized for Android XR, bringing the ability to convert any existing 2D photo or video from your library into 3D. The short demos I saw demonstrated high-quality conversions, comparable to Vision Pro’s capabilities.
Another app benefiting from Android XR’s innovations is YouTube. Besides watching regular content on a large curved display, it gives access to a robust library of 180, 360, and 3D videos. Google even showed me a video originally shot in 2D and then converted to 3D for the headset. The quality was quite impressive, though specifics on whether this process requires creator consent or is automatic were not given.
### The Stand-out Advantage (for now)
Hardware and software-wise, Android XR and Project Moohan might seem like Google’s take on existing market options, but they excel in one prominent area: conversational AI.
Enter Gemini, Google’s AI agent, here in its ‘Project Astra’ version: it’s designed to respond to your voice and to what it sees, blending input from both the physical and virtual environments. This integration makes it feel more intuitive and engaging than the current AI offerings in other headsets.
Even though Vision Pro runs Siri and Quest features a Meta AI agent with somewhat similar capabilities, neither measures up to Gemini’s seamless operation. While Siri primarily handles single-task commands and Meta’s AI struggles with awareness of virtual content, Gemini captures a low-framerate video feed of both the real and virtual realms, navigating interactions without awkward pauses.
A standout feature of Gemini is its memory, reportedly retaining up to 10 minutes of conversational context and key details. This allows it to recall past interactions more adeptly.
During a demo, I put Gemini through its paces in a room packed with objects, testing its prowess with tricky questions. The AI held its own, even when I challenged it to translate signs in a variety of languages, demonstrating a nuanced understanding and impressively accurate responses.
Gemini can control the headset as well. When prompted with, “take me to the Eiffel Tower,” it quickly pulled up a 3D immersive map. It can also fetch relevant YouTube videos, adding layers of content-driven interactivity to your virtual experience.
Beyond mere AI assistance, Gemini on Android XR hints at broader potential. Whether it’s sending messages, composing emails, or setting reminders, it could well become an indispensable tool in spatial computing. While it’s the most promising AI agent for headsets right now, paralleling what Meta offers on its Ray-Ban smartglasses, Apple’s and Meta’s ongoing development could soon rival these capabilities.
Gemini enriches the headset’s utility for spatial productivity, yet it seems its greatest promise lies in more compact, day-to-day smartglasses, which I’ve also explored… though that’s a story for another day.