
Why doesn’t the Quest 3 support continuous scene meshing, as other MR headsets do? Meta’s CTO explains the challenges involved.
FACTS
From a user’s perspective, room scanning remains one of the biggest hurdles for mixed reality on the Meta Quest 3 and 3S. Users must scan their environment in advance, which can take several minutes depending on the room’s size and complexity. The resulting 3D scan is also static: if furniture is moved, it must be updated manually, and often a full rescan is required.
This cumbersome process can be off-putting. After a long day at work, users often lack the motivation to scan their room before they can even start playing an MR game.
In his latest AMA on Instagram, Meta CTO Andrew Bosworth explained the challenges of supporting continuous room scanning on Quest — and assured users that this feature is coming eventually.
Yeah, there’s two different parts to this.
The first one is just compute. It’s expensive to constantly recalculate what’s happening in the environment around you, and we could be using that thermal and compute space for other things, which we think you might value more.
The second one is actually as a model, it’s kind of harder. If I have pre-computed the environment and I hand it off to that application, the application can make a lot of predictions about that environment.
If the environment is constantly changing, that application has to be much more dynamic, because that table could move, that couch could move, and what do you do as a consequence?
The truth is, most of these experiences today are being used in static environments. That will change over time. So I think we will get there, to be clear, I think it will become a continuous process over time.
And we played with versions of this in the past when we had, you know, is someone transgressing the boundary. But it’s just trade-offs both against the compute today and then against the model the developers are using over the long term.
CONTEXT
Perhaps with Meta Quest 4
The Instagram user asked a valid question, since devices like the Apple Vision Pro already have the capability to continuously scan a room.
The key difference lies in the hardware: the Apple Vision Pro uses a LiDAR sensor for scanning, whereas the Meta Quest 3 and 3S rely heavily on computationally intensive computer vision algorithms. Some VR developers are experimenting with their own solutions for continuous room scanning, but these come with significant performance costs.
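To give a rough sense of why continuous camera-based reconstruction is so compute-hungry, here is a toy Python sketch of volumetric depth fusion (in the spirit of TSDF methods like KinectFusion, which underpin many scene-meshing pipelines). This is not Meta's implementation; the grid size, camera model, and numbers are illustrative assumptions. The point is that every incoming depth frame must update a large voxel volume, and doing that continuously competes with the game for compute and thermal headroom.

```python
import numpy as np

# Toy illustration (NOT Meta's code): fusing simulated depth frames into a
# truncated signed distance field (TSDF), the core idea behind many
# continuous scene-meshing systems. Real systems use far larger grids and
# real camera intrinsics; this uses a tiny grid and a camera looking
# straight down the z axis to keep the sketch short.

VOXELS = 64   # grid resolution per axis (illustrative; real grids are larger)
TRUNC = 0.1   # truncation distance for the signed distance field

def fuse_depth_frame(tsdf, weights, depth_along_z):
    """Update the TSDF with one depth frame (toy orthographic camera)."""
    z = np.linspace(0.0, 1.0, VOXELS).reshape(1, 1, -1)
    sdf = depth_along_z[..., None] - z          # signed distance per voxel
    valid = sdf > -TRUNC                        # skip voxels far behind surface
    sdf = np.clip(sdf, -TRUNC, TRUNC) / TRUNC   # truncate and normalize
    # Weighted running average: the grid converges as frames accumulate.
    w_new = weights + valid
    tsdf_new = np.where(valid, (tsdf * weights + sdf) / np.maximum(w_new, 1), tsdf)
    return tsdf_new, w_new

# Simulate a flat surface at depth 0.5 and fuse a few frames.
tsdf = np.ones((VOXELS, VOXELS, VOXELS))      # initialize as free space
weights = np.zeros((VOXELS, VOXELS, VOXELS))
depth = np.full((VOXELS, VOXELS), 0.5)
for _ in range(5):
    tsdf, weights = fuse_depth_frame(tsdf, weights, depth)

# The zero crossing of the TSDF marks the reconstructed surface,
# which should land close to the middle of the grid (depth 0.5).
surface_idx = np.argmin(np.abs(tsdf[0, 0]))
print(surface_idx)
```

Even this toy version touches every voxel in a 64³ grid per frame; production systems process denser volumes at camera frame rates, which is the "thermal and compute space" trade-off Bosworth describes.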
If the Meta Quest 4 features a more powerful depth sensor and increased processing power, the current manual room scanning process could become a thing of the past—though adding such hardware would likely drive up the device’s price.
Earlier this year, Meta hinted that it is working on improving room scanning. However, no specific timeline has been announced.