Dynamic occlusion on Quest 3 is currently supported by only a handful of apps, but it now offers improved quality, uses less GPU and CPU, and is a bit easier for developers to implement.
Occlusion, the ability to make virtual objects appear behind real objects, is an important feature for mixed reality headsets. Occlusion applied only to pre-scanned scenery is called static occlusion; if the system also handles scenery changes and moving objects, it's called dynamic occlusion.
A basic explanation of the general concept of occlusion, from Meta.
Quest 3 was released with support for static occlusion but not dynamic occlusion. A few days later, dynamic occlusion arrived as an “experimental” feature for developers, meaning apps using it couldn’t ship on the Quest Store or in App Lab; that restriction was lifted in December.
Developers implement dynamic occlusion on a per-app basis using Meta’s Depth API, which provides a coarse per-frame depth map generated by the headset. However, integrating this is a relatively complex process; developers must modify the shaders of every virtual object they want to occlude, far from the ideal scenario of a one-click solution. As such, only a few Quest 3 mixed reality apps currently support dynamic occlusion.
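For a sense of what that shader work involves, here is a minimal sketch of the underlying technique in a Unity built-in-pipeline shader: compare each virtual fragment’s distance from the camera against the real-world depth map and fade the fragment out where real geometry is closer. Note that `_EnvironmentDepthTexture` and the metric depth units are stand-in assumptions for illustration; the actual Depth API ships its own shader includes and macros that handle this sampling.

```
// Minimal sketch of per-fragment occlusion against an environment depth map.
// _EnvironmentDepthTexture is a stand-in name for the headset-provided depth map;
// the real Depth API exposes this through its own shader includes and macros.
Shader "Examples/SoftDepthOcclusion"
{
    SubShader
    {
        Tags { "Queue" = "Transparent" }
        Blend SrcAlpha OneMinusSrcAlpha

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _EnvironmentDepthTexture; // assumed: real-world depth in meters

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 screenPos : TEXCOORD0; // used to sample the depth map in screen space
                float viewDepth : TEXCOORD1;  // virtual fragment's distance from the camera
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(v.vertex);
                o.screenPos = ComputeScreenPos(o.pos);
                o.viewDepth = -UnityObjectToViewPos(v.vertex.xyz).z; // positive distance in meters
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                float2 uv = i.screenPos.xy / i.screenPos.w;
                float realDepth = tex2D(_EnvironmentDepthTexture, uv).r; // assumed metric depth

                // Soften the cutoff: fade the virtual pixel out as it passes behind
                // real geometry instead of clipping it with a hard edge.
                float visibility = saturate((realDepth - i.viewDepth) / 0.05 + 0.5);

                fixed4 col = fixed4(1, 1, 1, 1); // stand-in albedo
                col.a *= visibility;
                return col;
            }
            ENDCG
        }
    }
}
```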
Another issue with dynamic occlusion on the Quest 3 is that the resolution of the depth map is so low that you see empty gaps around the edges of objects and don’t get details like the spaces between your fingers.
Footage from Meta.
But with v67 of the Meta XR Core SDK, Meta has made slight improvements to the visual quality of the Depth API and significantly optimized its performance: the company claims that GPU usage has been reduced by 80% and CPU usage by 50%, freeing up extra resources for developers.
To make this feature easier for developers to integrate, v67 also adds support for applying occlusion to shaders built with Unity’s Shader Graph tool, and the Depth API’s code has been refactored to be simpler to work with.
I’ve tried the Depth API in v67 and can see that it delivers slightly better occlusion quality, though it’s still very rough. But v67 has another trick up its sleeve that’s even more important than the raw quality boost.
UploadVR experimenting with the Depth API and hand mesh occlusion in the v67 SDK.
The Depth API now has the option to exclude tracked hands from the depth map, masking them instead using a hand tracking mesh. Some developers have long been using hand tracking meshes to occlude just the hands, even on Quest Pro, but in v67 Meta provides an example showing how to do this alongside the Depth API to occlude everything else.
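For reference, the hand mesh side of this approach doesn’t require anything exotic. The sketch below shows the general idea using a standard Unity depth-only occluder material: assigned to the tracked hand mesh, it writes depth but no color, so virtual content behind the real hand disappears while passthrough shows through. This is not Meta’s sample code, and hooking it up to the actual hand mesh and the Depth API’s hand-exclusion option is left to the SDK.

```
// Minimal sketch of a depth-only "occluder" material for a tracked hand mesh.
// Assigning this to the hand mesh renderer hides virtual objects behind the
// real hand, while the hand itself stays visible via passthrough.
Shader "Examples/HandMeshOccluder"
{
    SubShader
    {
        // Render before opaque virtual content so it wins the depth test.
        Tags { "Queue" = "Geometry-10" }

        Pass
        {
            ZWrite On   // write the hand's depth into the depth buffer
            ColorMask 0 // but draw no color, leaving the passthrough image visible
        }
    }
}
```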
We tested this and found that it significantly improves the quality of hand occlusion, though it introduces visual inconsistencies around the wrist, where the system transitions to occlusion via the depth map.
In comparison, Apple Vision Pro does not generate a depth map at all. Instead it segments hands and arms, much as Zoom masks out users from their background, so dynamic occlusion is applied only to them. This means Apple’s headset offers significantly better occlusion quality for hands and arms, but it still shows idiosyncrasies such as objects held in the hand appearing behind virtual objects or becoming invisible in VR.
Quest developers can find the Depth API documentation for Unity here and for Unreal here.