Severe Passthrough Distortion After Casting – Tiny Controllers, Compressed Floor
Hi everyone, I'm experiencing a persistent passthrough distortion issue on my Quest 3 that started immediately after casting. Here's a summary of what's happening.

Symptoms:
- Controllers and hands appear tiny in passthrough.
- The floor grid looks squished/compressed and sits at the wrong height.
- I cannot properly set the floor level; boundary calibration seems broken.
- The issue is only present in passthrough/mixed reality.
- Everything looks normal in VR games (no tracking issues).

Troubleshooting I've already done:
- Cleared the Guardian boundary history.
- Fully rebooted the headset (multiple times).
- Re-set Guardian in a clean, well-lit area.
- Let the headset idle in passthrough to force recalibration.
- Factory reset via both the settings menu and the bootloader.
- Attempted a firmware update (it was already up to date).
- Verified the issue persists without casting.
- Confirmed tracking and controller alignment are perfect in VR.

How can I get a depth map (or point cloud) just from my Quest 3's depth sensor?
I know there is a depth sensor on the Quest 3, and I want to use its depth data for further image research, but MQDH only supports recording video from the binocular cameras. 😞 I've checked the documentation for the Depth API, and it seems the depth information has been wrapped in several layers of abstraction. Is there a way to get the depth map or a point cloud directly? Thank you very much! 🙏
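If raw depth frames ever become accessible, turning a depth image into a point cloud is standard pinhole back-projection. A minimal sketch, assuming a metric depth image and known camera intrinsics (fx, fy, cx, cy are placeholders here; Meta does not publish the depth sensor's intrinsics, so real values would have to come from the Depth API metadata or calibration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an Nx3 point cloud.

    Assumes a simple pinhole camera model; fx, fy, cx, cy are the
    intrinsics of the depth sensor (placeholder values below).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a 2x2 depth image reading 1 m everywhere
depth = np.ones((2, 2), dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=1.0, cy=1.0)
```

Each pixel (u, v) with depth z maps to an (x, y, z) point in the camera frame; a library such as Open3D could then visualise or mesh the resulting cloud.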
Native Mesh and Depth API

Hi there, yesterday Meta released two new APIs, in the form of Unity and Unreal plugins, for accessing depth map and scene mesh data on the Quest 3. Yet again, though, I feel native developers have been left in the dark. I also know WebXR has these features, given the talk at Connect this year. But what about people developing their own engines or games natively for VR/MR on Quest? I'm not sure whether it's possible to access the scene mesh via XR_FB_triangle_mesh in OpenXR, but there certainly doesn't appear to be any depth extension. Please can we have feature parity for OpenXR, or at least a release of these APIs for native developers? I'm not sure whether any devs at Meta read these posts. Thank you.
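Until the gap is closed, a native app can at least probe at startup which of the relevant extensions its runtime exposes (the strings come from xrEnumerateInstanceExtensionProperties in native code). A hedged sketch of that check: XR_FB_triangle_mesh and XR_FB_scene are real extension names, but the depth extension name below is a placeholder, since no native depth extension was documented at the time of this post.

```python
# Sketch of a feature-parity probe: given the extension names reported by the
# OpenXR runtime, decide which scene/depth features are natively usable.
SCENE_MESH_EXTS = {"XR_FB_triangle_mesh", "XR_FB_scene"}
DEPTH_EXTS = {"XR_META_environment_depth"}  # placeholder: no documented native depth extension

def native_feature_report(runtime_extensions):
    exts = set(runtime_extensions)
    return {
        "scene_mesh": bool(SCENE_MESH_EXTS & exts),  # scene mesh reachable via XR_FB_* extensions
        "depth": bool(DEPTH_EXTS & exts),            # true only if a depth extension ever ships
    }

# Roughly what a Quest 3 runtime reported at the time of this post
report = native_feature_report(["XR_FB_triangle_mesh", "XR_FB_scene", "XR_FB_passthrough"])
```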
Occlude virtual objects with the real world – Meta Quest Pro depth sensor

Hello, is it possible in Unity with the Meta Quest Pro to occlude virtual objects with the real world, using passthrough? I'm using the Scene Model feature of the Meta Quest Pro, but I'm having an issue: I place a plane over every wall, yet virtual objects are not occluded by the real world, so I can't see the real world and I lose all the immersion. Is it possible to occlude a virtual object placed behind a real object, so that my virtual objects always appear behind it? I want to virtually reconstruct the walls of my room, all with passthrough active.
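The usual approach for the occlusion question above is a depth test against the reconstructed scene: a virtual pixel is drawn only if it is closer to the camera than the real surface. In Unity this is typically done with an occluder material on the scene planes that writes depth but no colour; the underlying per-pixel logic is just a comparison, sketched here with NumPy (array shapes and the epsilon tolerance are illustrative):

```python
import numpy as np

def occlusion_mask(virtual_depth, real_depth, eps=0.01):
    """Per-pixel test: hide virtual pixels that lie behind real geometry.

    virtual_depth / real_depth: HxW arrays of metric depth along the view ray.
    Returns a boolean mask that is True where the virtual object stays visible.
    """
    return virtual_depth <= real_depth + eps

# Toy example: a real wall at 2 m; one virtual sample at 1.5 m (in front,
# visible) and one at 3 m (behind the wall, occluded)
real = np.full((1, 2), 2.0)
virt = np.array([[1.5, 3.0]])
mask = occlusion_mask(virt, real)
```

Anything the mask rules out is simply not rendered, letting the passthrough feed show through where the real wall is closer.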
External Depth Sensor – 3D Meshes

I am doing a project where I would like to mount a depth sensor, such as a 3D depth camera, sonar, or LiDAR, onto the HMD to detect obstacles in real time and represent them as point cloud data or 3D meshes on the display, so the wearer can navigate more safely. It is important that I can scan and represent the objects as meshes dynamically in real time, not statically scanned beforehand. The meshes would then change colour based on the user's distance from each object. Bear with me, as I am clueless about how to start and have never used sensors, created an algorithm for the Oculus Quest, used Unity, or anything of the sort. For example, would I need an Arduino/Raspberry Pi for the depth data, or can it somehow be handled by the Quest 2 using Unity (or Unreal; I don't know which would be best)? Additionally, any recommendations for a cheap depth sensor under €100? It should preferably have a FOV of 90 degrees or better, and since it goes on the participant's head it also needs to be light and compact. In the end I would like a result similar to the image below; however, that developer used the HTC Vive Pro and its built-in depth-sensing developer feature to create static meshes, not dynamic ones. (Link to his paper: https://ir.canterbury.ac.nz/handle/10092/16777 ) Any assistance, material, links, or videos to get started, or confirmation that this is possible, would be very much appreciated.
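The distance-based colouring step described above can be prototyped independently of any sensor choice. A minimal sketch, assuming metric distances and arbitrary near/far thresholds that would need tuning per user:

```python
def distance_to_colour(d, near=0.5, far=2.0):
    """Map user-to-obstacle distance (metres) to an RGB warning colour.

    Closer than `near` -> pure red, farther than `far` -> pure green,
    with a linear red-to-green blend in between. The thresholds are
    placeholders and would be tuned for the wearer.
    """
    t = max(0.0, min(1.0, (d - near) / (far - near)))  # 0 = at/inside near, 1 = at/beyond far
    return (1.0 - t, t, 0.0)  # (red, green, blue)
```

In Unity this value would typically be fed into a vertex colour or a material colour property on each dynamically generated mesh chunk every frame.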