Feature 'depth-sensing' is not permitted by permissions policy
Hello, I'm using a Quest Pro with WebXR. It works well, but I cannot find a way to access depth sensing. When initializing an "immersive-ar" session with the required feature "depth-sensing", I get this error:

'XRSystem': The session request contains invalid requiredFeatures and could not be fulfilled. Feature 'depth-sensing' is not permitted by permissions policy

According to the WebXR spec (https://www.w3.org/TR/webxr/#feature-descriptor): "The depth sensing feature is subject to feature policy and requires "xr-spatial-tracking" policy to be allowed on the requesting document's origin."

Yet when I open the remote debugger, under the "Application" tab > Frames > top, I see:

Permissions Policy Allowed Features: accelerometer, autoplay, (...), xr-spatial-tracking

Is depth sensing available on the Quest Pro through WebXR? If so, what am I doing wrong?

Thank you!
Matt
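PS: for reference, here is roughly what the session request looks like on my side (a minimal sketch; the depthSensing dictionary and the preference values follow the WebXR Depth Sensing Module, I don't know which ones the Quest browser actually expects):

async function startAr() {
  // Ask for an immersive AR session with depth sensing as a required feature.
  // The depthSensing dictionary comes from the WebXR Depth Sensing Module;
  // the browser picks one usage and one data format from these preference lists.
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['depth-sensing'],
    depthSensing: {
      usagePreference: ['cpu-optimized', 'gpu-optimized'],
      dataFormatPreference: ['luminance-alpha', 'float32'],
    },
  });
  return session;
}

// Per the spec, the document also needs xr-spatial-tracking allowed by
// permissions policy, e.g. via a response header on the page:
// Permissions-Policy: xr-spatial-tracking=(self)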

How to overlay data in any VR application? - (like guardian does)

Hi. I would like to overlay an object, for example a box, at a specific place on the Quest 2 display so that it renders on top of any VR application, not just in passthrough. This would be my first step toward eventually rendering real-time point cloud data from an external 3D lidar sensor through Unity, so that the lidar can detect obstacles and render them in the headset, letting the user navigate around obstacles without leaving the application they are using. Basically, I'm trying to create my own Guardian system and Space Sense. Any links and assistance would be appreciated, as I can't find anything through Google or YouTube.

External Depth sensor - 3d meshes

I am doing a project where I would like to mount a depth sensor, such as a 3D depth camera, sonar, or lidar, onto the HMD to detect obstacles in real time and represent them as point cloud data or 3D meshes on the display, so the wearer can navigate more safely. It is important that I can scan and represent the objects as meshes dynamically, in real time, rather than from a static scan made beforehand. I would then have the meshes change colour based on the user's distance from each object.

Bear with me, as I am clueless about how to start: I have never used sensors, written an algorithm for the Oculus Quest, used Unity, or anything of the sort. For example, would I need an Arduino or Raspberry Pi to handle the depth data, or can it somehow be handled by the Quest 2 itself using Unity (or Unreal; I don't know which would be best)? Additionally, are there any recommendations for a cheap depth sensor under €100? It should preferably have a FOV of 90 degrees or better, and since it goes on the participant's head it also needs to be light and compact.

In the end I would like a result similar to the image below. However, that developer used an HTC Vive Pro and its built-in depth sensing developer feature to create static meshes, not dynamic ones. (Link to his paper: https://ir.canterbury.ac.nz/handle/10092/16777) Any assistance, material, links, and videos to help get started, or to confirm this is possible, would be very much appreciated.
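From the little reading I have done so far, converting each depth frame into a point cloud seems to come down to unprojecting every pixel with the sensor's intrinsics, roughly like the sketch below (a generic pinhole-camera example with made-up intrinsic values, not code for any particular sensor or engine):

// Convert a depth frame (row-major array of metres) into 3D points in camera space
// using a pinhole camera model. fx, fy, cx, cy are example intrinsics only;
// a real sensor ships with its own calibration values.
function depthToPointCloud(depth, width, height, fx, fy, cx, cy) {
  const points = [];
  for (let v = 0; v < height; v++) {
    for (let u = 0; u < width; u++) {
      const z = depth[v * width + u];
      if (!z || z <= 0) continue;        // skip invalid / missing depth pixels
      const x = ((u - cx) / fx) * z;     // unproject pixel (u, v) at depth z
      const y = ((v - cy) / fy) * z;
      points.push([x, y, z]);
    }
  }
  return points;
}

// Example with made-up numbers: a 320x240 frame where everything is 1 m away.
const depth = new Float32Array(320 * 240).fill(1.0);
const cloud = depthToPointCloud(depth, 320, 240, 260, 260, 160, 120);

If that is the right general idea, my question is mainly about where such processing should run (on the Quest itself, or on a Raspberry Pi streaming the points over the network) and how to turn the points into meshes fast enough.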