How can I get a depth map (or point cloud) from my Quest 3's depth sensor?
I know there is a depth sensor on the Quest 3, and I want to use its depth data for further image research, but MQDH only supports recording video from the binocular cameras. 😞 I've checked the documentation for the Depth API, and the depth information seems to be wrapped in several layers of abstraction. Is there a way to get the depth map or a point cloud directly? Thank you very much! 🙏
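Once raw depth values are in hand, converting a depth map into a point cloud is standard pinhole-camera math, independent of how the SDK delivers the depth texture. A minimal NumPy sketch; the function name and the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative and would come from the device in practice:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (metres) into an N x 3 point cloud
    using the pinhole camera model. Pixels with zero depth are dropped."""
    h, w = depth.shape
    # u = column (x) index, v = row (y) index for every pixel.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Tiny example: a 2x2 depth map with every pixel 1 m away.
depth = np.ones((2, 2), dtype=np.float32)
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (4, 3)
```

The same loop runs per frame on real data; only the intrinsics and the depth source change.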
Private app with multiplayer

My team is looking to deploy a private app with multiplayer. Initially we were going to use App Lab and the private keys feature, but I'm not sure how that works now that App Lab has been merged into Meta's app store. Does App Lab still support private key sharing of an application? If not, our next plan is to try Meta Quest for Business to share the app privately, but I'm hoping we don't have to go that route. Any help is greatly appreciated. I wish Meta were clearer about the changes made to App Lab and how developers are supposed to handle them. Thanks!
Spatial anchor misalignment

Hello everyone, I recently created a demo of Quest 3 local spatial anchors using the Meta XR SDK. Everything worked when the anchors were created, but after the Quest 3's screen had been off for 20 minutes, reopening the demo showed a significant offset in the anchored space. Restarting the device didn't help. What causes this? Have any other developers encountered it?
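One way to pin the misalignment down before filing a bug is to quantify it: record the anchor's pose at creation time, then compare it against the pose reported after relocalisation. A hedged NumPy sketch of that comparison; the `pose_matrix` helper and the example poses are illustrative, not part of the Meta XR SDK:

```python
import numpy as np

def pose_matrix(position, yaw_deg=0.0):
    """Build a 4x4 rigid transform from a position and a yaw rotation."""
    t = np.radians(yaw_deg)
    m = np.eye(4)
    m[:3, :3] = [[np.cos(t), 0.0, np.sin(t)],
                 [0.0,       1.0, 0.0],
                 [-np.sin(t), 0.0, np.cos(t)]]
    m[:3, 3] = position
    return m

def anchor_drift(saved, reloaded):
    """Relative transform between two anchor poses; returns the
    translation offset magnitude in metres."""
    delta = np.linalg.inv(saved) @ reloaded
    return np.linalg.norm(delta[:3, 3])

# Anchor saved at one pose, relocalised 5 cm / 12 cm off.
saved = pose_matrix([1.0, 0.0, 2.0])
reloaded = pose_matrix([1.05, 0.0, 2.12])
print(round(anchor_drift(saved, reloaded), 3))  # 0.13
```

Logging this number across headset sleep/wake cycles would show whether the offset is a one-off relocalisation error or accumulating drift.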
When will we get object and image classification (Computer Vision) for Quest 3 and Quest Pro?

If I wanted to build a mixed reality app that detects when a certain brand logo is visible on a poster, coffee cup coaster, etc., and then allows spatial anchoring relative to that logo, there seems to be no way to achieve this today. Computer vision for Quest 3 and Quest Pro developers is limited to a very restricted list of "semantic classification" labels, all of them room-architecture and furniture objects (ceiling, floor, wall, door fixture, lamp, desk, etc.); the full list is here: https://developer.oculus.com/documentation/unity/unity-scene-supported-semantic-labels/

This also rules out any kind of AR/MR training experience where a physical-world object (e.g. a bulldozer operations panel) could be detected and spatial anchors placed relative to specific control panel features to provide dialogs and so on: all the things you'd expect from industrial AR applications. And this is not just useful for enterprise/industrial AR; image and object classification is a core AR/MR feature required to build compelling experiences. Without it, we just have novelty use cases.

Looking at the competition, ByteDance solves this by simply allowing camera feed access on the enterprise Pico 4; the retail version blocks it. I doubt Meta will provide camera feed access, as they are no longer selling enterprise-specific hardware and this would require a special firmware update to enable. Apple has provided camera access to iOS developers using ARKit for years; for Vision Pro's ARKit implementation they restrict camera feed access, but they still provide image classification/detection through their computer vision models, allowing developers to add their own images for recognition. Here's a page from their docs: https://developer.apple.com/documentation/visionos/tracking-images-in-3d-space

I am really surprised that Quest Pro has been out for almost a year and this sort of core AR/MR functionality is completely absent. With Quest 3 now released, more attention will be on AR/MR experiences, and Meta has great in-house AI technology, including computer vision models. They could build a closed pipeline where the raw image feed is not accessible but the classifier model is compiled, so that through a closed system the detection happens in Unity3D or Unreal apps. Regardless of how they achieve it, this is very important to future MR/AR apps. Without it, basically all you can do is simple spatial anchoring, which may be suitable for novelty games but is very restrictive and not reflective of the power of MR/AR.
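To make the ask concrete: the missing primitive is something like locating a known logo image inside a camera frame. A toy NumPy sketch of normalized cross-correlation, the classic non-learned form of template matching, on a tiny grayscale image; a real pipeline would use a trained detector and, crucially, access to the camera feed, which is exactly what the platform currently withholds:

```python
import numpy as np

def match_template(image, template):
    """Slide a template over a grayscale image and score each position
    with normalized cross-correlation; return the best (row, col)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * tnorm
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Hide a small bright "logo" in a dark frame and find it again.
img = np.zeros((8, 8))
logo = np.array([[1.0, 0.5], [0.5, 1.0]])
img[3:5, 4:6] = logo
print(match_template(img, logo))  # (3, 4)
```

The brute-force loop is O(n^4) and only for illustration; the point is that even this decades-old technique needs pixel access that Quest developers don't have.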
Digital Binocular using Quest 3 - Where to start?

Hello everyone, I'd like to attach two USB cameras to my Quest 3 and turn it into a digital binocular. Essentially I want the passthrough feature, just with better cameras. My software project could be boiled down to a single line of code: quest_display.draw(camera.getframe()); The cameras come with an Android USB driver, so I hope I'll eventually be able to connect them. However, I'm totally lost in the Quest development environment. As I don't need any kind of 3D calculation or VR user interaction, I left Unity and Unreal aside and downloaded Android Studio and the Mobile SDK: https://developer.oculus.com/documentation/native/android/book-intro/ After days of trying, I find the provided sample code terribly outdated: nothing compiles, and I'm not even sure it is compatible with Quest 3, as only Quest 2 and Pro are ever mentioned. So please advise me where to get a simple hello-world sample that actually runs on the Quest 3, and where I can finally place my 'one' line of code 🙂 Thank you so much, Gabi
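Whichever SDK route ends up working, the per-frame job behind that "one line of code" is compositing the two USB camera frames into a side-by-side stereo buffer, one half per eye. A NumPy sketch of just that compositing step; the frame sizes and the `compose_stereo` name are illustrative, not from any Quest API:

```python
import numpy as np

def compose_stereo(left_frame, right_frame):
    """Place two equally-sized camera frames side by side,
    left-eye half first, as a naive stereo display buffer."""
    assert left_frame.shape == right_frame.shape
    return np.concatenate([left_frame, right_frame], axis=1)

# Stand-in frames: one dark, one bright, 640x480 RGB each.
left = np.full((480, 640, 3), 10, dtype=np.uint8)
right = np.full((480, 640, 3), 200, dtype=np.uint8)
frame = compose_stereo(left, right)
print(frame.shape)  # (480, 1280, 3)
```

A real binocular would also need per-eye distortion correction and lens alignment, which is the part the headset runtime normally handles for you.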
Rendered texture passthrough blur

Hello, I saw that we can render passthrough over mesh surfaces using the OVR Passthrough Layer. I want to create a Gaussian frosted-glass effect; all that's left is to blur the incoming passthrough image. Do you know how I could achieve that? Kind regards, Alexandru Buzdugan
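Assuming the passthrough image can be sampled at all (which depends on what the runtime exposes to your shaders), a frosted-glass look is essentially a Gaussian blur, usually implemented as two separable 1-D passes rather than one 2-D convolution. A NumPy sketch of the separable blur itself, which maps directly onto a horizontal-then-vertical shader pass:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """1-D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(image, sigma=2.0):
    """Separable Gaussian blur on a 2-D grayscale image:
    one horizontal and one vertical 1-D pass."""
    radius = int(3 * sigma)
    k = gaussian_kernel(sigma, radius)
    # Replicate edges so the output keeps the input's size.
    padded = np.pad(image, radius, mode="edge")
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="valid"), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="valid"), 0, rows)

# Blur a single bright pixel; total brightness is preserved.
img = np.zeros((32, 32))
img[16, 16] = 1.0
out = gaussian_blur(img, sigma=2.0)
print(out.shape, round(out.sum(), 6))  # (32, 32) 1.0
```

In a shader you would do the same two passes on a render texture; the separable form costs 2·(2r+1) taps per pixel instead of (2r+1)².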