## Conflicting Information in the Horizon OS SBC (Shader Binary Cache) Documentation?
The documentation on building a shader binary cache per platform (link) states: "Using this feature, once one user starts the app and manually builds the SBC, all other users with the same device and software (Horizon OS, graphics driver, and app) will be able to avoid the shader generation process by downloading a copy of a pre-computed SBC." However, later on the same page, it describes an automation that launches apps and performs scripted prewarming logic if requested: "The system automatically identifies and processes Oculus OS builds and app versions that require shader cache assets. It generates and uploads these assets to the store backend and automatically installs them during an app install or update."

Does this feature support both of those setups? If I am not scripting any custom warmup logic, will shader binary caches still be shared between users with identical setups? I.e., if I simply play the release candidate on the target OS version/hardware, will my SBC be automatically uploaded, or are SBCs only distributed when a scripted warmup sequence is present? Few details are provided about SBCs from other users being uploaded, so I'm curious whether this is an inaccuracy.

Thanks, and excited to see features like this in Horizon OS. It's very important for the first-time user experience.

## RenderDoc Meta Fork for Mac Bug (VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT)
## Description of Bug

This looks like the same underlying bug class as #3571, but on live Android launch/injection instead of when opening a capture. I have a Vulkan + OpenXR Android app running on Meta Quest 3. The app works normally on-device without RenderDoc. The crash happens only when I launch the app through RenderDoc Meta Fork using the Quest replay context and `Launch Application`.

The app uses app-owned transient MSAA color/depth attachments for the scene pass. On the normal non-RenderDoc path, the app can create those images with a lazily allocated memory type on the same headset and driver. When the same APK is launched through RenderDoc Meta Fork, the app reaches first-frame render setup and then fails while creating the transient MSAA color image. I added app-side Vulkan logging around `vkGetImageMemoryRequirements` and the memory-type selection path, and the injected run showed that the image's allowed memory types were changed so that the lazily allocated type was no longer permitted. The exact app-side log from the injected run was:

`ERROR: VULKAN: Failed to find matching image memory type. lazy=1 transient=1 debug_name=scene_msaa_color_image ... required_flags=0x11 memory_type_bits=0x1 ... available_types=#0:heap=0:flags=0x1:matches_required=0`

In other words:

- the image requires `VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT`
- under RenderDoc, `memoryTypeBits` becomes `0x1`
- only memory type `#0` is allowed
- memory type `#0` is only `DEVICE_LOCAL`
- the lazily allocated type is excluded
- image creation fails
- the app traps on first frame

I then created a dedicated RenderDoc-specific build of the same app that changes only one thing: it disables the lazy-allocation request for the app-owned transient MSAA scene attachments, while keeping the same RenderDoc launch path, same headset, same driver, same app logic, and same MSAA topology.
That RenderDoc-specific build launches successfully under RenderDoc Meta Fork. So the current evidence strongly suggests that RenderDoc changes Vulkan memory-type admissibility for this Android/Adreno workload in a way that excludes the valid lazily allocated memory type that is available and works without RenderDoc.

This seems to be the same root-cause family as #3571: https://github.com/baldurk/renderdoc/issues/3571 The difference is that my case happens during live Quest Android launch/injection, not when opening a capture for replay.

I cannot share the app publicly right now, but I can provide private logs and, if needed, a private APK or a reduced repro.

## Steps to reproduce

1. Use a debuggable Vulkan Android app on Quest 3 that creates app-owned transient MSAA scene attachments and requests `VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT` for them.
2. Confirm the app launches and renders normally on-device without RenderDoc.
3. Connect Quest 3 to RenderDoc Meta Fork over ADB.
4. Select the normal `Oculus Quest 3` replay context.
5. Use `Launch Application` to launch the APK from RenderDoc Meta Fork.
6. Let the app reach first-frame rendering.
7. The app crashes before normal rendering continues.
8. Check app-side logcat output around the Vulkan image creation path.
Observed result:

- under RenderDoc launch/injection, the transient MSAA color image gets `memoryTypeBits=0x1`
- the lazy memory type is excluded
- image creation fails on first frame

Expected result:

- RenderDoc should not change the image memory requirements in a way that excludes the lazily allocated memory type when the same app and same headset/driver work correctly without RenderDoc

Additional confirmation:

- a build that disables only the lazy-allocation request for those app-owned transient MSAA scene attachments launches successfully under RenderDoc on the same headset/driver
- this workaround is not the desired final app behavior, but it isolates the failure to RenderDoc's interaction with the lazy memory type requirement

I can privately share the exact logcat excerpt and tombstone if helpful.

## Environment

* RenderDoc version: RenderDoc Meta Fork v68.15 (forked from v1.41)
* Operating System: macOS host, Quest 3 device on Android 14 / Horizon OS
* Graphics API: Vulkan (OpenXR app on Android)

Additional details:

- Device: Meta Quest 3
- GPU: Adreno 740
- Driver seen by the app/RenderDoc path: Adreno 740, driver 512.837 patch 0x6
- App is debuggable/profileable and launches correctly without RenderDoc
- Crash happens only when launched through RenderDoc Meta Fork
- The RenderDoc-specific workaround build succeeds if the app stops requesting `VK_MEMORY_PROPERTY_LAZILY_ALLOCATED_BIT` for the transient scene MSAA attachments

## Cannot set Meta Quest Link as active OpenXR runtime normally
Hello! We are developing a PC-VR app in Unity with Meta Quest 3 at our company. We use the OpenXR plugin and build our code around the OpenXR standard in order to support more platforms in the future. However, the button to set Meta Quest Link as the active OpenXR runtime never works on our test PCs. We've tested multiple laptops and desktops at the company, and none of them works without us manually setting the path in the registry. We've tried starting the Link app as an admin, installing it as an admin, etc. On one machine the registry entry wasn't even added after installing the Meta Quest Link app, and we had to install "OpenXR for Windows Mixed Reality" to create it, rather than writing a script to create the key ("HKLM\Software\Khronos\OpenXR\1") ourselves. Could anyone help clarify what is going on here and how we can resolve it? Otherwise the setup process is basically impossible for our potential customers without in-depth technical support... Thank you!

References: https://community.khronos.org/t/openxr-directory-not-existing/111184
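For context, manually setting the path amounts to importing a `.reg` fragment like the one below. This is a sketch: the key and the `ActiveRuntime` value name follow the OpenXR loader's Windows runtime-discovery convention, and the JSON path shown is a typical default Meta Quest Link install location — verify it on your own machine before importing.

```reg
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Khronos\OpenXR\1]
"ActiveRuntime"="C:\\Program Files\\Oculus\\Support\\oculus-runtime\\oculus_openxr_64.json"
```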
## Correct Unity XR rig and scene management for Meta Quest applications

Hello everyone. I'm currently working on a game, and I'm encountering issues as per the title. I've gotten a bit further along with the project, but I'll cut the problem down to a specific and limited space. Let's say we have a Unity project with the XR Core, Interaction Toolkit, and Plugin Management packages installed, and 3 empty scenes, each with just a UI button that takes the player to the next scene in a loop, without breaking XR tracking or causing any other issues while the game runs on a Meta Quest 3 as a side-loaded app, using OpenXR as the plug-in provider.

1) What are the requirements, assumptions, and best practices for the XR rig in this situation? I've found that if the XR rig isn't part of the first scene, the video output freezes after the splash screen has finished, even if a rig is supposed to be instantiated by a manager script upon scene load.

2) Once this XR rig is instantiated, it must be marked as Don't Destroy On Load and maintained for the rest of the app's lifetime, correct? One cannot just have a "scene copy" of the XR rig in every scene? If the rig has to persist, what is the recommended pattern for managing it across scene transitions?

3) Should one be using individual scene loading, or additive loading and unloading of scenes? E.g., going from the main menu to the gameplay.

4) If this XR rig must be maintained, how should I handle parenting and unparenting it to things such as vehicles, and ensure that it is separated and preserved on scene exit?

Finally, yes, I'm aware of the Meta SDK, but for now, and for my own proper understanding of how things should work, I'd like to stick with baseline elements. So please don't tell me to "just use the Camera Rig Building Block"!

## Play Mode tests in Azure pipeline crash because of the Meta SDK
Hello,

Adding the Meta SDK makes my pipeline fail because of a crash. If I run the tests by opening Unity on the agent, they run without any issue. If I run the script on my personal PC, it also works.

This is the command line:

`"C:/Program Files/Unity/Hub/Editor/2022.3.71f1/Editor/Unity.exe" -batchmode -runTests -testPlatform PlayMode`

This is the crash log:

## How to disable controller's auto-sleep?
Hello, I'm working on a PCVR project that continually reads coordinates from Quest Pro controllers (via their integrated cameras), and everything works fine on my side. My issue is that a controller automatically turns off (auto-sleep) after a few minutes if no movement is detected, so reading the controller's coordinates breaks. How can I disable the controllers' auto-sleep? Thank you.

## DistanceGrabUseInteractable?
Hi,

This is definitely a pretty basic question relating to the Meta Interaction SDK for Unity. I've managed to get a HandGrabUseInteractable linked to a HandGrabInteractable via a SecondaryInteractionFilter. However, I also have a DistanceHandGrabInteractable on that object, linked to its own HandGrabUseInteractable that points at the same delegate as the first.

When I grab the object without distance grab, my script's BeginUse, EndUse, and ComputeUseStrength are called properly. When I grab at a distance, they are not, as far as I can tell. I am working on a Mac, and the simulator was not working with this scenario at all, so I have to deploy the APK to my Quest each time I want to test, which takes away a bit of my debugging capability.

I thought perhaps this was an issue with having multiple HandGrabUseInteractables, but when I removed the duplicate and gave the object only a DistanceHandGrabInteractable and one HandGrabUseInteractable, it still did not work. I also wondered whether HandGrabUseInteractable supports only HandGrabInteractable and not the other grab interactable types, but peeking at the package code and reading the SecondaryInteractionFilter docs suggested either HandGrabInteractable or DistanceHandGrabInteractable should work, as long as all references are wired correctly.

What am I doing wrong? How can I link my DistanceHandGrabInteractable to a HandGrabUseInteractable? Will I need to write my own DistanceGrabUseInteractable script, perhaps using the existing HandGrabUseInteractable as a base?

Thanks for the help

## Recentering gesture for impaired users
I am building a hand-control-based VR app for users with impaired mobility. I have two challenges related to the pinch-and-hold gesture for recentering. For some users the gesture is exceedingly hard or impossible to perform (a false negative). For others, it is sometimes triggered accidentally (a false positive). I understand Meta's desire to keep this gesture universal across all third-party apps; unfortunately, it is not universally viable for all users. I need a solution to this problem or my app will never ship.

I am prepared to roll my own recentering system that manipulates the in-game view in response to a hardware "easy button" press. However, to implement this I still need to know when an actual pinch-and-hold gesture is performed, so that I can properly recalibrate my own system. Unfortunately, I have not found any functioning API or telemetry that might hint that this has happened. I have tried several OpenXR and Meta Core APIs, but they all seem to be no-ops on the Quest 3.

Can anyone recommend a solution? I'm using Unity 6.3, OpenXR, and the Meta Core SDK. I do not depend on any other Meta SDKs but am willing to add them if they solve this problem.

## Accessibility Feature Request: Conversation Focus Mode for Ray-Ban Meta Display Glasses
Hi everyone! I'm a Ray-Ban Meta display glasses user who is hard of hearing and wears hearing aids daily. I'd love to see a conversation focus mode added that prioritizes voices directly in front of the wearer and reduces background noise. In busy environments, this would make a big difference for hearing-aid users and others who rely on clearer speech in real time.

If this type of accessibility feature is ever developed, I would absolutely love to have it added to my glasses, and I'd be happy to provide feedback or participate in any beta or user-testing opportunities. I've also submitted this through support channels, but wanted to share here in case the team is gathering feedback.