Passthrough camera data is not available on Quest 3 developing with Unity

ginestopo
Protege

Basically, it is not possible to access the real-time passthrough video from the Quest 3 in Unity.

Many devs across the internet consider this a crucial feature: processing the camera feed would enable incredible mixed reality apps with machine-learning object recognition. Nevertheless, there is no way to access the real-time video captured by the Quest 3 in Unity.

 

Here is a Reddit post I made where people are requesting this feature.

 

Please, we need access to real time passthrough video in Unity or other engines.

92 REPLIES

Ivan_aa
Protege

Meta released a C++ Framework For Computer Vision (CV) And Augmented Reality (AR) Applications, which includes support for multiple devices, including the Quest - https://facebookresearch.github.io/ocean/

Unfortunately, for the Quest it seems to require attaching an external USB camera instead of using the onboard ones: https://facebookresearch.github.io/ocean/docs/demoapps/questapps/externalcamera/#features

Hopefully this is at least a step toward developers getting camera access on the Quest, which, according to Boz in a recent interview (July 2024), will depend on when consumers are OK with the idea of the Quest recording/capturing what the headset's cameras can see (https://www.matthewball.co/all/bozinterview2024). I'm hoping this won't take until the Quest 4 and Quest Pro 2, which are rumored for release in 2026 and 2027, respectively (https://www.roadtovr.com/meta-quest-4-quest-pro-2-leak-the-information/).

If I understood correctly, at least for now this allows developers to work with cameras on the Quest without having to feed the video through a computer; all of the processing can be done on the headset itself. If so, it will help developers be ready for when AR via the onboard cameras is natively and fully allowed on the Quest. Those developing now can get a head start, and researchers and companies can use it for internal projects (ones that don't need to be published on the Meta Quest/Horizon store). Hopefully, when Meta someday allows access to its onboard cameras, it will also allow access to the depth sensor and point cloud data, giving developers more than what a standard external camera can provide. It would be interesting to know whether you can currently use an external camera with a depth sensor (like one of the Intel RealSense cameras with an IR projector), but I don't think that is supported in the framework right now, or will be anytime soon (though I hope to be wrong).

Having the Ocean framework be open source also lets Meta keep improving it based on developer feedback and catch up to Apple's and Google's AR frameworks, as well as those from other companies (Vuforia, Wikitude, 8th Wall, etc.). That way, when camera access is officially launched on the Quest, they will have a version that works well and is feature-rich compared to competitors.

 

I doubt it (the keynote seems to be AI-focused), but maybe we will hear some news about this from Mark at SIGGRAPH, taking place at the end of this month: https://s2024.siggraph.org/program/keynote-presentations/

CJWolff
Honored Guest

This strikes me as odd. We need access to real-time passthrough video. I mean, I can ask my Meta Ray-Bans "What am I looking at?", so why can't I do that with the Quest?

Because Meta wants to control what is done with raw video frames. They do the same with the Ray-Ban smart glasses, on which you cannot install your own computer vision app; only Meta's apps can process raw video frames. For that reason, I have no developer interest in the Meta Ray-Ban smart glasses.

Interesting loophole, for as long as it lasts, with thanks to Michael Wiesing for making me aware of it:

A developer figured out how to access Quest 3's passthrough cameras
https://www.uploadvr.com/quest-3-raw-camera-access-workaround-found/
"Uniquely, you're casting from the headset to an app inside the same headset, not to a different device."

That's a smart way of getting access to the device's cameras! Thank you very much for sharing!

It is a pity people have to do these workarounds just to get access to raw image data.

That's definitely an upgrade over the versions of ScreenToTexture that have been floating around for a while now, since it stays on the Quest. It still doesn't solve the problem that digital objects overlay the physical ones: a QR code covered by a virtual object is not recognized anymore... But if there is a workaround for that, it's better than the other stuff. Still, it's embarrassing for Meta that we have to do this.

I think you can set the passthrough to be an overlay on top of everything when you want. That way you can send a frame off for image recognition and then go back to the experience. But yeah... too much hassle...
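
If anyone wants to try that in Unity with the Meta XR SDK, here is a minimal sketch of the idea. I'm assuming OVRPassthroughLayer exposes its Placement setting as an overlayType field; that name is from memory, so double-check it against your installed SDK version:

```csharp
// Hedged sketch: briefly force passthrough to composite on top of all virtual
// content while a frame is grabbed for image recognition, then restore it.
// ASSUMPTION: the Meta XR SDK's OVRPassthroughLayer exposes its Placement
// setting as 'overlayType' (verify against your SDK version).
using System.Collections;
using UnityEngine;

public class PassthroughScanMode : MonoBehaviour
{
    public OVRPassthroughLayer passthroughLayer;

    public IEnumerator CaptureWithUnobstructedPassthrough(System.Action onCapture)
    {
        var previous = passthroughLayer.overlayType;

        // Overlay = passthrough drawn over virtual objects, so a captured frame
        // shows the real scene (e.g. a QR code) without digital occluders.
        passthroughLayer.overlayType = OVROverlay.OverlayType.Overlay;
        yield return new WaitForEndOfFrame();

        onCapture?.Invoke(); // e.g. trigger the screen-capture / recognition pass

        // Restore normal mixed-reality compositing (virtual content on top).
        passthroughLayer.overlayType = previous;
    }
}
```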

Unfortunately, if it's Meta's policy to not allow raw camera access within their apps, then it won't be long before they patch this method, especially given that Michael Wiesing is sharing exactly how he did it. That means it's no long-term solution. I certainly wish him the best of luck with his phone call to Zuckerberg though 🫡

 

In the meantime, my personal research solution has been to build my own XR headset:

[Photos of the homemade XR headset]

It's not quite as pretty as a Quest or Vision Pro, but once you have your own frames, you can do whatever you would like with them; that's worth more to me than their OS.


metalex201
Protege

Would anyone be interested in creating a group chat where we could figure out a loophole for this? I have researched several parts of the Quest OS and I think we could get access to the camera sensor data, or at least the passthrough image (i.e. the stitched-together image from the cameras that is visible to the user). I have found some seemingly useful info, and I think that by using elevated privileges from ADB together with Shizuku (an Android app that lets apps run code at a sort-of system level), there is a possibility of developing a community-made API that would make camera access possible. I have been trying for days on end to figure this out, but I think it will require a group effort, since I lack some knowledge of how Android works as a system and of writing system code for it.
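
As a first, unprivileged probe, here is a rough Unity-side sketch of asking Android's Camera2 service what it will enumerate for a normal app on the headset; the elevated Shizuku/ADB route would build on whatever that turns up (on current firmware I'd expect an empty list or a refusal):

```csharp
// Hedged sketch: list whatever camera IDs Android's Camera2 service is willing
// to report to an ordinary (non-elevated) Unity app on the headset. On Quest
// this is expected to come back empty or access-denied, which is exactly the
// gap the elevated Shizuku/ADB approach above is trying to work around.
using UnityEngine;

public class CameraServiceProbe : MonoBehaviour
{
    void Start()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        using (var unityPlayer = new AndroidJavaClass("com.unity3d.player.UnityPlayer"))
        {
            AndroidJavaObject activity =
                unityPlayer.GetStatic<AndroidJavaObject>("currentActivity");
            AndroidJavaObject cameraManager =
                activity.Call<AndroidJavaObject>("getSystemService", "camera");

            try
            {
                string[] cameraIds = cameraManager.Call<string[]>("getCameraIdList");
                Debug.Log($"Camera2 reports {cameraIds.Length} camera(s): " +
                          string.Join(", ", cameraIds));
            }
            catch (AndroidJavaException e)
            {
                // Expected on current Quest firmware if the service refuses the query.
                Debug.LogWarning("Camera enumeration failed: " + e.Message);
            }
        }
#endif
    }
}
```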

Hi @metalex201. That's funny - I was about to direct you to a separate forum post where a user had identified several of the services associated with the Quest 3's cameras, and also intended to use Shizuku from there, until I realized you were the author of said post.

I'm not familiar with Shizuku, so I'm not sure what type of privileges it can grant. I've done a little bit of searching. Does it modify your headset itself or elevate the permissions associated with individual apps?