Passthrough camera data is not available on Quest 3 when developing with Unity


Basically, it is not possible to access the real-time passthrough video from the Quest 3 in Unity.

Many developers across the internet consider this a crucial feature: processing the live feed would enable incredible mixed reality apps with machine-learning-based object recognition. Nevertheless, it is not possible to access the real-time video captured by the Quest 3 in Unity.


Here is a Reddit post I made where people are requesting this feature.


Please, we need access to the real-time passthrough video in Unity and other engines.


Meta has already made its public statement: due to privacy concerns, it's not going to happen. Just imagine the horror stories anti-AR people would tell each other, something like "Meta's new feature allows developers to spy through our own eyes, shame the devil!" You see? Because of that alone, it's not going to happen officially. In my own opinion, it's too risky for them.

@bektasesref You are right. Nevertheless, a headset using passthrough is no different from your smartphone camera pointing at your face or your surroundings every time you use it. IMHO a pop-up asking for user permission is more than enough.

I think you're confusing something here. "Spying on the eyes" is already possible, at least with a Quest Pro: eye tracking is open for developers to use. Video passthrough uses the front-facing cameras, and even that, as mentioned so many times before, is fairly common with phones. Meta itself even sells glasses that do nothing but film the environment, with only a small white indicator light.

And they will open this up to the public at some point, probably only through certain APIs, but as usual it will take them ages to do so.


As of now, I think the only immediate solution is to add another camera that you can access: an external webcam, a sensor, or (worst case) a smartphone mounted to the headset.

I'm definitely considering it; I'm just not sure where to start.

+1 again. Running CV on the camera feed would allow so many cool applications: world-anchoring AR content to some kind of marker, for example.
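To make the marker-anchoring idea concrete, here is a toy, library-free sketch of the kind of processing people want to run on passthrough frames: threshold a grayscale frame and take the centroid of the bright region as the marker position. All names here (`find_marker_centroid`, the synthetic frame) are made up for illustration; a real pipeline would use something like OpenCV's ArUco module on actual camera frames, which is exactly what the Quest 3 doesn't expose today.

```python
def find_marker_centroid(frame, threshold=200):
    """Locate a bright fiducial in a grayscale frame (2D list of 0-255 values).

    Returns the (row, col) centroid of all pixels above `threshold`,
    or None if no marker-like region is found.
    """
    pts = [(r, c)
           for r, row in enumerate(frame)
           for c, v in enumerate(row)
           if v > threshold]
    if not pts:
        return None
    return (sum(p[0] for p in pts) / len(pts),
            sum(p[1] for p in pts) / len(pts))

# Synthetic 100x100 "camera frame" with a bright 10x10 marker at rows/cols 40..49.
frame = [[0] * 100 for _ in range(100)]
for r in range(40, 50):
    for c in range(40, 50):
        frame[r][c] = 255

print(find_marker_centroid(frame))  # (44.5, 44.5)
```

Once you have a marker position per frame, anchoring AR content is "just" mapping that image-space point into world space using the headset's pose, which is why access to the raw feed is the missing piece.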


Apple just announced visionOS 2.0 at WWDC 2024, which gives enterprise users/apps raw camera access and other similar features (note that enterprise developers must first send Apple a request to access the new API). For an app to use the camera feed, the user must grant the required permission (i.e. camera access), similar to how mobile apps today request additional access on user devices.


From Apple's website:


Enterprise APIs

"New APIs for visionOS grant enhanced sensor access and increased control, so you can create more powerful enterprise solutions and spatial experiences. Access the main camera, spatial barcode and QR code scanning, the Apple Neural Engine, and more."


Video Explaining the new camera access features:


API Documentation:


This is a great solution for industry uses. Maybe this will come to consumer apps in the future as well, but for now it is already a great way to get started working on AR applications. A lot of this resembles ideas from the HoloLens, Magic Leap, and Vuzix devices, which is great, as those have seen good uses in the past. Hopefully this will push Meta to do something similar for their headsets, even if only for the Quest Pro or a similarly industry-tailored headset, which may still require an enterprise request.

@Ivan_aa Apple made a solid step forward, and Meta has to do the same if they don't want to fall behind in the XR race.

I hope it is only a matter of time before Meta allows raw camera access on every device, not just the Quest Pro.

For now, the Apple APIs are only meant for enterprise products and require prior authorization. In Meta's case, I would like everyone to have access.


@Ivan_aa Thanks for the heads up!

I really have to start digging deeper and working with Vision Pro now. We are getting so many requests from industrial customers about XR use cases with Quest 3 but they all rely on having camera access for computer vision tasks. Now we can at least have another option besides HoloLens 2 and Magic Leap 2.

I hope Meta follows Apple on this. Having those controllers and the precision of control that comes with them would make some of those industrial use cases much better than relying on hand tracking.


@ginestopo, I agree, it would be great for Meta to give all developers access to similar sensor data and tools, but companies probably want to be cautious and have some examples of good, safe uses of camera access to show the general public that it is worth giving apps this access, since it will bring a lot of useful benefits. A lot of consumers are probably worried about privacy, which is valid, but unfortunate for us developers. I think this is one of the many reasons why Apple is first giving these features to enterprise developers. I'm guessing that research/university access will probably follow, maybe similar to how HoloLens has a Research Mode, before they give every developer access to the raw camera feed.

@berglte, you're welcome.

Apple released a video on using object tracking with visionOS, which I believe does not require an Enterprise API, so all developers should have access. It uses AI/ML for tracking, so developers don't get the raw camera feed; it's an interesting way around the privacy issue of raw camera access, specifically for 3D object tracking -

Apple's tracking is relatively simple for now (the object needs to be static, rigid, and non-symmetrical), and this has been done with other APIs in the past (OpenCV, ARKit, Metaio - bought by Apple, Vuforia, Wikitude, etc.). Hopefully it will improve later to handle dynamic and non-rigid/deformable objects.
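The "non-symmetrical" requirement isn't arbitrary: a symmetric object looks identical from several orientations, so a tracker cannot decide which pose it is seeing. A tiny 2D sketch (helper names `rotated` and `same_point_set` are made up for illustration, not any real API) shows that a square's corners map onto themselves under a 90-degree rotation, while breaking the symmetry makes the poses distinguishable:

```python
from math import cos, sin, radians

def rotated(points, degrees):
    """Rotate 2D points about the origin."""
    t = radians(degrees)
    return [(x * cos(t) - y * sin(t), x * sin(t) + y * cos(t)) for x, y in points]

def same_point_set(a, b, ndigits=6):
    """True if two point sets are equal, ignoring order (rounded for float noise)."""
    norm = lambda pts: sorted((round(x, ndigits), round(y, ndigits)) for x, y in pts)
    return norm(a) == norm(b)

# A square is invariant under a 90-degree rotation: two poses look identical.
square = [(1, 1), (-1, 1), (-1, -1), (1, -1)]
print(same_point_set(square, rotated(square, 90)))  # True -> pose is ambiguous

# Moving one corner breaks the symmetry, so the pose becomes unambiguous.
asym = [(2, 1), (-1, 1), (-1, -1), (1, -1)]
print(same_point_set(asym, rotated(asym, 90)))      # False
```

The same logic applies in 3D with feature points on the object's surface, which is presumably why symmetric objects are excluded for now.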


If anyone is curious about how object detection and tracking work, here are some guides, algorithms, and papers on the subject (you can find many more online, as this topic has been studied for many years - since the 1970s, I believe).

1. 3D Object Tracking GitHub -

2. Papers With Code -

3. Microsoft talk from 2012 on how object tracking works for augmented reality -

4. OpenCV and Object Detection and Tracking (there are many more tutorials available):

5. Object Tracking in Computer Vision (2024 Guide) -

6. Intel RealSense (types of tracking) -
7. Microsoft Kinect Object Detection and Recognition -
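Conceptually, most of the trackers in the resources above boil down to two steps: detect objects in each frame, then associate the new detections with existing tracks so each object keeps a stable ID. Here is a minimal, stdlib-only sketch of that association step (the class name `CentroidTracker` and its parameters are made up for illustration); production systems typically add motion prediction (e.g. Kalman filters) and optimal assignment instead of this greedy nearest-neighbor matching:

```python
from math import dist

class CentroidTracker:
    """Toy multi-object tracker: greedily match each detection in the new
    frame to the nearest existing track; start a new track when nothing
    is within max_distance."""

    def __init__(self, max_distance=50.0):
        self.max_distance = max_distance
        self.next_id = 0
        self.tracks = {}  # track id -> last known (x, y) centroid

    def update(self, detections):
        """detections: iterable of (x, y) centroids found in the current frame.
        Returns {track_id: (x, y)} for this frame."""
        result = {}
        unmatched = set(self.tracks)  # tracks not yet claimed this frame
        for x, y in detections:
            det = (float(x), float(y))
            best_id, best_d = None, self.max_distance
            for tid in unmatched:
                d = dist(det, self.tracks[tid])
                if d < best_d:
                    best_id, best_d = tid, d
            if best_id is None:  # nothing close enough: a new object entered the scene
                best_id = self.next_id
                self.next_id += 1
            else:
                unmatched.discard(best_id)
            self.tracks[best_id] = det
            result[best_id] = det
        return result

tracker = CentroidTracker()
print(tracker.update([(10, 10), (100, 100)]))  # {0: (10.0, 10.0), 1: (100.0, 100.0)}
print(tracker.update([(12, 11), (98, 103)]))   # same IDs follow the moved objects
```

The detection step is exactly where raw camera access is needed, which is why none of this can currently run on-device against the Quest 3's passthrough feed.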

Hopefully this is another push for Meta to step up and release some developer access to the camera feed, or to use one of their AI tools to track pre-trained objects.

+1 on this. I have written many posts about how we should be able to access the cameras with permissions, just like you would on a smartphone. There could be options to make the feed accessible only to apps running on-device, or to allow the data to be sent to a PC over Link/AirLink, etc. Or even some enterprise/developer mode that requires manual steps to enable the API, removing concerns about users opting in by accident.

I don't really see any real excuse now: the market is flooded with iPhones and Android devices, both of which have native camera APIs, and I've never heard of a case where someone developed an app that blackmailed a user or anything similar using their camera data.