Can I access Quest 3's front camera?

Honored Guest

I want to make an MR app using computer vision.

But I don't know how to access the headset's front camera.

I want to get the front camera's images for processing.

Can I access Quest 3's front camera?



The quick answer is "no". For privacy reasons, you can't directly access the images from the passthrough cameras or the depth sensor to use with non-Meta libraries or custom programming. However, assuming you are using Unity, the Meta SDKs offer MR capabilities through the Passthrough API. Basically, you add some components and tell the SDK to render the passthrough feed behind (or over) your VR-rendered objects, and Meta does it behind the scenes for you without giving you access to the actual image/video/depth data. Here is a link to the documentation page for the passthrough feature in Unity:

Get Started with Passthrough | Oculus Developers
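To make the "add some components" part concrete, here is a minimal sketch of what that looks like in Unity with the Meta XR SDK (Oculus Integration). It assumes an OVRCameraRig in the scene with Passthrough enabled under the OVRManager settings; the component and property names (`OVRPassthroughLayer`, `OVROverlay.OverlayType`) reflect the SDK as I know it, so check the current docs if they've changed:

```csharp
// Sketch only: Unity + Meta XR SDK (Oculus Integration).
// Assumes an OVRCameraRig in the scene and Passthrough enabled
// in the OVRManager Quest Features settings.
using UnityEngine;

public class PassthroughUnderlay : MonoBehaviour
{
    void Start()
    {
        // Add a passthrough layer that renders BEHIND your VR content.
        // Use OverlayType.Overlay instead to render it on top.
        var layer = gameObject.AddComponent<OVRPassthroughLayer>();
        layer.overlayType = OVROverlay.OverlayType.Underlay;

        // Optional styling: the SDK lets you adjust opacity and edges,
        // but it never hands you the raw camera frames.
        layer.textureOpacity = 1.0f;
        layer.edgeRenderingEnabled = false;
    }
}
```

Note how the whole API surface is compositing controls (opacity, edge rendering, color mapping); there is no call anywhere that returns pixel data, which is exactly the restriction being discussed in this thread.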

If you aren't using Unity, you should be able to navigate the docs and learn how to use it in other programming environments. I'm happy to give you more specifics if you still have questions.


To me this is a very important feature request, hope meta gives us access soon.


The privacy argument is not valid.

On Android-based systems (such as the one the Quest 3 runs), the user has to grant permission for a specific app to use the cameras, so this argument flies out of the window.
Now I'd love to know the real reason why this essential feature is not yet implemented.

I don't disagree. As much as I'd love access to the camera/depth data, I've been programming in Unity for Oculus for almost 10 years, and they haven't given us access yet. They feel strongly about it for some reason, and I've pretty much given up hope. 😉


Can we get a reply from Meta about this? This doesn't make any sense to me. I would also like to start working on computer vision applications using the Quest 3. That's one reason I bought one...

We have a device that can do spatial tracking. What if I wanted to track against irl objects to power an application? This has many uses.

1. Track specific pieces on a VR tabletop game, showing information/animations on top of them.

2. Ask the Quest to 'track' an object you are holding and show you where it is afterwards (a universal 'find this for me' app).


Is there an obvious alternative to these methods? Can we access the display data perhaps during a passthrough and use that for training computer vision models?

Hello @butterfly557 !


The biggest and most supported post regarding this issue is here.

Many people have had the same idea, and we need even more support in order for Meta to hear us.


Thanks and have a nice day!

Hello @ginestopo!


Thank you for pointing me in the right direction! I have replied and 'kudo'ed that post so we can have a louder voice.

@butterfly557 Thank you very much! I hope they hear us! Wish you success! 😀

Perhaps they could link it with the 'record' feature, similar to how you have to grant a browser access to your camera/mic or location when a website wants to use it?
That way people would know when it's recording (outer light, plus an inner record indicator as part of the OS experience). The camera data would only be streamed through that channel, readable directly as a video stream via the SDK while recording is engaged, but otherwise unavailable for anything other than passthrough and the native environment AR/mapping capabilities.