As of today, it is not possible to access the real-time passthrough video feed from the Quest 3 in Unity.
Many developers across the internet consider this a crucial feature: processing the feed would enable incredible mixed reality apps, such as object recognition using machine learning. Nevertheless, Meta exposes no way to read the real-time video captured by the Quest 3 in Unity.
Here is a Reddit post I made where people are requesting this feature.
Please, we need access to the real-time passthrough video in Unity or other engines.
You are completely right!
They justify the lack of access to the passthrough image with "privacy purposes", but that doesn't make sense. There's no meaningful difference between a smartphone app using the camera and a Quest 3 app using the passthrough camera.
They could always ask the user for permission to use the camera anyway.
"Note: Currently, Quest 3 can only scan Wi-Fi QR codes from the Meta Quest mobile app. Attempts to scan other QR codes will not work at this time."
It can scan... but they won't allow it for now! Hopefully this will change in an update.
Rather than simply state "I agree", let me restate the question I posed to /r/oculus so I can share why I think this feature is so important.
I'm looking to build an app as part of a Ph.D. research project studying human visual processing. To do this, I've written a Unity shader that performs a DVS filter (Dynamic Vision Sensor, also known as an event-based camera) effect. I'm very familiar with Unity itself, but I've never found a way to apply this shader to real-time vision on an MR headset.
This would essentially be like applying a shader to your own vision in MR. I've obtained a Quest 3 headset, and the passthrough quality is good enough to make this project viable. Unfortunately, there seems to be no way to operate on the frames at the pixel level in order to apply an image filter. I'm wondering if anyone has any creative ideas on how to implement this.
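To make the DVS effect concrete, here is a minimal CPU sketch in NumPy of the standard event-camera model: a pixel fires an ON or OFF event when its log intensity drifts past a threshold since the last event. This is purely illustrative of what the shader computes per pixel; the function name, threshold value, and array-based interface are my own assumptions, and on the headset the missing piece is precisely the passthrough frame to feed into it.

```python
import numpy as np

def dvs_events(prev_log, frame, threshold=0.2):
    """Emit DVS-style events where log intensity changed by more than threshold.

    prev_log: per-pixel log-intensity memory from the last event.
    frame:    grayscale image as a float array in [0, 1].
    Returns (events, updated memory); events are +1 (ON), -1 (OFF), 0 (none).
    """
    log_i = np.log(frame + 1e-6)           # log intensity; epsilon avoids log(0)
    diff = log_i - prev_log
    events = np.zeros_like(diff, dtype=np.int8)
    events[diff > threshold] = 1           # brightening past threshold: ON event
    events[diff < -threshold] = -1         # darkening past threshold: OFF event
    # Reset the memory only where an event fired, as a real DVS pixel does.
    new_log = np.where(events != 0, log_i, prev_log)
    return events, new_log
```

In a GPU shader the same per-pixel comparison would run in the fragment stage, with the memory held in a render texture that is ping-ponged between frames.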
Setting the research aside, you can imagine an even simpler scenario: a shader that converts the image to black and white. My question then becomes: how can the app display the front-facing camera feed to the user while applying this black-and-white shader?
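The black-and-white case is just a weighted sum per pixel, which a fragment shader would compute in one line. A NumPy sketch of that computation, using the standard Rec. 601 luma weights (the helper name is mine, for illustration):

```python
import numpy as np

def to_grayscale(rgb):
    """Convert an H x W x 3 RGB frame (floats in [0, 1]) to an H x W
    grayscale frame using Rec. 601 luma weights, the same weighted sum
    a black-and-white fragment shader applies to each pixel."""
    weights = np.array([0.299, 0.587, 0.114])  # weights sum to 1.0
    return rgb @ weights
```

The shader side is trivial; the blocker is that no camera texture is exposed to apply it to.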
As for computer vision research, access to the image arrays themselves is the only real starting point. I'm fully on board with notifying users whenever their frames are being accessed, whether through a constantly visible icon or some other mechanism.