11-20-2023 02:30 PM
Basically, it is not possible to access the real-time passthrough video from the Quest 3 in Unity.
Many developers across the internet consider this a crucial feature: being able to process the camera feed would enable incredible mixed reality apps with object recognition using machine learning. Nevertheless, the real-time video captured by the Quest 3 cannot be accessed in Unity.
Here is a Reddit post I made where people are requesting this feature.
Please, we need access to the real-time passthrough video in Unity or other engines.
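To make concrete what we are asking for, here is a minimal Unity C# sketch of how a device camera feed is normally grabbed (via WebCamTexture). On Quest, the passthrough cameras are not exposed to applications, so in my experience the device list comes back empty and there are no frames to feed into any object-recognition pipeline. The class name and log messages are just placeholders, not an official API.

```csharp
using UnityEngine;

// Illustrative probe: the standard Unity way to grab a device camera feed.
// On Quest 2/3 the passthrough cameras are not exposed to apps, so there is
// nothing to stream from.
public class PassthroughCameraProbe : MonoBehaviour
{
    void Start()
    {
        WebCamDevice[] devices = WebCamTexture.devices;
        if (devices.Length == 0)
        {
            // This is what we currently hit on Quest: no camera devices at all.
            Debug.Log("No camera devices exposed to this app; passthrough frames are inaccessible.");
            return;
        }

        // On a phone or PC this would start streaming frames that could then
        // be handed to an object-recognition / machine learning pipeline.
        var camTexture = new WebCamTexture(devices[0].name);
        camTexture.Play();
        GetComponent<Renderer>().material.mainTexture = camTexture;
    }
}
```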
03-14-2024 01:57 AM
I totally agree. It is mandatory for building mixed reality applications and synchronizing virtual objects with real ones, i.e. augmented reality!
03-22-2024 12:34 AM
I absolutely agree
03-26-2024 05:23 AM
Here is an interesting post on the need to open up VR camera access to developers, and possible ways companies could provide this while still focusing on privacy. The post mentions that the Pico 4 Enterprise headset provides camera access on request, which is an interesting solution that Meta could adopt for their headsets - https://skarredghost.com/2024/03/20/camera-access-mixed-reality/
03-26-2024 05:30 AM
Thank you for sharing. Sadly, Pico is pretty much dead, since the Pico 5 got cancelled...
03-27-2024 01:24 AM
---- Update on this topic ----
About a week ago, Meta announced research that Reality Labs has been working on for some time now. It is probably the most interesting research since the launch of the product, because it focuses on understanding the user's space with AI, presumably (I assume) to be exposed through an API.
This would allow developers to access a layer of awareness about the user's surroundings.
Nevertheless, this does NOT solve the main problem raised in this topic, which is the lack of access to raw camera data. On top of that, Meta is showing zero interest in responding to our requests.
I wouldn't expect that API to be released in the near future. Although it is a step forward for computer vision applications, it doesn't address our needs.
That's all for today. Thank you everyone for the support, and please spread the word as much as you can so we can finally get this feature.
Have a nice day.
Ginés.
04-11-2024 12:10 PM
I saw some people posting here about using the camera feed to help users with vision accessibility; here is an interesting device called the Lumen Glasses that is working on this - https://www.dotlumen.com/glasses
I know the Lumen Glasses are dedicated to accessibility use and probably have a very good LiDAR or ToF sensor, but hopefully this will be another example that shows the importance of access to raw sensor data and nudges Meta to give developers access to the camera feed.
04-11-2024 12:36 PM
Thanks a lot for the support! Blind users are going to benefit the most from this feature. I hope all these comments are taken into consideration, but unfortunately, at the moment there seems to be no response from Meta.
04-11-2024 12:45 PM
Thanks for mentioning the .lumen glasses. I similarly need to process the raw passthrough camera feed for blind accessibility, in order to synthesize a sonic overlay in the form of "soundscapes" as a general sensory substitution technology for totally blind people: https://www.seeingwithsound.com/android-glasses.htm Several commercially available smart glasses provide pixel-level camera access to third-party apps like mine, but the Meta Quest 2 and 3 headsets do not, even though that would be great for, for instance, training at home. I can install The vOICe APK file on my Quest 2 and it runs, but it just cannot connect to the passthrough camera view and therefore gives up after 30 seconds.
04-11-2024 03:17 PM
I'm trying to research a way to access camera image data, and I found some useful info that might open the way to unofficial access to camera data. https://www.reddit.com/r/OculusQuest/comments/1c1s2ae/part_2_of_trying_to_find_an_unofficial_way_for...
04-16-2024 05:28 AM
Here is another interesting use case for camera access: AR surgeries. In this case the surgeon used the Quest 3, but he also mentioned that his system could be used on other AR headsets, like the Apple Vision Pro - https://www.levita.com/blog/levita-magnetics-leads-the-future-of-surgery-with-augmented-reality-ar-h...
Unfortunately, the press release above does not go into any detail on what exactly the AR system entails (i.e. are they tracking something inside the patient, or is it, most probably, just a heads-up display with real-time vital signs and body scan information to orient the surgeon). Though this example most likely isn't true AR (which requires mixing real and virtual objects, with lighting, shadows, and occlusion; interactivity, physics, and object manipulation; and registration/anchoring to a 3D position so you can walk around the virtual object), it does give a small glimpse of how surgeries could be improved.