Forum Discussion
MartinFnitraM
2 years ago · Protege
Quest pro eye tracking vergence
Hi all. I finally got Eye tracking to work on my unity project (after updating my developer account on the phone app).
Now I'm rendering the gaze rays from my eyes and testing eye independence and vergence. In all of my tests the vergence is nearly constant, regardless of whether I'm focusing on an object one inch or many feet away.
However, the gaze rays should be nearly parallel when focusing on very far away objects, and at a large angle to each other (approaching 90 degrees or more) when focusing on very close objects. That doesn't happen at all. The two rays are correct in that they both point in the direction my eyes were looking, but it seems Meta is not using each eye's data independently to determine each eye object's rotation, even though that's what the gaze script implies it is doing.
My hypothesis is that the algorithm uses something like a cyclops-eye gaze calculation and then points both eyes towards a pre-established focal point on the cyclops ray. If that's the case, it's a problem for me: I'm using eye tracking precisely to discriminate between the user focusing on nearby or far away objects that may be aligned along the same cyclops ray. I need the vergence angle for that, but it seems to be constant.
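For reference, the vergence angle the original post expects can be worked out from simple geometry: for a target straight ahead at distance d, each eye rotates inward by atan((IPD/2)/d). A minimal sketch (the 63 mm IPD is an assumed typical value, not something from the headset):

```python
import math

def vergence_angle_deg(ipd_m: float, distance_m: float) -> float:
    """Expected vergence angle (degrees) between the two gaze rays
    for a fixation target straight ahead at distance_m."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

IPD = 0.063  # assumed typical adult interpupillary distance, in metres

near = vergence_angle_deg(IPD, 0.0254)  # target one inch away
far = vergence_angle_deg(IPD, 3.0)      # target roughly ten feet away
print(f"near: {near:.1f} deg, far: {far:.1f} deg")
```

So a correctly tracked near fixation should produce a vergence angle around 100 degrees, while a distant one should be close to 1 degree; a constant reading across that range is a strong sign the per-eye data is not independent.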
7 Replies
- chiaying770691 · Explorer
Can anyone from Meta support answer this? I'm interested in this subject too.
- kurauchi · Honored Guest
I'm also interested in this. My experience has been similar to what MartinFnitraM described.
- EmTechTraveler · Honored Guest
I too am very interested in this point. I would really love an answer here, as I am working on a project that relies on this very feature.
- chiaying770691 · Explorer
I'm interested in this subject too. Can anyone from Meta answer MartinFnitraM's query?
- ha5dzs · Explorer
Not sure if anyone is still reading this in 2025, but my Quest Pro doesn't track vergence at all. So I ended up slerping the two eye-orientation quaternions, which reduced the tracking noise a bit.
- SoheilAppearHonored Guest
- ha5dzs · Explorer
I am using head space, because I am building stimuli that are static relative to the retina. Anything that comes through the API is abstracted, and they don't give you low-level data, so you can't process it further. See the white paper here.
This is actually a good thing from a privacy standpoint.
Re coordinate-space conversion: I butchered their script so it would export a gaze vector in spherical coordinates with respect to the head. That said, I don't use it for interaction; I am simulating different visual deficits.
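The head-space spherical conversion mentioned above can be sketched as follows, assuming Unity-style axes (x right, y up, z forward); this is an illustration of the maths, not Meta's or ha5dzs's actual script:

```python
import math

def gaze_to_spherical(x: float, y: float, z: float):
    """Convert a head-space gaze direction (x right, y up, z forward)
    to (azimuth, elevation) in degrees."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.degrees(math.atan2(x, z))    # positive = looking right
    elevation = math.degrees(math.asin(y / r))  # positive = looking up
    return azimuth, elevation

# Looking 45 degrees to the right and level:
print(gaze_to_spherical(1.0, 0.0, 1.0))
```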