Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
MartinFnitraM
2 years ago

Quest pro eye tracking vergence

Hi all. I finally got eye tracking to work in my Unity project (after updating my developer account in the phone app).

 

Now I'm rendering the gaze rays from my eyes and testing eye independence and vergence. In all of my testing, the vergence is nearly constant, regardless of whether I'm focusing on an object one inch away or many feet away.

However, the gaze rays should be nearly parallel when focusing on very far away objects, and converge sharply (approaching 90 degrees between them) when focusing on very close objects. That doesn't happen at all. The two rays are correct in that they both point in the direction my eyes were looking, but it seems Meta is not using each eye's data independently to determine each eye object's rotation, even though that's what the gaze script implies it is doing.
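For reference, the vergence angle implied by the two per-eye rays is just the angle between their direction vectors. A minimal sketch in Python (the vectors below are illustrative placeholders, not actual API output):

```python
import math

def vergence_angle_deg(left_dir, right_dir):
    """Angle in degrees between two gaze direction vectors."""
    dot = sum(l * r for l, r in zip(left_dir, right_dir))
    # Normalise in case the inputs are not exactly unit length.
    norm_l = math.sqrt(sum(c * c for c in left_dir))
    norm_r = math.sqrt(sum(c * c for c in right_dir))
    # Clamp to avoid domain errors from floating-point rounding.
    cos_a = max(-1.0, min(1.0, dot / (norm_l * norm_r)))
    return math.degrees(math.acos(cos_a))

# Parallel rays (far fixation) should give ~0 degrees:
print(vergence_angle_deg((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 0.0
```

If this value barely changes while refixating between near and far targets, the per-eye rotations are not truly independent.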

My hypothesis is that the algorithm uses something like a cyclops-eye gaze calculation and then points both eyes towards a pre-established focal point on the cyclops ray. If that's the case, it's a problem, because I'm using eye tracking precisely to discriminate between the user focusing on nearby or far-away objects that may be aligned with the cyclops eye reference. I need the vergence angle for that, but it seems to be constant.
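For a sense of the expected scale: assuming a typical interpupillary distance of about 63 mm (an assumed value, not measured) and symmetric fixation on the midline, simple triangulation predicts how much the vergence angle should vary with fixation distance, which is exactly the variation that seems to be missing:

```python
import math

IPD = 0.063  # assumed interpupillary distance in metres

def expected_vergence_deg(distance_m):
    # Symmetric fixation on the midline: each eye rotates inward by
    # atan((IPD/2) / d), so the total vergence angle is twice that.
    return math.degrees(2.0 * math.atan((IPD / 2.0) / distance_m))

# From roughly one inch out to ten metres:
for d in (0.0254, 0.5, 2.0, 10.0):
    print(f"{d:6.4f} m -> {expected_vergence_deg(d):6.2f} deg")
```

At one inch the predicted angle is on the order of 100 degrees; at ten metres it is well under half a degree, so a truly constant reported vergence is easy to detect.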

7 Replies

  • Can anyone from Meta support answer this? I'm interested in this subject too.

  • I too am very interested in this point. I would really love an answer here, as I am working on a project that relies on this very feature.

  • Not sure if anyone is reading this in 2025, but my Quest Pro doesn't track vergence at all. So I ended up slerp-ing the two eye orientation quaternions, which reduced the tracking noise a bit.
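For anyone trying the same workaround, here is a minimal pure-Python slerp between (w, x, y, z) unit quaternions, a sketch of the averaging trick described above (in Unity you would simply call Quaternion.Slerp with t = 0.5):

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:  # negate one input to take the shorter arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:  # nearly identical: fall back to lerp + renormalise
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Halfway between the two per-eye orientations:
# combined = slerp(left_eye_quat, right_eye_quat, 0.5)
```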

  • Are you using the world space or tracking space for your OVR Eye Gaze?

     

    Based on this, apparently we can get access to the raw data if we use tracking space. I'm still trying to understand it, though.

    • ha5dzs

      I am using head space, because I am building stimuli that are static relative to the retina. Anything that comes through the API is abstracted, and they don't give you low-level data, so you can't process it further. See the white paper here.

      This is actually a good thing from a privacy standpoint.

      Re coordinate space conversion: I butchered their script so that it exports a gaze vector in spherical coordinates with respect to the head. That said, I don't use it for interaction; I am simulating different visual deficits.
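A sketch of that kind of conversion, assuming Unity's convention of x right, y up, z forward (head-relative), mapping a gaze direction to azimuth and elevation in degrees. The function name is illustrative, not from the SDK:

```python
import math

def gaze_to_spherical(direction):
    """Convert a head-relative gaze direction (x right, y up, z forward)
    into (azimuth, elevation) in degrees."""
    x, y, z = direction
    azimuth = math.degrees(math.atan2(x, z))  # left/right of straight ahead
    elevation = math.degrees(math.atan2(y, math.hypot(x, z)))  # up/down
    return azimuth, elevation

# Straight ahead maps to (0, 0):
print(gaze_to_spherical((0.0, 0.0, 1.0)))  # (0.0, 0.0)
```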