Eye tracking for better performance

rolyataylor2
Explorer
I wanted to present an idea. It's not the greatest idea but maybe it would work.

Basically, as a human I don't have very good peripheral vision. I can make out shapes and movement, but all in all I can't read text or see fine details that aren't right in the center of my eye's focus.

So my theory is that you may be able to improve performance with a device that tracks a user's eyes in VR, including the focus of the user's attention in three-dimensional space. A programmer could use that tracking to render the center of focus in high detail and render the rest of the scene with lower-poly, less detailed graphics.

Do you think this would work? Would the overhead of calculating eye tracking cause a downgrade in performance? Would users notice a lack of polygons on the outer edge of their field of view?
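The idea in this post can be sketched in a few lines. This is a minimal illustration, not any real engine's API: the region names, the field-of-view figure, and the eccentricity thresholds are all assumptions chosen just to show the shape of the logic.

```python
import math

def detail_level(px, py, gaze_x, gaze_y, fov_degrees=100.0):
    """Pick a render-detail tier for a screen position based on its
    angular distance (eccentricity) from the user's point of gaze.
    Coordinates are normalized 0..1; thresholds are illustrative."""
    # Approximate angular offset from the gaze point.
    ecc = math.hypot(px - gaze_x, py - gaze_y) * fov_degrees
    if ecc < 5.0:      # foveal region: full resolution
        return "high"
    elif ecc < 20.0:   # parafoveal ring: reduced shading rate
        return "medium"
    else:              # periphery: coarse geometry and shading
        return "low"
```

A renderer would evaluate something like this per tile or per region (not per pixel on the CPU) and pick resolution or shading rate accordingly.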
22 REPLIES

bp2008
Protege
This is a well-known concept. If you search for "foveated rendering" you can read more about it. It is extremely challenging to implement, but in theory it could deliver incredible performance gains, which will become very important as display resolution improves in head-mounted displays.

Due to the difficulty of implementing such a system, I would not expect to see it in a consumer VR product for many years yet.

VizionVR
Rising Star
Fove has made great advances in this technology.
http://www.getfove.com/

There is a YouTube video of a young boy playing the piano with a Fove eye tracking HMD.

Not a Rift fanboi. Not a Vive fanboi. I'm a VR fanboi. Get it straight.

rolyataylor2
Explorer
Oh, I see. Now I know the term. Reading a Microsoft paper on it now. (http://research.microsoft.com/pubs/1766 ... inal15.pdf)

I guess the latency is what kills it. In that paper the latency has to be < 10 ms, and even with that there is still noticeable blurring.

With the rate that things are developing I wonder if this will be a thing in a year or two.

rolyataylor2
Explorer
What I don't understand is which part of the equation is the hard part: the 3D eye tracking or the rendering?

3D eye tracking sounds like it would be super easy with some pre-use calibration: focus here ---> focus here ---> blah. You're just measuring the position and distance between the eyes and using basic geometry to place a virtual object in the world? Or am I missing something there?
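The "basic geometry" part really is fairly simple in principle: each eye gives a ray, and the point of focus is roughly where the two rays come closest. A rough sketch, assuming a calibrated tracker already supplies eye positions and gaze directions (all names here are illustrative):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def convergence_point(p1, d1, p2, d2):
    """Midpoint of the shortest segment between two gaze rays
    p1 + t*d1 and p2 + s*d2 (the eyes' lines of sight)."""
    w0 = tuple(x - y for x, y in zip(p1, p2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # rays are parallel: gaze at infinity
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = tuple(p + t * v for p, v in zip(p1, d1))
    q2 = tuple(p + s * v for p, v in zip(p2, d2))
    return tuple((x + y) / 2 for x, y in zip(q1, q2))
```

The hard part in practice is not this math but getting stable, accurate gaze directions out of a camera watching a moving, blinking eyeball at very low latency.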

I guess the challenge is the speed of capture for the eye tracking hardware...

falco
Honored Guest
Now that I've seen that this is possible, and that Fove is going to release a VR headset with eye tracking, I must ask: why doesn't Oculus have eye tracking already? I thought it was because it was either too expensive or too difficult to accomplish. It is very disappointing 😞 I hope they tell us they do have eye tracking at their upcoming event in June, or show off some secret weapon, because today the word is "disappointing". :cry:

rolyataylor2
Explorer
I think if they at least leave space inside the headset to add eye tracking as an add-on, that would be great. An additional $99 would not be too much to ask if it meant more immersive gameplay and reduced minimum computer requirements. I am really iffy about them pulling a "first-gen iPhone" thing with the Oculus Rift and saving these features for future releases.

Sidenote/Off topic:
If anyone here has ever played T2 for the SNES, it was a rail shooter. I feel like VR is at that level relative to the future. 🙂

bp2008
Protege
I'm sure the eye tracking itself is hard enough. It has to track the eyes extremely accurately and reliably even when the user squints, blinks, winks, crosses eyes, flexes facial muscles, repositions the headset, or takes the headset off and puts it on someone else. And to be a viable consumer product it can't require recalibration all the time.

If you are doing foveated rendering (which is its own difficult engineering challenge), then you have the added trouble of making it happen with less than half the motion-to-photon latency of the 20 ms that I believe Oculus has been recommending.

Just consider: with a 90 Hz display, you are already looking at only 11.1 milliseconds between frames, and rendering the image so it is ready to draw consumes a large amount of that time.
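The frame budget above is simple arithmetic. The split between tracking and rendering below is a made-up illustration, not a measured figure:

```python
# Back-of-the-envelope frame budget for a 90 Hz display.
refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz          # about 11.1 ms per frame

# If eye-tracking capture + processing eats some of that budget,
# whatever remains is all the renderer gets.
eye_tracking_ms = 2.0                        # assumed tracker latency
render_budget_ms = frame_budget_ms - eye_tracking_ms
```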

rolyataylor2
Explorer
I would think that foveated rendering would be done the same way current games do LOD, where distant objects are less detailed than close ones, except instead of a kilometer radius it would be a few meters. The focal sphere would be determined by triangulation from the eye tracking, with some extra padding to be safe.

I guess rendering an object with half of it at low poly and half at high poly would be difficult. Even so, I would think you could render an object within the focus sphere at full poly and still cut out a lot of peripheral poly count.
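The "focal sphere" version of LOD described above can be sketched by swapping camera distance for distance from the gaze convergence point. The radii, padding, and tier names here are illustrative assumptions:

```python
import math

def lod_for_object(obj_pos, focus_pos, inner_radius=1.0, padding=0.5):
    """Full-poly for objects inside the focal sphere (plus padding),
    low-poly for everything outside it. Positions are 3D points."""
    dist = math.dist(obj_pos, focus_pos)
    return "full_poly" if dist <= inner_radius + padding else "low_poly"
```

Rendering whole objects at one tier, as suggested above, avoids the half-low-poly/half-high-poly seam problem, at the cost of some wasted detail near the sphere's edge.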

Even very rough (horizontal-only) eye tracking, which really just tells you the direction you are looking, could let you cut detail from half the screen.

--------------------------------------------------------------------------
|                                                                        |
|                                                                        |
|                                                                        |
|                                                                        |
--------------------------------------------------------------------------
        looking kinda here *
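The diagram's horizontal-only idea reduces to a one-line comparison. A minimal sketch, assuming the tracker reports only a normalized 0..1 horizontal gaze coordinate (all names here are hypothetical):

```python
def half_screen_detail(pixel_x, gaze_x, width=1.0):
    """Full detail for the half of the screen containing the gaze,
    reduced detail for the other half. Coordinates in 0..width."""
    same_half = (pixel_x < width / 2) == (gaze_x < width / 2)
    return "full" if same_half else "reduced"
```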