Eye tracking for better performance

rolyataylor2
Explorer
I wanted to present an idea. It's not the greatest idea but maybe it would work.

Basically, as a human I don't have very good peripheral vision. I can make out shapes and movement, but all in all I can't read text or see fine details that aren't right at the center of my eye's focus.

So my theory is that you may be able to improve performance with a device that tracks a user's eyes in VR, including the focus of the user's attention in three-dimensional space. A programmer could use that tracking to render the center of focus in high detail and render the rest of the scene at a lower, less polygon-heavy level.

Do you think this would work? Would the overhead of calculating eye tracking cause a downgrade in performance? Would users notice a lack of polygons on the outer edge of their field of view?
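
Here's a rough sketch of what I mean, in Python. Everything here is made up for illustration (the gaze vector would come from whatever tracker API exists); the point is just that each object's detail level is picked from its angle off the gaze direction:

[code]
import math

def angle_to_gaze(gaze_dir, obj_dir):
    """Angle in degrees between the gaze ray and the direction to an object.
    Both arguments are (x, y, z) vectors in view space; gaze_dir is assumed
    to come from an eye tracker (hypothetical input)."""
    def norm(v):
        length = math.sqrt(sum(c * c for c in v))
        return [c / length for c in v]
    g, o = norm(gaze_dir), norm(obj_dir)
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(g, o))))
    return math.degrees(math.acos(dot))

def pick_lod(gaze_dir, obj_dir):
    """Full poly count near the focus, progressively fewer polygons outside it.
    The thresholds are guesses, padded so tracker error still lands in 'full'."""
    angle = angle_to_gaze(gaze_dir, obj_dir)
    if angle < 10.0:
        return "full"
    elif angle < 30.0:
        return "medium"
    return "low"

# An object about 20 degrees off the gaze direction gets the medium mesh:
print(pick_lod((0.0, 0.0, -1.0), (0.37, 0.0, -1.0)))  # -> "medium"
[/code]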
22 REPLIES

fastdancer
Honored Guest
Everybody's saying there's no chance of a wireless CV1 because of the bandwidth needed. Now, could the method discussed in this thread (if pushed hard, but not so far that the peripheral areas become ridiculously blurry) reduce the needed bandwidth enough for... wireless?

On a similar note, how about treating the smaller "focus" area and the larger "peripheral" area a bit differently, e.g. giving the focus area's graphics data priority over the peripheral data? Maybe the experience/nausea depends primarily on latency within the focus area? I guess you could see it as having a higher fps for some parts of the screen. Now I realise that this may take years to perfect, and will require a whole different approach when it comes to rendering/displaying, but in principle, does it make any sense?
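
To put rough numbers on the bandwidth idea (every figure here is an assumption for illustration: a 2160x1200 panel at 90 Hz, 24-bit colour, uncompressed, a full-res inset covering a fifth of each axis, and one-fifth resolution per axis everywhere else):

[code]
# Back-of-the-envelope: uncompressed full frames vs. a crude foveated split.
width, height, hz, bits = 2160, 1200, 90, 24

full = width * height * hz * bits                 # whole panel at full resolution
inset = (width // 5) * (height // 5) * hz * bits  # full-res focus region, 1/5 of each axis
periphery = full // 25                            # everything at 1/5 resolution per axis
foveated = inset + periphery                      # (periphery crudely includes the inset too)

print(f"full:     {full / 1e9:.2f} Gbit/s")       # ~5.60 Gbit/s
print(f"foveated: {foveated / 1e9:.2f} Gbit/s")   # ~0.45 Gbit/s, roughly a 12x saving
[/code]

Even this crude split is an order of magnitude, which is why it seems relevant to the wireless question.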

reptilexcq
Honored Guest
So FOVE has eye tracking and VIVE has Lighthouse. Well, if Oculus has both features, it will obliterate both devices, won't it? C'mon, Oculus... let's bring on the big toys.

rolyataylor2
Explorer
"fastdancer" wrote:
I guess you could see it as having a higher fps for some parts of the screen


The problem may be that the cells in a person's eyes that are used in peripheral vision can capture at a higher rate than the cones in the center of the eye, so it may be a dead end.

"reptilexcq" wrote:
So FOVE has eye tracking and VIVE has Lighthouse. Well, if Oculus has both features, it will obliterate both devices, won't it? C'mon, Oculus... let's bring on the big toys.


I agree, and I'm sure Oculus is thinking about all aspects, including this technology. Another benefit of eye tracking is mostly novelty, but in an online environment eye movement is important for interaction and immersion.

g4c
Explorer
Eye tracking is fairly trivial, and as mentioned, it would be great for avatar eye movement in social interaction environments. A DIY hacker could cobble it together fairly easily.

Foveated rendering with eye tracking is hard because the eye is able to move so fast. I would think you need the "motion to photon" time down to 1ms or less (just guessing here).

Also, predictive filters will hardly help, because when you look at eye-tracking trajectories, they contain very high-frequency components.

Foveated rendering will give tremendous bandwidth savings, though; very few realise just how low-res our peripheral vision is.
Android VR Developer. https://twitter.com/SiliconDroid

MrMonkeybat
Explorer
The Microsoft Research foveated rendering demonstration used a 120 Hz eye tracker and a 120 Hz screen; at those refresh rates no one could tell foveated rendering from full resolution, but with their earlier attempt on 60 Hz screens and tracking they could. Such high-speed eye trackers are still big and expensive; last I heard, FOVE was still using a 30 Hz eye tracker.

They said they had the 1080p monitor at 60 degrees; assuming that's diagonal, that would be about 30 degrees vertical. So the one-fifth peripheral resolution, when expanded to 100 degrees vertical, would be 648x720, which is still a saving over the optimal 1512x1680 render target for a 1080x1200 screen.
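
Sanity-checking that in Python (assuming both figures are per eye):

[code]
# Pixels per eye: the low-res periphery pass vs. the usual full render target.
periphery = 648 * 720   # one-fifth resolution stretched to the full FOV
full = 1512 * 1680      # 1.4x the 1080x1200 panel (the recommended render target)
print(periphery, full, f"{periphery / full:.1%}")  # 466560 2540160 18.4%
# A small full-resolution inset around the gaze point comes on top of that,
# but the total still ends up well below the full render target.
[/code]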

Concentric multiview rendering could also be useful without eye tracking, due to the tendency of rectilinear sampling to oversample the corners while barrel distortion does the opposite. Hence the ideal render target is about 1.4 times the screen resolution at 110 degrees.

The problems with wide FOVs:
http://strlen.com/gfxengine/fisheyequake/compare.html
Multiview rendering extension:
https://www.khronos.org/registry/gles/e ... tiview.txt

jetpic
Honored Guest
According to the FOVE Kickstarter page, they have 120 Hz eye tracking.

MrMonkeybat
Explorer
"jetpic" wrote:
According to the FOVE Kickstarter page, they have 120 Hz eye tracking.

Checks website. Ah, OK, but they still need to increase the screen's refresh rate from 60 Hz to 120 Hz. They say "projected 90 Hz", which I presume is what they want to achieve, but I say aim higher.

They should integrate with SteamVR and get that Lighthouse tracking too.

rolyataylor2
Explorer
"g4c" wrote:
Foveated rendering with eye tracking is hard because the eye is able to move so fast. I would think you need the "motion to photon" time down to 1ms or less (just guessing here).


Wikipedia: Controlled cortically by the frontal eye fields (FEF), or subcortically by the superior colliculus, [b]saccades[/b] serve as a mechanism for fixation, rapid eye movement, and the fast phase of optokinetic nystagmus.


Peak speed can reach 1000°/s. Saccades to an unexpected stimulus normally take about 200 milliseconds (ms) to initiate and then last from about 20–200 ms. So the refresh rate should be above 60 Hz, but not necessarily 120 Hz.
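
Rough numbers on what that peak speed means per frame (the padding conclusion is my own guess):

[code]
# Degrees the eye can sweep during one frame at peak saccade speed.
peak_deg_per_s = 1000.0
for hz in (60, 90, 120):
    frame_ms = 1000.0 / hz
    sweep = peak_deg_per_s * frame_ms / 1000.0
    print(f"{hz} Hz: up to {sweep:.1f} degrees of gaze movement per frame")
# 60 Hz -> ~16.7, 90 Hz -> ~11.1, 120 Hz -> ~8.3 degrees: the full-detail
# region needs at least that much padding around the last known gaze point,
# or a saccade can land outside it before the next frame is drawn.
[/code]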

I think the term "foveated rendering" should be set aside in favor of focusing on the first step, which would be to pin down the focus as an (x, y, z) position in 3D space. I think that is the most important step toward giving game developers a point of reference for which areas of the scene need to be rendered at full poly count and which areas can have a reduced poly count.

To me, the term "foveated rendering" implies that the rendering is based on an infinite angular cone projecting out from the person's position (field-of-view rendering), which I think is the wrong approach.

I think the key is to create a point of focus in 3D space (x, y, z) and render any object at a distance from that point at a low poly count, maybe with a bloom/blur filter applied. Using this approach, objects closer to and further from the observer, but still in the field of view, would also be rendered in low poly, which would increase performance even more.
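
Something like this (Python; the radii are pure guesses), using distance from a 3D focus point instead of an angular cone:

[code]
import math

def pick_lod_3d(focus_point, obj_pos, full_radius=0.5, medium_radius=2.0):
    """Choose a mesh LOD from straight-line distance (meters) to the point
    of focus in world space. The focus point is assumed to come from the
    tracker's converged gaze; the radii are illustrative only."""
    dist = math.dist(focus_point, obj_pos)
    if dist < full_radius:
        return "full"
    elif dist < medium_radius:
        return "medium"
    return "low"  # also the candidate set for the blur/bloom pass

# An object dead ahead but 3 m beyond the focus point still drops to low poly:
print(pick_lod_3d((0.0, 1.6, -2.0), (0.0, 1.6, -5.0)))  # -> "low"
[/code]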

MrMonkeybat
Explorer
I disagree; your eyes see in a cone. Depth-of-field effects in the human eye are actually not that great: hold up your hand against the horizon, and the blurring between the focal planes is less than the angular resolution of many HMDs. And your eyes change focus so quickly that artificial DOF would likely just get in the way, since convergence has to be figured out very accurately, while foveation just has to be fast; inaccuracy can be compensated for with a wider cone.

rolyataylor2
Explorer
"mrmonkeybat" wrote:
I disagree; your eyes see in a cone. Depth-of-field effects in the human eye are actually not that great: hold up your hand against the horizon, and the blurring between the focal planes is less than the angular resolution of many HMDs. And your eyes change focus so quickly that artificial DOF would likely just get in the way, since convergence has to be figured out very accurately, while foveation just has to be fast; inaccuracy can be compensated for with a wider cone.

That is true; refocusing nearer and farther would be very quick.

https://en.wikipedia.org/wiki/Accommodation_%28eye%29

I wonder if the focus of one eye is tied to the movement of both eyes (vergence) when it comes to focusing on an object, or if all the focusing nearer and farther is done within the eye (accommodation), as shown in that wiki.