I wanted to present an idea. It's not the greatest idea but maybe it would work.
Basically, as a human I don't have very good peripheral vision. I can make out shapes and movement, but all in all I can't read text or see fine details that aren't right in the center of my eye's focus.
So my theory is that you may be able to improve performance with a device that tracks a user's eyes in VR, including the focus of the user's attention in three-dimensional space. A programmer could use that tracking to render the center of focus in high detail and render the rest of the scene with lower-polygon, less detailed graphics.
Do you think this would work? Would the overhead of calculating eye tracking cause a downgrade in performance? Would users notice a lack of polygons on the outer edge of their field of view?
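To make the idea concrete, here's a minimal, self-contained sketch of gaze-driven level-of-detail selection. The gaze source, angle thresholds, and helper names are all hypothetical assumptions for illustration, not any shipping SDK's API; a real renderer would feed its actual eye-tracker output into something like this during its LOD/culling pass.

```cpp
// Hedged sketch of gaze-driven LOD selection, assuming the eye tracker
// reports a normalized gaze direction in head space. All names and
// thresholds here are illustrative, not from a real SDK.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Angle in degrees between the gaze ray and the direction to an object.
static float GazeAngleDeg(const Vec3& gazeDir, const Vec3& toObjectDir) {
    float c = Dot(gazeDir, toObjectDir);      // both assumed normalized
    if (c > 1.0f) c = 1.0f;
    if (c < -1.0f) c = -1.0f;
    return std::acos(c) * 180.0f / 3.14159265f;
}

// Pick a detail level from eccentricity: full detail in the fovea
// (~5 degrees), medium in the near periphery, coarse beyond that.
static int SelectLod(float eccentricityDeg) {
    if (eccentricityDeg < 5.0f)  return 0;   // highest detail
    if (eccentricityDeg < 20.0f) return 1;   // medium detail
    return 2;                                // coarse, low-poly
}

int main() {
    Vec3 gaze     = {0.0f, 0.0f, -1.0f};     // looking straight ahead
    Vec3 toObject = {0.26f, 0.0f, -0.97f};   // roughly 14 degrees off-axis
    float ecc = GazeAngleDeg(gaze, toObject);
    std::printf("eccentricity %.1f deg -> LOD %d\n", ecc, SelectLod(ecc));
    return 0;
}
```

The point of the sketch is that the LOD decision is just one angle comparison per object, so the per-frame cost of consuming the gaze data should be tiny compared to the geometry and fill-rate work it can save.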
The comments on the FOVE Kickstarter page have been so numerous that some of the early responses may not be seen, so I'd like to point out that they have stated they will be supporting OpenVR and seem pretty bullish on also supporting Lighthouse tracking!
I backed them for one HMD because I'd love to see some foveated rendering magic, and playing with eye-focus cues should be fun. They did not promise 120Hz, but it seems very likely that they will hit that goal.
Imagine room-scale VR with Vive controllers, eye-tracking, 2560x1440 resolution, with foveated rendering and simulated variable depth focus!
Disclaimer: I will be getting a Vive and a CV1 too. VR is going to separate me from so much of my money :cry: 😄
Eye tracking is the best feature you could possibly have in VR. It makes things much easier, like selecting items on a menu or hitting play, fast-forward, and rewind in Oculus Cinema. It lets you do things faster than a human being otherwise could. In fact, it is so quick that using it as a pointer for aiming in an FPS would be considered cheating and should not be recommended.
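As a toy illustration of why gaze selection is so quick, here's a sketch of hit-testing menu items against the gaze angle, with a dwell timer to confirm the "click". The menu layout (one horizontal angle per item), the 3-degree cone, and the 0.8-second dwell are made-up values for the example, not from Oculus Cinema or any real SDK.

```cpp
// Hedged sketch of gaze-based menu selection with a dwell timer, assuming
// the headset reports a gaze angle each frame. Names and thresholds are
// illustrative only.
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

struct MenuItem {
    std::string label;
    float yawDeg;      // horizontal angle of the item from screen center
};

// Return the index of the item the gaze is resting on, or -1 if none is
// within the selection cone.
static int HitTest(float gazeYawDeg, const std::vector<MenuItem>& items,
                   float coneDeg = 3.0f) {
    for (std::size_t i = 0; i < items.size(); ++i) {
        float d = gazeYawDeg - items[i].yawDeg;
        if (d < 0) d = -d;
        if (d < coneDeg) return static_cast<int>(i);
    }
    return -1;
}

int main() {
    std::vector<MenuItem> menu = {
        {"Play", -10.0f}, {"Rewind", 0.0f}, {"Fast-forward", 10.0f}};

    const float dwellSec = 0.8f;        // gaze must rest this long to "click"
    const float frameSec = 1.0f / 90.0f;

    // Simulated gaze samples resting on the "Play" item.
    std::vector<float> gazeYaw(120, -10.2f);

    int   lastHit = -1;
    float dwell   = 0.0f;
    for (float yaw : gazeYaw) {
        int hit = HitTest(yaw, menu);
        dwell = (hit == lastHit && hit != -1) ? dwell + frameSec : 0.0f;
        lastHit = hit;
        if (dwell >= dwellSec) {
            std::printf("Selected: %s\n", menu[hit].label.c_str());
            break;
        }
    }
    return 0;
}
```

The dwell timer is the usual guard against the "Midas touch" problem, where everything you merely glance at would otherwise get selected.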