Forum Discussion
ftarnogol
12 years ago · Expert Protege
No way to add focus information to individual pixels
In one of the latest blog posts:
http://www.oculusvr.com/blog/vr-sickness-the-rift-and-how-game-developers-can-help/?utm_source=Oculus+VR&utm_campaign=e532f989bc-KS_Update_Steve_Tom5_21_2013&utm_medium=email&utm_term=0_83417788de-e532f989bc-72702989
Tom Forsythe said that there’s currently no practical way to add focus information to individual pixels.
And I thought... (I'm very illiterate on this)... is there a way to do or simulate what the Lytro camera does to overcome this issue?
https://www.lytro.com/camera
28 Replies
- kojack
The focus problem is that our brains can tell how far away something is partly by how we focus on it. The Lytro can refocus after an image is taken, but your eyes still focus at one fixed distance to see the Lytro's screen. No matter what you do, you can't make an out-of-focus part of a Lytro image become focused just using your eyes (squinting, putting on glasses, etc.); you need the software to select a new focal distance.
For focus to work realistically on the Rift, you'd need to be able to change the physical optics of each pixel.
(Hmm, maybe having motors to adjust the whole lens focus, then eye tracking to get the point on the screen you are looking at and adjust the lenses to match the depth of that point. That could give an interesting approximation)
- jherico · Adventurer
The Lytro captures a light field rather than a flat pixel map. This lets you adjust the depth of field and focus after you've stored the image. Nothing would stop you from taking that light field and rendering it to the Rift, but just as the Lytro's screen only shows one particular state of the light field, so too the Rift will only render one focus setting at a time. On the other hand, the Rift could enable you to do some interesting things, like adjusting the focus settings by moving your head.
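The post-capture refocusing jherico describes is usually done by "shift and add": each sub-aperture view is translated in proportion to its offset from the aperture centre, then all views are averaged. A minimal numpy sketch, assuming a hypothetical (U, V, H, W) grid of grayscale sub-aperture views (this layout and the `alpha` parameterisation are illustrative, not Lytro's actual file format):

```python
import numpy as np

def refocus(light_field, alpha):
    """Synthetic-aperture refocus of a 4D light field.

    light_field: array of shape (U, V, H, W) -- a grid of sub-aperture
    views, like the ones a microlens-array camera records.
    alpha: relative focal depth; 1.0 keeps the original focal plane,
    other values pull focus nearer or farther.
    """
    U, V, H, W = light_field.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the
            # aperture centre, then average all views ("shift and add").
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            shifted = np.roll(light_field[u, v],
                              (int(round(du)), int(round(dv))),
                              axis=(0, 1))
            out += shifted
    return out / (U * V)
```

With `alpha = 1.0` every shift is zero and the output is just the mean of the views, i.e. the originally focused plane; as the thread notes, each call still yields only one focus setting at a time.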
- bryankerr · Explorer
I was wondering...
If you look at the examples on Lytro.com, the software is able to give different viewpoints of a captured image. You can do this by clicking and dragging in the sample images. Would it be possible to render two images, one at each extreme horizontal angle, to simulate a stereoscopic image? I don't think the separation would be as wide as the average IPD, but it might be enough to give some depth perception. Some of the newer lightfield cameras also do video, so if this worked it might be a good way to capture 3D video.
- tomf · Explorer
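On bryankerr's idea above: pulling the two horizontally-extreme viewpoints out of a sub-aperture grid is straightforward; the catch is exactly the one he raises, since the stereo baseline is limited to the camera's main-lens aperture (millimetres), far short of a typical ~63 mm IPD. A sketch under the same hypothetical (U, V, H, W) layout as before:

```python
import numpy as np

def stereo_pair(light_field):
    """Left/right images from the horizontally-extreme columns of a
    (U, V, H, W) sub-aperture grid, where V is the horizontal view axis.

    The baseline equals the main-lens aperture width, so the parallax
    is real but much weaker than normal human stereo vision.
    """
    left = light_field[:, 0].mean(axis=0)    # average the U vertical views
    right = light_field[:, -1].mean(axis=0)  # per eye to reduce noise
    return left, right
```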
"kojack" wrote:
(Hmm, maybe having motors to adjust the whole lens focus, then eye tracking to get the point on the screen you are looking at and adjust the lenses to match the depth of that point. That could give an interesting approximation)
There is a research prototype (not ours, someone else's) that does exactly this. But it relies on very fast eye-tracking and I think it's not very portable right now. Still interesting.
- zalo · Explorer
The Lytro uses a microlens array. Perhaps a microlens array in front of each pixel could work if you could use the piezoelectric effect to modulate the focus for each pixel. It wouldn't work on a large lens, but a pixel-sized lens might be deformed enough for it to work.
- Lupin3rd · Protege
"tomf" wrote:
"kojack" wrote:
(Hmm, maybe having motors to adjust the whole lens focus, then eye tracking to get the point on the screen you are looking at and adjust the lenses to match the depth of that point. That could give an interesting approximation)
There is a research prototype (not ours, someone else's) that does exactly this. But it relies on very fast eye-tracking and I think it's not very portable right now. Still interesting.
Isn't that way over the top? I thought that as long as you know where the user is looking (using eye tracking), you could render the depth of field in software, i.e. blur the background or the foreground.
- antigravity · Explorer
"tomf" wrote:
"kojack" wrote:
(Hmm, maybe having motors to adjust the whole lens focus, then eye tracking to get the point on the screen you are looking at and adjust the lenses to match the depth of that point. That could give an interesting approximation)
There is a research prototype (not ours, someone else's) that does exactly this. But it relies on very fast eye-tracking and I think it's not very portable right now. Still interesting.
Focusing has always been the #1 issue with 3D/VR for me. (You're focused clearly on something right under your nose, but the background in the distance remains a perfectly focused split image.)
I actually really dug how the issue was somewhat solved for the first time with these low-res dev kits, because everything always stays somewhat out of focus. Your brain sort of lets you disregard the background as normal blur while focused close, and even enhances the effect a bit for you. I even went as far as to think the HD kits might be a step backwards as an experience once that irregular hyper-focus returns.
The super fast eye-tracking with realtime DOF changes has got to be the ultimate solution, but if someone did a demo where realtime DOF was intelligently based on where your (0,0) HUD target intersected with your visuals? I'd bet you'd nail 80% of the effect and remain somewhat practical.
"Hadtstec" wrote:
"tomf" wrote:
"kojack" wrote:
(Hmm, maybe having motors to adjust the whole lens focus, then eye tracking to get the point on the screen you are looking at and adjust the lenses to match the depth of that point. That could give an interesting approximation)
There is a research prototype (not ours, someone else's) that does exactly this. But it relies on very fast eye-tracking and I think it's not very portable right now. Still interesting.
Isn't that way over the top? I thought, as long as you know where the User is looking at (using Eye tracking) then you render the depth of view in software, (i.e. blur the background or the foreground)
Doing a software blur based on eye position isn't enough. Your eyes will be physically focusing at the same distance wherever you look (because the LCD screen isn't moving closer or farther). Your brain will sense one distance based on eye vergence and a different distance based on eye focus.
- Anonymous
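To put rough numbers on that vergence/accommodation mismatch: the vergence angle follows directly from the IPD and the fixation distance, while accommodation stays pinned to the headset's single optical distance. A small sketch (the 63 mm IPD is a typical value, not a spec):

```python
import math

def vergence_angle_deg(distance_m, ipd_m=0.063):
    """Angle between the two eyes' lines of sight when both fixate a
    point straight ahead at the given distance."""
    return math.degrees(2.0 * math.atan((ipd_m / 2.0) / distance_m))

# An object rendered 0.5 m away demands about 7.2 degrees of vergence,
# one at 2 m only about 1.8 degrees -- yet the eyes' focus never changes.
near = vergence_angle_deg(0.5)
far = vergence_angle_deg(2.0)
```

The brain reads the large change in vergence as a change in distance, but gets no matching change in focus, which is the conflict kojack describes.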
"tomf" wrote:
"kojack" wrote:
(Hmm, maybe having motors to adjust the whole lens focus, then eye tracking to get the point on the screen you are looking at and adjust the lenses to match the depth of that point. That could give an interesting approximation)
There is a research prototype (not ours, someone else's) that does exactly this. But it relies on very fast eye-tracking and I think it's not very portable right now. Still interesting.
If this were even possible it would be too expensive for sure... but yeah, it would make VR so much more perfect.
- tomf · Explorer
"antigravity" wrote:
The super fast eye-tracking with realtime DOF changes has got to be the ultimate solution, but if someone did a demo where realtime DOF was intelligently based on where your (0,0) HUD target intersected with your visuals? I'd bet you'd nail 80% of the effect and remain somewhat practical.
People have tried this - it's pretty horrible. They've also tried setting the focus to where your mouse/reticule/aim is - that's bad as well. The problem is that your eyes move before either of these sorts of motions, and usually only by a little bit. So what happens is your eyes move, they see blur, try to focus (but of course they can't - it's not that sort of blur), then your aim/view shifts half a second later, and the blur goes away, and now your eyes refocus. It's quite straining on the eyes.
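For concreteness, the reticle-driven DOF that tomf describes people trying usually amounts to reading one depth under the aim point and building a circle-of-confusion-style blur map from it. A minimal sketch (the function name, falloff constant, and radius cap are made up for illustration):

```python
import numpy as np

def dof_blur_radius(depth, focus_px, max_radius_px=8.0, sharp_range=0.5):
    """Per-pixel blur radius for a software depth-of-field pass.

    depth: (H, W) eye-space distances from the depth buffer.
    focus_px: (row, col) of the reticle or gaze point; the depth
    there becomes the focal distance.
    The falloff is proportional to |1/z - 1/z_focus|, a crude
    stand-in for a real circle-of-confusion model.
    """
    z_focus = depth[focus_px]
    coc = np.abs(1.0 / depth - 1.0 / z_focus)
    return np.clip(coc / sharp_range, 0.0, 1.0) * max_radius_px
```

Even a correct pass like this runs into the failure mode tomf describes: the eyes saccade before the reticle moves, land on blur that refocusing can't resolve, and only then does the blur map catch up.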