Forum Discussion
uclatommy
Honored Guest · 12 years ago
Achieving natural depth of field
Hold a finger close to your face and look at it. We all know that you'll have double vision for distant objects behind your finger, but you should also notice that the background is blurred.
This effect does not occur in the Rift because everything is rendered in focus. Sure, you could add a depth-of-field effect to your render, but what happens when you actually look past your finger? The image would converge, but it would stay blurry, since the computer doesn't know what you're trying to focus on.
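To make that concrete: a software depth-of-field pass is roughly a depth-weighted blur, as in this minimal NumPy sketch (function and parameter names are illustrative, not from any real engine). Note that focal_depth has to be guessed up front, which is exactly the problem:

```python
# Minimal depth-of-field post-process sketch (illustrative, not production code).
import numpy as np
from scipy.ndimage import gaussian_filter

def apply_dof(color, depth, focal_depth=2.0, max_blur_sigma=6.0):
    """color: HxWx3 float image; depth: HxW eye-space distances in meters.
    focal_depth is a guess -- the renderer cannot know where the viewer's
    eyes are actually trying to accommodate."""
    # Circle of confusion grows with distance from the (guessed) focal plane.
    coc = np.clip(np.abs(depth - focal_depth) / focal_depth, 0.0, 1.0)
    # Cheap approximation: blend the sharp frame with one uniformly blurred copy.
    blurred = np.stack([gaussian_filter(color[..., c], max_blur_sigma)
                        for c in range(3)], axis=-1)
    return color * (1.0 - coc[..., None]) + blurred * coc[..., None]
```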
I think depth of field is an important visual cue for a sense of space, and the Rift currently lacks it.
But how do we achieve it? I'm no expert at optics, but I believe the perception of blur comes from light reflected off objects at different distances entering your eyes at different angles of incidence. That got me thinking about whether it would be possible to manipulate the angles of the light emitted from individual pixels.
The first thought that comes to mind is to use prisms to do this.
Thoughts? Am I just totally ignorant here? Technically impossible?
27 Replies
- geekmaster (Protege)
It has been discussed. The options are not affordable with current technology just yet, but they will be one day.
Electrically variable-focus lenses with high-speed eye tracking are one option. Holographic displays are another option. Other non-holographic light-field methods have also been discussed.
With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.
- uclatommy (Honored Guest)
"geekmaster" wrote:
With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.
I guess eye-tracking might be a good stop-gap solution, but to me it doesn't seem like the right way to do it. You can look at something directly in front of you, then look past it into the background without really moving your eyes much. Eye tracking wouldn't catch this. I think you need the different angles of light so that when the lens in your eye changes shape, different things come into focus on the sensory nerves in the back of your eye.
What about having little tiny compartments of fluid in front of each pixel and a curved clear surface on one end, then changing the amount of fluid in each compartment depending on depth of the pixel?
- geekmaster (Protege)
"uclatommy" wrote:
"geekmaster" wrote:
With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.
I guess eye-tracking might be a good stop-gap solution, but to me it doesn't seem like the right way to do it. You can look at something directly in front of you, then look past it into the background without really moving your eyes much. Eye tracking wouldn't catch this. I think you need the different angles of light so that when the lens in your eye changes shape, different things come into focus on the sensory nerves in the back of your eye.
What about having little tiny compartments of fluid in front of each pixel and a curved clear surface on one end, then changing the amount of fluid in each compartment depending on depth of the pixel?
Light field technology (simulating multi-axis "fly eye" lenses) is really the way to go. Holograms also provide light fields. The nice thing about light fields is that you can refocus your depth of field after-the-fact. Check out the Lytro cameras for an example. There is also a synthetic light field rendering project for the Rift posted at MTBS3D.
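As a rough illustration of that after-the-fact refocusing: a captured light field is a grid of slightly offset sub-aperture views, and synthetic refocus is just shift-and-sum over them. A toy NumPy sketch, assuming the light field is already decoded into a (U, V, H, W, 3) array of views (the textbook idea, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(light_field, shift_per_view):
    """light_field: (U, V, H, W, 3) grid of sub-aperture views.
    shift_per_view: pixels of shift per unit of aperture offset;
    sweeping this value moves the synthetic focal plane."""
    U, V, H, W, _ = light_field.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    out = np.zeros((H, W, 3))
    for u in range(U):
        for v in range(V):
            # Shift each view in proportion to its offset from the aperture
            # center, then average: objects at the chosen depth line up (sharp),
            # everything else lands in different places (blurred).
            du = int(round((u - cu) * shift_per_view))
            dv = int(round((v - cv) * shift_per_view))
            out += np.roll(light_field[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```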
It will be a while before we have true light-field display technology, though.
- uclatommy (Honored Guest)
Just read up on light field technology. Very cool! I'm glad it's being worked on.
- ZeroWaitState (Honored Guest)
"geekmaster" wrote:
"uclatommy" wrote:
"geekmaster" wrote:
With eye-tracking, you could partially simulate depth of field, but everywhere you look would still be at infinity focus.
I guess eye-tracking might be a good stop-gap solution, but to me it doesn't seem like the right way to do it. You can look at something directly in front of you, then look past it into the background without really moving your eyes much. Eye tracking wouldn't catch this. I think you need the different angles of light so that when the lens in your eye changes shape, different things come into focus on the sensory nerves in the back of your eye.
What about having little tiny compartments of fluid in front of each pixel and a curved clear surface on one end, then changing the amount of fluid in each compartment depending on depth of the pixel?
Light field technology (simulating multi-axis "fly eye" lenses) is really the way to go. Holograms also provide light fields. The nice thing about light fields is that you can refocus your depth of field after-the-fact. Check out the Lytro cameras for an example. There is also a synthetic light field rendering project for the Rift posted at MTBS3D.
It will be a while before we have true light-field display technology, though.
Lytros are amazing; I have played with one at a local tech space. I look forward to seeing how light field tech progresses. Currently, the processing required to manipulate the data set is a bit of a deal-breaker, but as GPU density increases over the next 12-18 months this may become less of an issue.
- geekmaster (Protege)
"zerowaitstate" wrote:
"geekmaster" wrote:
... It will be a while before we have true light-field display technology, though.
Lytros are amazing; I have played with one at a local tech space. I look forward to seeing how light field tech progresses. Currently, the processing required to manipulate the data set is a bit of a deal-breaker, but as GPU density increases over the next 12-18 months this may become less of an issue.
Actually, since I posted that, the new Nvidia announcement about their light-field HMD(!) made me do some more research on light-field cameras and displays. There is some amazing DIY info in these posts:
viewtopic.php?f=20&t=2620&p=36049#p35577
viewtopic.php?f=20&t=2620&p=36049#p35589
It seems that light-field photos and light-field displays are just a grid of tiny lenses over a grid of tiny pictures, just like a fly's eye. Not very complex at all...
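As a toy illustration of that fly-eye layout: the frame a light-field display panel shows is just the per-lens micro-views tiled into one big image. A minimal NumPy sketch (it ignores the per-microimage flips and calibration offsets a real integral display would need):

```python
import numpy as np

def tile_microimages(views):
    """views: (U, V, h, w, 3), one tiny rendered view per microlens.
    Returns a (U*h, V*w, 3) panel image: the 'grid of tiny pictures'
    that sits behind the 'grid of tiny lenses'."""
    U, V, h, w, _ = views.shape
    # Rearrange so each (h, w) micro-view becomes one tile of the panel.
    return views.transpose(0, 2, 1, 3, 4).reshape(U * h, V * w, 3)
```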
- aero (Honored Guest)
I'm also really excited to see where light field technology can go. I posted about it back in June (https://developer.oculusvr.com/forums/viewtopic.php?f=33&t=1942) when I saw the SIGGRAPH emerging technologies preview video; it looks like a really promising technology.
- geekmaster (Protege)
It seems that a lens barrel extension can achieve a 500x plenoptic resolution increase, according to that video.
That makes me curious if such an adjustment can give a big perceived resolution boost for a plenoptic HMD too. Although having the lenses near the eyes is probably more important, if a choice needs to be made.
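For rough intuition on that trade-off (illustrative numbers, not taken from the video or the paper): traditional plenoptic rendering keeps one output pixel per microlens, while full-resolution rendering keeps an M x M patch from each microimage, so the gain scales like M squared:

```python
# Back-of-envelope plenoptic resolution trade-off (all numbers illustrative).
sensor_px = 4096 * 4096                       # total sensor pixels
microlenses = 180 * 180                       # microlens count
px_per_microimage = sensor_px // microlenses  # ~517 px, i.e. roughly 22x22

traditional = microlenses                 # one output pixel per microlens
patch = 20                                # usable patch size (scene-dependent)
full_res = microlenses * patch * patch    # M*M pixels kept per microimage

print(full_res / traditional)             # 400x here, same order as the ~500x claim
```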
Here is a link to the document at the end of the above video:
http://www.tgeorgiev.net/FullResolution.pdf
- sftrabbit (Honored Guest)
What we really need is the ability to affect the incidence of light from each pixel. Unfortunately, the mapping from pixels on the display to positions on the lens is not one-to-one. It's not like we can just deform the lens at certain points to affect the incidence of light coming from certain pixels. This is because the light from a pixel is emitted in all directions and passes through the lens in all places. The angles of those rays of light as they enter the eye are what determines the focus for that pixel. At the moment, the lens causes all rays of light from a single pixel to enter the eye in parallel, so the eye needs to focus at infinity. If you somehow changed part of the lens, it would affect some of the light from all pixels.
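In thin-lens terms, that collimation is exactly what happens when the panel sits at the lens's focal plane; a quick numeric sketch (focal length and distances are illustrative):

```python
# Thin-lens relation: 1/s_img = 1/f - 1/s_obj (all distances in meters).
# A negative s_img means a virtual image on the display side of the lens.
def virtual_image_distance(f, s_obj):
    if abs(s_obj - f) < 1e-12:
        return float("inf")  # panel exactly at the focal plane: collimated rays
    return 1.0 / (1.0 / f - 1.0 / s_obj)

f = 0.040                                  # 40 mm lens
print(virtual_image_distance(f, 0.040))    # inf: eye must focus at infinity
print(virtual_image_distance(f, 0.039))    # ~ -1.56: virtual image ~1.6 m away
```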
The next best thing we could do is to have a lens with variable focal length coupled with eye tracking, but this wouldn't really solve the problem. It might prevent eye fatigue, because the focal length of the lens can be changed depending on what you are looking at, so your eye can focus naturally. However, everything on the display would be in focus once you have focused on the object. You could potentially simulate the out-of-focus blur in software, but I imagine that this solution wouldn't work very well. There would likely be an uncomfortable delay between looking at a new object and focusing on it, while the render and focal length are adjusting and your eye isn't quite sure where to focus.
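A sketch of what that eye-tracked focus loop might look like, including the temporal smoothing that the refocus-delay concern suggests (the class and every name in it are hypothetical, not any real SDK's API):

```python
import numpy as np

class GazeFocusController:
    """Toy gaze-driven focus controller (illustrative only)."""
    def __init__(self, smoothing=0.2):
        self.focal_depth = 2.0   # meters; arbitrary starting focus
        self.smoothing = smoothing

    def update(self, depth_buffer, gaze_xy):
        """depth_buffer: HxW eye-space depths; gaze_xy: (x, y) pixel from the tracker."""
        gx, gy = gaze_xy
        y0, x0 = max(gy - 2, 0), max(gx - 2, 0)
        # Median over a small window: robust to tracker jitter and depth edges.
        target = float(np.median(depth_buffer[y0:gy + 3, x0:gx + 3]))
        # Low-pass the focus target so the lens command and the render blur
        # don't snap jarringly between fixations.
        self.focal_depth += self.smoothing * (target - self.focal_depth)
        return self.focal_depth  # drive the variable-focus lens and the DoF blur
```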
So I'm fairly certain that lenses won't be the solution here. Perhaps, as others have mentioned, light field displays will save the day.
- Entroper (Honored Guest)
I honestly prefer not to have depth of field simulated. You get better visual acuity and less eye strain when your eyes can just focus at infinity the entire time, no matter what you're looking at.