Forum Discussion
konchok (Honored Guest) - 13 years ago
Artificial Distance
I don't have my Rift yet, but I was thinking about the possibility of changing an object's offset between the two eye images to give the effect that it is farther away than it actually is. This could maintain the illusion that clouds are miles above your head without actually placing the clouds that far away. Is this something that's already done for skyboxes?
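Concretely, what I imagine is pushing an object out along its line of sight while scaling it up so it looks the same to each eye individually, which shrinks the left/right disparity. A rough sketch (the names are made up for illustration):

```python
import math

def push_to_distance(position, size, new_distance):
    """Move an object farther along its line of sight from the viewer,
    scaling it to subtend the same visual angle. Each eye's image is
    (nearly) unchanged, but the left/right disparity shrinks, so the
    object reads as farther away than it is."""
    d = math.sqrt(sum(c * c for c in position))
    k = new_distance / d
    return [c * k for c in position], size * k

# Clouds modelled 200 m up, pushed out to read like 5 km:
new_pos, new_size = push_to_distance([0.0, 200.0, 0.0], 80.0, 5000.0)
```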
12 Replies
- KuraIthys (Honored Guest): As far as I know, you can set things at 'infinity' without too much trouble.
The actual distance that corresponds to 'infinity' for a human being isn't something I know off the top of my head, but keep in mind that the difference between the left and right images gets larger the closer an object is.
So, for an object at 'infinity', both eyes see exactly the same image.
At the very least, then, you can set anything to that distance: it will effectively have no depth whatsoever, but as far as your visual system is concerned, it will be as far away from you as you can make out.
I don't know about messing with the distance of closer objects, but 'distant' objects can be closer than they would actually be in reality without any consequence.
For instance, let's say for the sake of argument that 'infinity' is actually 2 km (again, I don't know the actual distances). Anything beyond 2 km then carries no depth information anymore.
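As a rough sanity check on that guess: the vergence angle for an object at distance d is about 2·atan(IPD/2d), and once the disparity relative to infinity drops below typical stereoacuity (around 20 arcseconds), depth is indistinguishable from infinity. The 64 mm IPD and the 20 arcsec threshold below are assumed textbook values, not measurements:

```python
import math

IPD = 0.064                              # interpupillary distance, metres (assumed average)
STEREOACUITY = math.radians(20 / 3600)   # ~20 arcsec, a typical threshold

def vergence_angle(d):
    """Angle between the two eyes' lines of sight when fixating distance d."""
    return 2 * math.atan(IPD / (2 * d))

# vergence_angle(d) ~ IPD / d for large d, so the distance at which
# disparity vs. infinity falls below stereoacuity is roughly:
effective_infinity = IPD / STEREOACUITY
print(f"effective 'infinity' is about {effective_infinity:.0f} m")   # ~660 m
```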
Say you wanted to render the moon - it's about 384,000 km away on average.
Although you'd have to account for the difference in scale, you could render it at 2 km (in our example), because beyond that distance there is no depth information anymore.
Thus 2 km, or 20, or 200,000, makes no difference at all to the 3D effect; the only difference in practice is how large the object appears to be.
- jwilkins (Explorer): Disparity between images is a little more complicated than "more distant objects register closer together".
If your eyes converge on a point closer than infinity, then the relative positions at which far-away objects register on your retinas actually invert! You've crossed the streams :) This means that if you are focusing on some treetop, the moon beyond it will not appear at "infinity" but "beyond" it (for lack of a better way to put it).
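To make that concrete, here is a toy calculation of the sign flip, using an assumed 64 mm IPD (round numbers, not measured ones):

```python
import math

IPD = 0.064   # metres (assumed)

def vergence_angle(d):
    """Angle between the eyes' lines of sight when fixating distance d."""
    return 2 * math.atan(IPD / (2 * d))

def relative_disparity(d, fixation):
    """Disparity of an object at distance d relative to the fixated distance.
    Positive = uncrossed (registers 'beyond' fixation), negative = crossed."""
    return vergence_angle(fixation) - vergence_angle(d)

# Fixate a treetop at 10 m: the moon (384,000 km) shows uncrossed
# disparity -- it registers beyond the fixation point, not at zero.
print(relative_disparity(384_000_000, fixation=10))   # > 0, uncrossed
print(relative_disparity(2, fixation=10))             # < 0, crossed
```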
I'm actually really curious about how to handle this situation with the Rift. Without eye tracking it isn't possible to know what the user is converging on, but I do wonder whether, in certain situations, you might be able to simulate convergence by converging the cameras on whatever you are 99% certain the user is looking at.
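If you did try it, the amount of toe-in involved is small; a rough sketch (assumed IPD, made-up function name):

```python
import math

IPD = 0.064  # metres (assumed)

def toe_in_angle(fixation_distance):
    """Rotation (radians) to apply to each camera, in opposite directions,
    so their optical axes converge at the guessed fixation distance."""
    return math.atan((IPD / 2) / fixation_distance)

# e.g. if we are confident the user is reading a sign 1.5 m away:
print(math.degrees(toe_in_angle(1.5)))  # each camera rotates ~1.2 degrees inward
```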
The inability to dynamically change convergence is one of my biggest problems with artificial stereo images :(
- KuraIthys (Honored Guest): Ah, yes, I wasn't thinking about this that carefully... A bit of a problem, but you are right.
The vectors alone demonstrate it. A seemingly unrelated example makes the point: imagine a spaceship firing two lasers. If the beams are parallel they never meet, but if they converge at a specific distance, then past that distance they cross over and invert relative to where they started.
I guess I was thinking of a different issue, or maybe basing it on the fact that if you do casual experiments with focus, you see that a nearby object appears to 'move' if you alternate eyes, while one at a distance does not.
But now that you mention it, if you deliberately try and focus on the nearby object, the distant ones appear to move as well.
As to convergence in the headset itself, that's a different question. It seems to have been built so that the display is focused at infinity. As a result, I suspect you'd want to avoid a mismatch between focus and convergence (a known issue with all stereoscopic 3D systems).
I don't know if you can actually set the image in the headset to converge at a different point than the optics are designed for.
If you can, you run into the problem of deciding what distance to assume for convergence; if not, you'll be forced to stick with whatever works for the optics in the headset (which would seem to be infinity, or specifically, both eyes parallel - a case in which my assumption does hold, because there is no crossover point if the eyes remain parallel at all times).
- jwilkins (Explorer): I had not tried the experiment of focusing on a near object and then alternately closing each eye. I just knew this from the mathematics of computing a Z value from disparity, but your experiment makes the idea more real.
I am curious how the Rift handles this. I don't see exactly how you keep somebody's eyes from crossing. I guess you need to compute your scene so that fixating an object never requires one's eyes to cross, but is that generally possible? Your nose obviously requires convergence in order to see it (or does it?).
Just wish my Rift would arrive so I could figure some of this stuff out for myself :)
- KuraIthys (Honored Guest): Heh. From what I've been seeing in the other thread here, there are several issues going on with the headset:
The focal plane of the optics is such that to focus on the screen your eyes have to be focused at infinity (focus and convergence are independent phenomena, but a mismatch between them is a known cause of headaches in stereoscopic 3D).
Meanwhile, it would appear that, mathematically speaking, when you're setting up a camera rig for the Rift, the cameras should not have any convergence but should remain parallel.
This does not mean, however, that your eyes won't converge when looking at certain parts of the scene; it means you don't have to account for that in software, because the math of projecting the image into the headset already takes care of it.
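For concreteness, a parallel rig looks something like this - a pure sideways offset per eye, no rotation toward a convergence point (the names are illustrative, not from any SDK):

```python
import numpy as np

IPD = 0.064  # metres (assumed average)

def eye_view_matrix(head_view, eye):
    """Per-eye view matrix for a parallel stereo rig.

    head_view -- 4x4 world-to-head view matrix
    eye       -- -1 for the left eye, +1 for the right eye

    The cameras stay parallel: each eye is a pure sideways translation of
    the head camera, with no toe-in toward a convergence point.
    """
    offset = np.eye(4)
    offset[0, 3] = -eye * (IPD / 2)   # shift the world opposite the eye
    return offset @ head_view
```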
So, the headset display creates an image focused at infinity, with no convergence of its own (apparently possible because each eye sees an independent image, whereas for a 3D TV you have to account for the distance between the viewer and the screen).
Obviously, one implication of all this is that for anything closer than infinity, your eyes still have to focus at infinity to see it sharply, while they might have to converge on something much closer...
But that's unavoidable until we get either adaptive-focus devices or ones that can project holographic images (which avoid the issue because the light they emit behaves correctly for the virtual distances involved).
As to the question in the OP, I guess I was wrong about this, and using fake distances may be more of a problem than I thought - although as far as the optics in the headset are concerned, any 3D point has to be closer than the screen (even though the screen appears to be at infinity).
- jwilkins (Explorer):
"KuraIthys" wrote:
... (focus and convergence are independent phenomena, but a mismatch between them is a known cause of headaches in stereoscopic 3D) ...
Yeah, I know. I cringe every time I conflate them. I'll try to be more careful in the future, because I don't want to miscommunicate.
- darkfoton (Honored Guest): Hi guys. As I understand it, the Oculus Rift does not handle accommodation (focusing at different depth planes). I think it's the last big display-related problem in virtual reality, and I think I know how to solve it. The result would be truly realistic 3D images without a lock to a static focal plane. Are you interested?
- geekmaster (Protege):
"darkfoton" wrote:
Hi guys. As I understand it, the Oculus Rift does not handle accommodation (focusing at different depth planes). I think it's the last big display-related problem in virtual reality, and I think I know how to solve it. The result would be truly realistic 3D images without a lock to a static focal plane. Are you interested?
Well, Duh! Why the tease? No useful details? Please, tell us more! Something USEFUL so we can understand what you are claiming that you THINK you know?
This sounds ripe for an active discussion that may grow into something useful. Bring it on! :D
- darkfoton (Honored Guest): OK, I am glad to see you are interested. As you may know, there are anisotropic (birefringent) materials whose refractive index depends on the light's polarization. If we make a lens from such a material, the lens will have a different focal distance for each polarization. All that remains is to control the polarization of each pixel of the screen. That is not a problem, because LCD monitors are based on exactly this kind of control; we need only remove the polarizer from one side of a liquid-crystal panel.
So the optical scheme of the VR glasses would be one of the following:
1) OLED display - LCD layer without an exit polarizer - anisotropic lens;
2) LCD display - LCD layer without entry and exit polarizers - anisotropic lens.
As you can see, the second liquid-crystal layer is used to rotate the polarization of each pixel's light and thereby control the focal plane per pixel.
I think it is not so easy (or cheap) to get an anisotropic crystal and grind a lens from it, so for first experiments you could use the Kerr effect (the quadratic electro-optic effect) or the Pockels effect to obtain the anisotropy.
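To get a feel for the numbers, here is a back-of-the-envelope calculation with the plano-convex thin-lens approximation f = R/(n - 1) and the well-known ordinary/extraordinary indices of calcite (the radius of curvature is just an assumed value):

```python
# Plano-convex thin-lens approximation: f = R / (n - 1).
# Calcite's two refractive indices (ordinary / extraordinary rays):
N_ORDINARY = 1.658
N_EXTRAORDINARY = 1.486
R = 0.05   # assumed radius of curvature, metres

f_o = R / (N_ORDINARY - 1)       # ~7.6 cm for one polarization
f_e = R / (N_EXTRAORDINARY - 1)  # ~10.3 cm for the orthogonal one
print(f"focal lengths: {f_o * 100:.1f} cm vs {f_e * 100:.1f} cm")
```

Note this gives two discrete focal planes; an intermediate polarization angle splits the light between the two planes rather than focusing somewhere in between.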
Thank you for your interest, and don't kick me too hard if something is wrong; English is not my native language :)
- guysherman (Honored Guest): I wonder if this challenge also lends itself to a cheap way to do depth-of-field tricks, like when someone is supposed to wake up groggy, etc. Presumably, if you forced convergence of the virtual "cameras" or "eyes", you would force certain things to be in focus and others not (by tricking the user's visual cortex).
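A cheap software-only version of that, independent of any camera convergence, is to blur each pixel in proportion to its dioptric distance from a chosen focal plane; a rough sketch (the strength constant is just a tuning knob, not a physical quantity):

```python
def blur_radius(depth_m, focus_m, strength=3.0):
    """Blur radius in pixels for a simple depth-of-field ramp.

    Blur grows with the dioptric distance |1/d - 1/f| from the focal
    plane, which roughly matches how optical defocus behaves.
    """
    return strength * abs(1.0 / depth_m - 1.0 / focus_m)

# Groggy wake-up: focus locked at 0.5 m, so the far wall smears out.
for d in (0.5, 1.0, 4.0, 50.0):
    print(d, round(blur_radius(d, focus_m=0.5), 2))
```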
As for the issue of where the user's eyes are converging, I don't think you need to worry too much. I did a bit of a literature review on HMDs when I was at university (about 6-7 years ago), and one of the common problems is that your brain knows when you're focusing on a screen simulating distance, rather than actually exercising your vergence reflex (rotating the eyes to aim at near or far points) to see things that are genuinely in the distance. The result is that your brain, a wonderfully adaptive system, quite quickly suppresses the vergence reflex (temporarily... ish), which actually creates a bit of a hazard for driving etc. afterwards (it takes longer to come back than it does to switch off). The net effect: your users' eyes stop changing what they're converging on and delegate that function to the computer.