Forum Discussion
Frog_Jacuzzi
11 years ago · Honored Guest
"Has it occured" questions - focus pulling & mirror tricks
I have two "Has this idea occurred?" type questions, which depend on a good understanding of stereo, focus-pulling and the difference between them. So I brought them here even though neither would have 'stereo' in its title without forcing it in.
Augmented reality headsets, starting with... Reality headsets! (I would go to work and drive my car in one)
Searching the forums, I see evidence of people putting cameras on headsets for the beginnings of augmented reality, a good starting point that makes AR look a lot closer than Google Glass does. But I hear talk of 'impossible' physics problems, because you can't get the camera to the correct eye position, with the Rift being in the way.
Have tricks with mirrors occurred to anyone experimenting with this? TARDIS-like spatial shenanigans can be achieved for light only, using mirrors. 45-degree mirrors facing outwards towards inward-facing cameras could surely do the trick. The mirrors would have to be large enough to cover the FOV of each camera, and the necessary size depends on the distance from camera to mirror, which should be the same as the distance from the mirror to your eye.
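If it helps to put numbers on it, here's a back-of-envelope sketch of the geometry (entirely my own; the FOV and distances are placeholders): a camera with horizontal FOV theta at distance d from the mirror needs the mirror to cover a frustum width of 2 * d * tan(theta/2), and the 45-degree tilt stretches the footprint along the tilted axis by roughly sqrt(2).

```cpp
// Back-of-envelope mirror sizing for the 45-degree periscope idea.
// All numbers are made up for illustration.
#include <cmath>
#include <cstdio>

int main() {
    const double PI             = 3.14159265358979323846;
    const double fovDegrees     = 90.0;   // assumed camera horizontal FOV
    const double cameraToMirror = 0.05;   // metres; should equal mirror-to-eye

    // Width of the camera's view frustum where it meets the mirror plane.
    double frustumWidth = 2.0 * cameraToMirror
                        * std::tan(fovDegrees * PI / 180.0 / 2.0);
    // The 45-degree tilt stretches the footprint along the tilted axis.
    double mirrorWidth  = frustumWidth * std::sqrt(2.0);

    std::printf("frustum width at the mirror: %.3f m\n", frustumWidth);
    std::printf("minimum mirror width (tilted axis): %.3f m\n", mirrorWidth);
    return 0;
}
```

Even at 5 cm with a 90-degree camera that comes out around 14 cm of mirror per eye, which is why these rigs get bulky fast.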
I'm not sure if I should add a diagram, or if everyone was with me from the word 'mirrors' and I'll just be told it's been done. Request a diagram and I'll make the effort ;)
Focusing shenanigans
My second "has it occurred?" type question is about focus-pulling. If a 3D engine is capable of focus pulling, perhaps it can be synced with an optical change in the Rift, to change the apparent distance to the screen without changing its angular size? Is there a lens configuration that can allow such changes or am I inventing optics too freely?
Does anyone have anything like it in development? For the time being, focusing on whatever is centred in the viewport could add a great deal of realism without needing eye tracking. It would never be completely correct and indistinguishable from a hologram without directly detecting how the eye is focused, but you could get very, very close by tracking the direction of the eye -- the (x, y) coordinates it's looking at. Experienced chameleons and Magic Eye viewers could probably find a way to tell it's not real or holographic, but it wouldn't matter to most people. When you deliberately pull focus incorrectly and blur things, they'll blur differently from reality or a hologram... so don't do it.
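For what it's worth, the centre-of-viewport version needs no new hardware at all. A minimal OpenGL sketch (all names are mine, nothing from the Oculus SDK; assumes a standard perspective projection with known near/far planes): read the depth buffer at the screen centre, convert it back to an eye-space distance, and ease the focal distance towards it so the focus pulls rather than snaps.

```cpp
#include <GL/gl.h>
#include <cmath>

static float g_focalDistance = 1.0f;  // smoothed focus target, eye-space units

// Convert a [0,1] depth-buffer value back to an eye-space distance,
// assuming a standard perspective projection.
static float linearizeDepth(float d, float zNear, float zFar) {
    float ndc = d * 2.0f - 1.0f;
    return (2.0f * zNear * zFar) / (zFar + zNear - ndc * (zFar - zNear));
}

// Call once per frame, after rendering, before applying depth of field.
void updateFocus(int viewportW, int viewportH,
                 float zNear, float zFar, float dt) {
    GLfloat depth = 1.0f;
    glReadPixels(viewportW / 2, viewportH / 2, 1, 1,
                 GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    float target = linearizeDepth(depth, zNear, zFar);

    // Exponential ease so the pull is gradual, like a camera operator's.
    float rate = 1.0f - std::exp(-4.0f * dt);
    g_focalDistance += (target - g_focalDistance) * rate;
}
```

If eye tracking arrives later, you'd just replace the viewport centre with the tracked (x, y) and keep the rest.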
An in-game Magic Eye picture would still work fine. It would even give viewers the same discrepancy as the real thing: their eyes converge behind the wall in stereo while the focus of each eye stays trained on the wall itself.
I ask because...
I have two friends with Rifts, I've been programming 3D graphics in various languages for years, and have finally come to OpenGL. I'm doing things properly this time, making my final engine to end all my engines, and I'll be testing occasionally on the Rift.
So I'm thinking about A. the extent to which I can inform users of their real-world whereabouts, in 1st person or, for instance, 2nd person on an in-game screen showing their webcam feed, and B. how useful it is to get focusing to work well.
Focusing can be approached as one of those multi-sampling problems, where you'd have stereo gone mad with 20 thousand tiny cameras instead of just two. Such sampling would crunch processors and produce artifacts very reliably. The integration problems to implement a symbolic solution can easily get frightening with the wrong approach. So naturally, if I think I might have a good approach, I want to try to do it, and I want to know what other technology it can tie in with later.
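That sampling idea scales down gracefully, though: instead of 20 thousand cameras you can jitter one camera over a small lens aperture and average a few dozen renders, which is the classic accumulation-buffer depth-of-field trick. Points on the focal plane project identically in every sample, so they stay sharp; everything else smears into blur. A sketch of the sample pattern (names and numbers are my own):

```cpp
#include <cmath>
#include <cstdio>

struct LensOffset { double x, y; };

// Sample i of n on a disc of radius 'aperture', laid out on a golden-angle
// spiral so the samples cover the lens evenly even for small n.
LensOffset lensSample(int i, int n, double aperture) {
    const double goldenAngle = 2.399963229728653;
    double r = aperture * std::sqrt((i + 0.5) / n);
    double a = i * goldenAngle;
    return { r * std::cos(a), r * std::sin(a) };
}

int main() {
    const int    samples       = 32;    // instead of 20 thousand cameras
    const double aperture      = 0.01;  // lens radius, eye-space units
    const double focalDistance = 2.0;   // the plane that stays sharp

    for (int i = 0; i < samples; ++i) {
        LensOffset o = lensSample(i, samples, aperture);
        // Per sample: translate the eye by (o.x, o.y), shift the frustum
        // window by (-o.x, -o.y) * zNear / focalDistance so the view still
        // converges on the focal plane; render, accumulate, then average.
        std::printf("sample %2d: eye offset (%+.4f, %+.4f)\n", i, o.x, o.y);
    }
    return 0;
}
```

The sample count is the knob between 'crunching processors' and real-time; 32 views already gives a usable blur.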
I know optics are heavy and moving parts are heavy, but I have a strong neck and I really want focus pulling. So... what's happening there then? I've already searched for answers, but fruitlessly.