Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
Anonymous
13 years ago

Idea for Active 3D - parallel vs convergence

In photography and film alike, some people align their cameras to capture parallel images, while others aim their cameras at a converging focal point. I am in the latter camp, because I believe in letting our eyes work naturally and in recreating what the eyes actually see.

Our eyes naturally converge on whatever we choose to focus on, and by converging on a thing, each eye captures a slightly different perspective of that thing... and of the world.

Beyond that, converging on a focal point is the means by which the author directs the viewer's attention in their media. Since a parallel rig has no convergence point, focus is effectively at infinity, so everything is in focus.

How, then, would you produce a render with a depth-of-field effect in a parallel viewing state without adding discomfort? Has anyone tested DOF in a demo?

The Rift, as built, forces your eyes to remain physically locked in a parallel viewing state regardless of where the virtual focal point is. That strains the eye and lies to your brain: your eyes stay parallel while being shown a converging perspective of the environment.

I wonder: if you removed the divider in the Rift and integrated active-shutter technology, would there be any significant improvement to the user experience? A plus would be full resolution per eye! Though it would add to the cost.

7 Replies

  • The eyes are free to converge on objects as they would in real life - this is where much of the 3D effect comes from. The virtual cameras are parallel because the device you are displaying those images on is a flat plane. But the eyes can look wherever they like on those planes, and vergence works correctly, even though the renderer doesn't know (or care) what the eyes are actually looking at.

    The only thing the Rift forces is the focus - it remains at infinity. This is a physically difficult thing to change. There are experimental HMDs that can change the focus of the image, and even ones that can display multiple focal depths at once. But they're extremely experimental, and all the ones I know of are too big to mount on your head (instead you strap your head to them!).
  • Your camera has a flat sensor, but your eyes are not flat, so you can't exactly recreate the image your eyes would see either way.

    You have to correct the captured images whether you shoot parallel or converged: for a parallel capture, you shift the images left/right to recreate convergence; for a converged capture, you fix the geometric (keystone) distortion.

    Actually, you get the same result after you apply all of the needed corrections. :)
    Try capturing a grid with both rigs and compare the pairs after correction.
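    The "same result after correction" claim can be checked numerically with a simple pinhole-camera model. This is a minimal sketch, not anyone's production pipeline; the focal length, baseline, and test point are made-up numbers:

```python
import numpy as np

def project(K, R, t, P):
    """Pinhole projection x ~ K (R P + t); returns 2D pixel coordinates."""
    p = K @ (R @ P + t)
    return p[:2] / p[2]

f = 800.0                                  # focal length in pixels (assumed)
K = np.array([[f, 0.0, 0.0],
              [0.0, f, 0.0],
              [0.0, 0.0, 1.0]])

C = np.array([0.032, 0.0, 0.0])            # right camera centre (half the baseline)
P = np.array([0.1, 0.05, 2.0])             # a test point 2 m in front of the rig

# Parallel rig: the right camera is only translated, never rotated.
x_parallel = project(K, np.eye(3), -C, P)

# Toed-in rig: the same camera is also rotated about the vertical axis
# to aim at a convergence point on the rig's axis, 2 m ahead.
theta = np.arctan2(C[0], 2.0)
c, s = np.cos(theta), np.sin(theta)
R_toe = np.array([[c, 0.0, -s],
                  [0.0, 1.0, 0.0],
                  [s, 0.0, c]])
x_toed = project(K, R_toe, -R_toe @ C, P)

# Rectification: the homography H = K R^T K^-1 undoes the toe-in
# rotation, mapping the converged image onto the parallel one.
H = K @ R_toe.T @ np.linalg.inv(K)
xh = H @ np.append(x_toed, 1.0)
x_rectified = xh[:2] / xh[2]

print(np.allclose(x_rectified, x_parallel))  # True — the two pipelines agree
```

    Because the toe-in is a pure rotation about the camera centre, the correction is a single homography per image, which is why a grid shot with both rigs should line up after rectification.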
  • "spcunha" wrote:
    The Rift, as built, forces your eyes to remain physically locked in a parallel viewing state regardless of where the virtual focal point is. That strains the eye and lies to your brain: your eyes stay parallel while being shown a converging perspective of the environment.


    This isn't right. Your eyes do converge inside the Rift. They converge on the object you want to look at. Yes, the projections for each eye are parallel, but they're supposed to be because your eyes are axis-aligned with the display. On the other hand, for a 3D TV, you would want to converge the projection to match the convergence of the viewer's eyes on the TV.
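    "Parallel" here means each eye's frustum points straight ahead but is shifted off-centre relative to its half of the screen. A sketch of such an asymmetric (off-axis) projection matrix — the screen size, eye relief, and IPD below are illustrative assumptions, not actual Rift specs:

```python
import numpy as np

def off_axis_frustum(l, r, b, t, n, f):
    """OpenGL-style asymmetric perspective matrix from near-plane extents."""
    return np.array([
        [2*n/(r-l), 0.0,       (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b), (t+b)/(t-b),  0.0],
        [0.0,       0.0,      -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,      -1.0,          0.0]])

# Illustrative numbers (assumptions, not real hardware values):
ipd, screen_w, screen_h = 0.064, 0.15, 0.094   # metres
eye_to_screen, near, far = 0.05, 0.1, 100.0

# The right eye's half of the screen spans [0, screen_w/2] measured from
# the screen centre, while the eye itself sits at ipd/2 — so the frustum
# is shifted sideways while its view axis stays parallel to the other eye's.
eye_x = ipd / 2
scale = near / eye_to_screen                   # map screen extents to near plane
l = (0.0 - eye_x) * scale
r = (screen_w / 2 - eye_x) * scale
b = -(screen_h / 2) * scale
t = (screen_h / 2) * scale

M = off_axis_frustum(l, r, b, t, near, far)

# Sanity check: a point on the near plane at the frustum's left edge
# lands on the left edge of normalized device coordinates.
clip = M @ np.array([l, 0.0, -near, 1.0])
print(clip[0] / clip[3])  # ≈ -1.0
```

    The asymmetry does the job convergence would otherwise do: a point straight ahead of the viewer projects to slightly different pixels in each eye, and the eyes converge on it naturally.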
  • "spcunha" wrote:

    I wonder: if you removed the divider in the Rift and integrated active-shutter technology, would there be any significant improvement to the user experience? A plus would be full resolution per eye! Though it would add to the cost.


    I don't think this would give you any sort of improvement with respect to depth-of-field, unless you were doing eye tracking and updating the cameras accordingly, because you are still looking at a flat plane, thus you don't get any real changes in parallax as you look around.

    The other thing to consider is that you'd need a display running at ~120 Hz. Given that latency is a big deal, I wouldn't be surprised if we eventually get 120 Hz displays anyway, but then you have to ask how you want to spend that reduced frame time: on more detail per eye, or on reduced end-to-end latency? I guess it would come down to which is breaking the illusion most.
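    The trade-off is easy to put numbers on (a back-of-envelope sketch, assuming the ~120 Hz panel the reply mentions):

```python
# Back-of-envelope: what an active-shutter scheme does to the numbers.
refresh_hz = 120                     # the ~120 Hz panel assumed above
frame_ms = 1000 / refresh_hz         # render-time budget per displayed frame
per_eye_hz = refresh_hz / 2          # shutter glasses alternate eyes

print(round(frame_ms, 2))            # 8.33 ms to render each frame
print(per_eye_hz)                    # each eye only updates at 60 Hz
```

    So even at 120 Hz, time-multiplexing leaves each eye with the same 60 Hz update rate a dedicated half-panel gives, which is part of why the shutter approach isn't an obvious win.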
  • My thoughts...
    Why not do what the TVs and movie theatres have done and offer polarised
    output (2D-to-3D conversion) as an option for 3D viewing instead?
    It would be a heck of a lot cheaper and could be done in software instead
    of needing hardware mods. That way you have a choice and don't need to invest
    in ANOTHER set of 3D glasses and shutter controllers, and it sure beats the
    heck out of the anaglyph red/blue system.
    :shock:
    Qosmius
    Honored Guest
    For active 3D effects, I think you'd need lenses that change their dioptric strength all the time, so you'd have different focal points depending on where you look. Or a screen that moves closer and further away, I guess.