Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
experience
Honored Guest
11 years ago

Experimenting with perception

Once upon a time, before I had ever heard of Oculus, I was prototyping my own VR designs. My design was different from the amazing thing the Rift has become, but while tinkering I nonetheless learned all sorts of interesting things that carry over. Thanks to Oculus, many of these lessons have evolved and been improved upon, some even becoming common knowledge; others I haven't really seen mentioned. I'm fully behind Oculus (I stopped prototyping when I saw what they were doing), and I thought I might occasionally share some of the things I found interesting during my experiments.

I've always been fascinated by display technologies, but even more fascinated by which elements of those technologies tangibly improve the experience. In my field of work (product development and certification), I receive displays from major vendors before they are released, on an almost weekly basis (according to Sony, I was the first person in the US to ever receive a 4K TV). To be honest, as much as I love that, I haven't felt excited about a standard TV/projector-style display in a long time, and the reason is the experience: the improvements to the experience are marginal as the technologies advance, and in many ways it has plateaued. VR, on the other hand, is a revolution in the way we perceive a display, and every improvement is felt by orders of magnitude, to the point that you can develop presence.

I find the psychological and physiological effects of VR/perception fascinating. This leads to one interesting thing I noticed while experimenting:
I was measuring how the IPD changes based on the distance of the object being viewed, and attempting to put together data on how this affects each eye's line of sight. To perform some tests, I put together a series of clear panels in layers, and would put a mark on a central panel to establish what my eyes were focusing on. I would then mark the panels in front of and behind this marker at varying depths (using a dry-erase marker) to establish the direction of each eye's line of sight. This gave me a visual representation of my eyes' convergence.

One of the first things I noted is that I could instinctively allow my eyes to converge more or less on the object of focus. My eyes would still be focused to the correct distance, but, similar to viewing cross-eye stereo 3D images, you can allow your eyes' convergence to widen or narrow (side note: this is similar to what happens if you configure your IPD wrong in the Oculus). I also found a tendency for this to happen automatically based on what type of object (small text or a large picture) I was focusing on.

Here's where it gets interesting: I started to notice that manually controlling this ability seems to have a physiological effect on me. For example, while scanning a room, if I attempt to keep my convergence wide, I seem to have a slight edge at building a mental representation of the room versus making no attempt to modify my convergence. The testing was not scientific; it simply involved me scanning rooms/pictures and my wife questioning my memory. In further testing, I noticed that my pulse seemed to actually increase when I focused on keeping my convergence wider (my wife was pregnant at the time, so we had all sorts of health gadgets). It seemed to bring on a somewhat heightened state of awareness. Whether the heightened state is caused by psychological or physiological changes, I couldn't say without more testing.
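For anyone who wants to put rough numbers to the convergence those panels were visualizing, the total vergence angle follows from simple trigonometry on the IPD and the fixation distance. This is my own sketch of the geometry (symmetric fixation straight ahead), not something from the original experiment:

```python
import math

def vergence_angle_deg(ipd_mm: float, distance_mm: float) -> float:
    """Total angle between the two eyes' lines of sight when both
    fixate a point straight ahead at the given distance."""
    half_angle = math.atan((ipd_mm / 2) / distance_mm)
    return math.degrees(2 * half_angle)

# With a 64 mm IPD, vergence falls off quickly with distance:
for d_mm in (250, 500, 2000, 6000):
    print(f"{d_mm / 1000:>4} m -> {vergence_angle_deg(64, d_mm):.2f} deg")
```

Most of the vergence change happens within arm's reach, which may be why a misconfigured IPD is felt most strongly on near objects.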
I found plenty of other interesting things I'll share another day. I'm interested to see if anyone else has some odd experiments to share.

5 Replies

  • No actually, unfocusing decreases your awareness and your pulse increases because you're concentrating really hard against your built-in automated functions.
  • "raidho36" wrote:
    No actually, unfocusing decreases your awareness and your pulse increases because you're concentrating really hard against your built-in automated functions.


    What about maintaining focus and modifying convergence? That's what I'm talking about here.
  • The same? You really don't need to do that, what your eye does automatically is the best option.
  • "raidho36" wrote:
    The same? You really don't need to do that, what your eye does automatically is the best option.


    That I don't doubt; we're extremely complex beings, and we don't take well to "modification". I'm not implying that you need to do anything here, merely sharing interesting results.
  • I think that the issue may be more of a composition one. Just as in stereo film, you wouldn't want to compose a close foreground object (I think the Oculus Best Practices guide says 20-25 cm for the minimum distance to the viewer) against a fully focused object that is 200 m in the background.

    When using a completely 3D environment, this is not as much of an issue, because the Oculus image distortion matrix is applied to every frame as it's updated. With these 360 camera rigs, the distortion is already "baked in", if you will, by the distortion of the camera lens plus the stitching software.
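    The per-frame versus baked-in distinction above can be sketched with a toy version of the radial warp a rendered frame goes through. This is a generic polynomial barrel-warp model with illustrative coefficients, not the actual Oculus SDK shader:

    ```python
    def warp_radial(x: float, y: float, k1: float = 0.22, k2: float = 0.24) -> tuple:
        """Scale a normalized screen coordinate (origin at the lens
        center) by a polynomial in r^2 -- the classic barrel-warp
        form. k1/k2 here are made up, not calibrated lens values."""
        r2 = x * x + y * y
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return (x * scale, y * scale)

    # A live 3D renderer can apply this warp to every frame it draws;
    # a pre-stitched 360 video cannot, because its pixels were already
    # shaped by the physical camera lenses and the stitching software.
    center = warp_radial(0.0, 0.0)  # unchanged at the lens center
    edge = warp_radial(0.7, 0.0)    # pushed progressively outward
    ```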