Forum Discussion
MaXvanHeLL · Honored Guest · 10 years ago
DirectX Stereoscopy Transformation Projection
hey there!
Basically, I have set up a simple "normal" monocular render engine, which worked fine. I wanted to support the Oculus Rift, so I rendered the scene twice: once with a camera translation_x of ~-0.032 and once with ~+0.032, to get an IPD of 64 mm (still using my old monocular projection matrix).
My problem: when I render a simple object at the origin, for example, and translate the camera a little to the left and a little to the right, I don't get a 3D view. Okay, maybe the IPD needs adjusting, but here comes the part that completely confuses me: when I move the virtual camera backwards, AWAY from the 3D model, the two images of the model get even more separated as the camera distance increases. I am still not sure why the separation grows with camera distance; that should not be the case.
That's why I have researched stereoscopy a lot, and I found out about stereoscopic transformation projection, like this: http://paulbourke.net/exhibition/vpac/theory2.gif
Further, I found this: http://www.nvidia.com/content/GTC-2010/ ... TC2010.pdf which mentions that the needed stereo projection matrix is the same as the monocular projection matrix, just shifted along the x-axis by adjusting the Matrix._13 and Matrix._14 cells. I tried all this, but it didn't work (I could not see any difference).
I have also set up two posts on StackOverflow; the problem drives me crazy. I am not even sure what causes this effect or how to prevent it.
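For reference, the off-axis (parallel asymmetric frustum) approach described in those links can be sketched roughly like this. This is my own sketch, not code from any SDK; the names (stereoFrustum, convergence, eyeSep) and the convergence-plane assumption are mine:

```cpp
#include <cassert>
#include <cmath>

// Sketch of an off-axis stereo frustum in the spirit of Paul Bourke's notes.
// eyeSign is -1 for the left eye, +1 for the right eye.
struct Frustum { double left, right, bottom, top; };

Frustum stereoFrustum(double fovY, double aspect, double zNear,
                      double convergence, double eyeSep, int eyeSign)
{
    double top   = zNear * std::tan(fovY * 0.5);
    double halfW = top * aspect;
    // Shear the frustum toward the other eye so both view volumes coincide
    // at the convergence (zero-parallax) plane. The shift is scaled to the
    // near plane and shrinks as the convergence distance grows.
    double shift = (eyeSep * 0.5) * zNear / convergence;
    Frustum f;
    f.left   = -halfW - eyeSign * shift;
    f.right  =  halfW - eyeSign * shift;
    f.bottom = -top;
    f.top    =  top;
    return f;
}
// The camera itself is still translated by ±eyeSep/2 along its local right
// axis. The sheared frustum replaces "toe-in" camera rotation and keeps the
// two view directions parallel, which avoids vertical parallax.
```

From these bounds you could then build the per-eye off-center projection with something like D3DXMatrixPerspectiveOffCenterLH, instead of patching individual matrix cells by hand.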
[1]: http://stackoverflow.com/questions/3129 ... perception
[2]: http://stackoverflow.com/questions/3125 ... culus-rift
I really hope somebody can help me here, I've been stuck for quite a while.. thanks! ;)
4 Replies
- cybereality (Grand Champion): Check how it's done in the OculusRoomTiny sample app included in the SDK.
- MaXvanHeLL (Honored Guest): Yeah, I already checked out that sample. The thing is, my engine actually works completely differently, and all the DirectX-related stuff is in the background. I also once used the same function for creating the projection matrix as in the Oculus Tiny demo, which uses EyeDescription[eye].fov. But I could not see any difference.
Currently I have all Oculus-related stuff deactivated, to reduce the error-prone variables in my "equation" here. The problem is now purely a DirectX problem. It can't be that hard: I have a working monocular engine that already renders the scene twice into different render textures with slightly translated cameras.. there can't be much missing. I actually thought a stereoscopic transformation projection matrix would solve the problem..
- cybereality (Grand Champion): Can you show me a picture? I'm not sure I totally understand what the issue is.
Maybe you are translating in the wrong coordinate space. Meaning: you should be moving the camera to the left or right in the camera's local space; the world x-axis may not be parallel to the camera.
- MaXvanHeLL (Honored Guest): First of all, many thanks for your answer! I've really been stuck on that problem for a long time.
I mentioned two StackOverflow posts above, with links. I have already added some pictures there; hopefully you can see what my issue is. I also showed there how I am calculating the view and (mono) projection matrices; it's all pretty standard.
Many thanks!
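cybereality's point about coordinate spaces can be sketched like this. A minimal, self-contained example (the Vec3 type and eyePosition helper are my own, not engine code), assuming a left-handed D3D-style look-at basis: each eye is offset along the camera's local right vector, not the world x-axis:

```cpp
#include <cassert>
#include <cmath>

// Minimal vector math for the sketch.
struct Vec3 { double x, y, z; };

Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

Vec3 normalize(Vec3 v) {
    double l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / l, v.y / l, v.z / l };
}

// Given a camera position, look-at target and up vector, return the eye
// position shifted by halfIpd along the camera's LOCAL right axis
// (the same xaxis that D3DXMatrixLookAtLH derives: cross(up, forward)).
// Use halfIpd = -0.032 for the left eye and +0.032 for the right eye.
Vec3 eyePosition(Vec3 pos, Vec3 target, Vec3 up, double halfIpd)
{
    Vec3 forward = normalize({ target.x - pos.x,
                               target.y - pos.y,
                               target.z - pos.z });
    Vec3 right = normalize(cross(up, forward)); // left-handed convention
    return { pos.x + halfIpd * right.x,
             pos.y + halfIpd * right.y,
             pos.z + halfIpd * right.z };
}
```

If the camera always looks straight down +z, this reduces to a plain world-x translation; as soon as the camera rotates (e.g. head tracking), a world-x offset no longer matches the eye axis, which is the bug cybereality is suggesting.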