Forum Discussion
ebo60
12 years ago · Honored Guest
Rendering Webcams To Oculus
Hello,
I have a bit of code which gets data from two webcams and renders the images to the screen side-by-side in C#. I'm interested in buying an Oculus to render one webcam per eye (stereoscopic vision of the real world via a 3D webcam, rather than the current Oculus developers' preference for rendering virtual scenes through OpenGL/DirectX pipelines). As I understand it, the Oculus simply shows the left half of the screen to the left eye and the right half to the right eye. Does this mean that I could use any program to render an image in split-screen and it would work well with the Oculus? Will the Oculus perform any necessary transforms to make this look OK?
For example, if I open up MS Paint and paste in two unmodified webcam images side by side, will it look as good as I'd hope?
2 Replies
- SubBoy · Honored Guest
The short answer is no. There are some extra steps to go through first to get the two images to look right (unless you happen to have the right lenses on the webcams, like a fisheye lens, but probably not). The good news is there are lots of tools and resources to apply this distortion in real time. It's "normally" applied to the 3D scenes after they've been rendered, so using it on a video feed should work as easily as anything else. Peruse the forums here and in the WIP/Showcase sections; there are lots of examples of what the Rift needs to look right to the user. Here are a couple of topics that might be of interest:
viewtopic.php?f=20&t=1551
viewtopic.php?f=30&t=298
viewtopic.php?f=29&t=1234
And... lots of others. Pretty much the whole forum is about the topic you're asking about: getting a normal picture into the Rift. That, and using the head tracking aspect... and the nuances of developing games for VR... and... Well, there's lots of fun stuff. I'm still waiting on my Rift, but I know I for one will be ordering a couple of cameras here soon to mess around with.
- ebo60 · Honored Guest
Thanks a bunch SubBoy, you answered my question perfectly! Those links were very helpful, particularly the second one. I suppose that I can rewrite their fragment shader to work on my image arrays, so things seem pretty doable (though not out of the box). Time to order!
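For readers landing here later: the "distortion" the replies refer to is the radial barrel warp that pre-compensates for the Rift's lenses, usually done in a fragment shader. Below is a minimal CPU-side sketch of that warp in Python (the original poster's code is C#, but the math carries over directly). The polynomial form `r' = r * (k0 + k1*r^2 + k2*r^4 + k3*r^6)` matches the model the early Oculus SDK documented; the exact coefficient values here are illustrative assumptions, not guaranteed to match any particular headset.

```python
import numpy as np

def barrel_distort(uv, k=(1.0, 0.22, 0.24, 0.0), center=(0.5, 0.5)):
    """Map an output texture coordinate to the source coordinate to sample,
    using a radial polynomial: r' = r * (k0 + k1*r^2 + k2*r^4 + k3*r^6).

    uv     -- (u, v) coordinate in [0, 1] x [0, 1] for one eye's image
    k      -- distortion coefficients (illustrative values, assumed)
    center -- lens center for this eye, in the same [0, 1] space
    """
    uv = np.asarray(uv, dtype=float)
    c = np.asarray(center, dtype=float)
    d = uv - c                      # offset from the lens center
    r2 = float(np.dot(d, d))       # squared radius
    scale = k[0] + k[1] * r2 + k[2] * r2**2 + k[3] * r2**3
    return c + d * scale           # push samples outward toward the edges

# The center is a fixed point (scale = k0 = 1); points nearer the edge
# sample from further out, which is what cancels the lens's pincushion.
```

In practice you would run this per-pixel in a shader (or precompute a lookup mesh) for each eye's half of the screen, which is exactly the fragment-shader rewrite ebo60 describes above.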