Forum Discussion
GGGGGG
11 years ago · Honored Guest
Render image/video from webcam to Rift
Hi, I am new to Oculus and don't have much experience in OpenGL either. I want to render images captured from a webcam to the Rift to create a 3D environment. I have done some research and...
jherico
11 years ago · Adventurer
This topic is covered in chapter 13 of my book on Rift development. We also have examples of this kind of application in our github repository here.
Yes, you do need to use OpenGL or DirectX; attempting to render to the Rift with just OpenCV is problematic, to say the least.
You should break the work up into two threads.
On the image capture thread you:
* grab an OpenCV image off the webcam
* grab the head pose for when the webcam image was captured (this can be tricky since by the time you get the image the head pose is likely in the past)
* convert the image into an OpenGL texture in a context that is shared with the main thread
* send the texture ID and head pose to a buffer for the rendering thread to grab
* loop back at the top
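The handoff between the two threads is a "latest value wins" slot rather than a queue: if the renderer falls behind, old frames should be dropped, not accumulated. A minimal sketch of that slot (a hypothetical helper, not from the book or the Oculus SDK; `texture_id` and `head_pose` stand in for the shared-context GL texture name and the captured pose):

```python
import threading

class LatestFrameSlot:
    """Single-slot mailbox between the capture and render threads.
    The capture thread overwrites the slot; the render thread only
    ever sees the newest (texture_id, head_pose) pair."""

    def __init__(self):
        self._lock = threading.Lock()
        self._item = None

    def publish(self, texture_id, head_pose):
        # Capture thread: replace whatever is pending -- stale
        # frames are silently dropped, never queued up.
        with self._lock:
            self._item = (texture_id, head_pose)

    def take(self):
        # Render thread: return the newest pair (or None if nothing
        # new arrived since the last take) and clear the slot.
        with self._lock:
            item, self._item = self._item, None
            return item
```

If the render thread gets `None` back, it simply reuses the previous texture and pose, which is exactly the "throw away only when new data arrives" behavior described below.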
On the rendering thread you:
* Check the texture/pose buffer for new data.
* If new data has arrived then throw away your previous texture/head pose and use the new one
* Render the texture into your OpenGL scene as a 2D surface, adjusting its position based on the difference between the pose at which the image was captured and the current head pose while rendering.
The book covers this in significantly more detail.
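The pose-difference step in the last bullet can be sketched numerically. Restricting to yaw only (a real implementation would use the full orientation quaternion from the SDK), the quad is rotated by the angle between the capture-time pose and the render-time pose, wrapped to the short way around. This is an illustrative, hypothetical helper:

```python
import math

def billboard_yaw_offset(capture_yaw, render_yaw):
    """Yaw (radians) to apply to the webcam quad so it stays
    world-stable: the difference between the head pose at capture
    time and the head pose at render time, wrapped to (-pi, pi]."""
    delta = capture_yaw - render_yaw
    # atan2(sin, cos) wraps the difference into (-pi, pi], so the
    # quad rotates the short way around instead of spinning past pi.
    return math.atan2(math.sin(delta), math.cos(delta))
```

If the head has not moved since capture the offset is zero and the quad sits dead ahead; if the head turned right by 0.1 rad, the quad is shifted 0.1 rad back toward where it was captured, which hides most of the camera latency.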