Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
GGGGGG
Honored Guest
11 years ago

Render image/video from webcam to Rift

Hi,

I am new to Oculus and don't have much experience in OpenGL either.

I want to render images captured from a webcam to the Rift to create a 3D environment.

I have done some research and encountered a lot of terms I am not familiar with (SDL, GLFW, etc.).

So far I have a rough idea of how to do it:
1. Grab the images from webcam using OpenCV
2. Convert the images into OpenGL texture
3. SOMEHOW render it to Oculus

But I still don't have a clear idea of how to do it, so any help/hints are very much appreciated (what libraries/tools should I use, any easy-to-understand examples...)

Thank you for your time!

9 Replies

  • Check out the Oculus SDK Samples and the Developer Guide.pdf.

    Is opengl a requirement? It doesn't sound like you need OpenGL, unless you're not on Windows.
    GGGGGG
    Honored Guest
    "rwblodgett" wrote:
    Check out the Oculus SDK Samples and the Developer Guide.pdf.

    Is opengl a requirement? It doesn't sound like you need OpenGL, unless you're not on Windows.


    Really? No, OpenGL is not a requirement. The reason I have this idea is that I am trying to follow the Developer Guide for rendering, which talks about either Direct3D 11 or OpenGL.

    So how should I approach the rendering without OpenGL? Do you have any hints or examples for this?

    Thank you for your time and help
    GGGGGG
    Honored Guest
    "cybereality" wrote:
    It would probably be easier if you used a game engine like Unity.


    This implementation needs to be merged with another GUI, which is developed in C++ in Visual Studio. Is it possible to merge them if I use Unity?

    Thank you for your time and help.
  • "GGGGGG" wrote:
    "rwblodgett" wrote:
    Check out the Oculus SDK Samples and the Developer Guide.pdf.

    Is opengl a requirement? It doesn't sound like you need OpenGL, unless you're not on Windows.


    Really? No, OpenGL is not a requirement. The reason I have this idea is that I am trying to follow the Developer Guide for rendering, which talks about either Direct3D 11 or OpenGL.

    So how should I approach the rendering without OpenGL? Do you have any hints or examples for this?

    Thank you for your time and help

    Oh, well, you have to choose either OpenGL or Direct3D. You don't really need to know either of these. If you follow the section on SDK distortion rendering (Section 8.2), and you already have the two images you need, one for each eye, then a lot of the work is done for you.

    With the OpenGL context, you'll just have to create a couple of textures. glGenTextures generates texture ids, then you bind a texture with glBindTexture, and then you allocate the memory and pass the bytes using glTexImage2D. You then just supply the texture ids to the Oculus SDK.
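    To make that concrete, here is a minimal sketch of those three calls for an OpenCV frame. The function name is made up for illustration; it assumes a current OpenGL context and OpenGL 1.2+ for the GL_BGR pixel format (spelled GL_BGR_EXT in some older Windows headers), and it omits error handling:

    ```cpp
    // Sketch: upload one cv::Mat (BGR, 8-bit) as an OpenGL texture.
    #include <opencv2/opencv.hpp>
    #include <GL/gl.h>

    GLuint textureFromFrame(const cv::Mat& frame) {
        GLuint tex = 0;
        glGenTextures(1, &tex);             // generate a texture id
        glBindTexture(GL_TEXTURE_2D, tex);  // bind it so later calls affect it
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // cv::Mat rows may not be 4-byte aligned
        // allocate the texture memory and copy the pixel bytes in one call
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, frame.cols, frame.rows, 0,
                     GL_BGR, GL_UNSIGNED_BYTE, frame.data);
        return tex;  // this is the id you hand to the Oculus SDK
    }
    ```

    For a live feed you would reuse the same texture id and call glTexSubImage2D on subsequent frames instead of reallocating every time.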
    GGGGGG
    Honored Guest
    "rwblodgett" wrote:
    "GGGGGG" wrote:
    "rwblodgett" wrote:
    Check out the Oculus SDK Samples and the Developer Guide.pdf.

    Is opengl a requirement? It doesn't sound like you need OpenGL, unless you're not on Windows.


    Really? No, OpenGL is not a requirement. The reason I have this idea is that I am trying to follow the Developer Guide for rendering, which talks about either Direct3D 11 or OpenGL.

    So how should I approach the rendering without OpenGL? Do you have any hints or examples for this?

    Thank you for your time and help

    Oh, well, you have to choose either OpenGL or Direct3D. You don't really need to know either of these. If you follow the section on SDK distortion rendering (Section 8.2), and you already have the two images you need, one for each eye, then a lot of the work is done for you.

    With the OpenGL context, you'll just have to create a couple of textures. glGenTextures generates texture ids, then you bind a texture with glBindTexture, and then you allocate the memory and pass the bytes using glTexImage2D. You then just supply the texture ids to the Oculus SDK.


    Oh, thank you so much for the detailed explanation!! I will go down this path then :D One more question: if I only have one image instead of two, should I do some pre-processing to generate two images (if possible) for the left and right eyes?
  • For the stereo render, you would typically use 2 cameras spaced slightly apart to create the 3D effect. There are ways to reproject a 2D image into 3D using the depth buffer, but this creates artifacts and is not recommended.
  • "GGGGGG" wrote:


    Oh, thank you so much for the detailed explanation!! I will go down this path then :D One more question: if I only have one image instead of two, should I do some pre-processing to generate two images (if possible) for the left and right eyes?

    I would use two webcams and two images if possible. If not, then I don't know. There is a section on using the same image for both eyes, but this may cause you to lose some of the VR experience. Or, yeah, there are reprojection techniques, as cybereality mentioned.
  • This topic is covered in chapter 13 of my book on Rift development. We also have examples of this kind of application in our github repository here.

    Yes, you do need to use OpenGL or DirectX; attempting to render to the Rift with just OpenCV is problematic, to say the least.

    You should break up the work into two threads.

    On the image capture thread you:
    * grab an OpenCV image off the webcam
    * grab the head pose for when the webcam image was captured (this can be tricky, since by the time you get the image the head pose is likely in the past)
    * convert the image into an OpenGL texture in a context that is shared with the main thread
    * send the texture ID and head pose to a buffer for the rendering thread to grab
    * loop back to the top


    On the rendering thread you:
    * Check the texture/pose buffer for new data.
    * If new data has arrived, throw away your previous texture/head pose and use the new one.
    * Render the texture into your OpenGL scene as a 2D surface, adjusting its position based on the difference between the pose at which the image was captured and the current head pose while rendering.
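    The texture/pose buffer between the two threads can be a simple single-slot "mailbox" that always holds only the newest frame. Here is a minimal sketch; the FramePacket/FrameMailbox names are made up for illustration, and the texture id is a plain unsigned int so the sketch stands alone without OpenGL headers:

    ```cpp
    #include <array>
    #include <mutex>
    #include <optional>

    // What the capture thread hands to the rendering thread.
    struct FramePacket {
        unsigned int textureId;        // OpenGL texture id (plain int here)
        std::array<float, 4> headPose; // orientation quaternion at capture time
    };

    // Single-slot buffer: the capture thread overwrites, the render thread drains.
    class FrameMailbox {
    public:
        // Capture thread: replace whatever is pending; stale frames are dropped.
        void put(const FramePacket& packet) {
            std::lock_guard<std::mutex> lock(mutex_);
            pending_ = packet;
        }
        // Render thread: take the newest frame if one arrived, else empty.
        std::optional<FramePacket> take() {
            std::lock_guard<std::mutex> lock(mutex_);
            std::optional<FramePacket> result;
            result.swap(pending_);
            return result;
        }
    private:
        std::mutex mutex_;
        std::optional<FramePacket> pending_;
    };
    ```

    On each render-thread frame you would call take(); if it comes back empty, keep rendering with the previous texture and pose.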

    The book covers this in significantly more detail.