Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
gilad
Honored Guest
13 years ago

OpenCV integration

Hi All,

Has anybody had any experience with OpenCV?
I have two images (L & R) and I would like to apply the proper distortion to them as the final step of the image pipeline.

Any help would be appreciated.

Gilad

10 Replies

  • DrGusta
    Honored Guest
    I would really like to know how to do this as well so if you find anything let me know.
  • Hey,

    I'd like to ask if anyone already got an answer for this question.

    I have a quite similar problem: I want to display two camera streams on my Rift, and therefore I need the camera matrix and the distortion matrix.

    Can anyone tell me where I can find them?

    Thanks

    Tobi
  • If you are looking for the code for the distortion shader you can look at this file:

    OculusSDK\Samples\CommonSrc\Render\Render_D3D1X_Device.cpp

    You can obtain the distortion parameters from the headset itself.
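For reference, the warp in that sample shader is a simple radial polynomial driven by the headset's DistortionK coefficients. A minimal sketch of warping one lens-centered, normalized coordinate (the default coefficients below are commonly cited DK1 values, used here as an assumption rather than values read from hardware):

```python
def warp_point(x, y, k=(1.0, 0.22, 0.24, 0.0)):
    """Apply the Rift-style radial 'barrel' warp to a normalized,
    lens-centered coordinate. k are DistortionK-style coefficients;
    the defaults are assumed DK1 values, not read from a headset."""
    r2 = x * x + y * y                                  # squared radius from lens center
    f = k[0] + r2 * (k[1] + r2 * (k[2] + r2 * k[3]))    # polynomial scale factor
    return x * f, y * f

# The center is unchanged; points further out are pushed outward:
print(warp_point(0.0, 0.0))   # (0.0, 0.0)
print(warp_point(0.5, 0.0))   # ≈ (0.535, 0.0)
```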
  • "kinkilla" wrote:

    I have a quite similar problem: I want to display two camera streams on my Rift, and therefore I need the camera matrix and the distortion matrix.

    Can anyone tell me where I can find them?


    The camera matrix is specific to the webcam you're using. OpenCV has a variety of calibration tools that allow you to measure the camera matrix and apply it.

    For Rift distortion, you can fetch the required information out of the SDK and then either use OpenCV or OpenGL to render the image. For something like a webcam you're likely to be better off using OpenGL (assuming you want to render the final result) to distort the image as an OpenGL texture. I had an application that was doing something like this. I'll see if I can find it and put it online soon.
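As background on what those calibration tools measure: the camera matrix and distortion coefficients feed the standard pinhole-plus-distortion projection model that OpenCV uses. A pure-Python sketch of that model (the focal length and principal point below are made-up example values, not from any real camera):

```python
def project_point(x, y, fx, fy, cx, cy, dist=(0.0, 0.0, 0.0, 0.0)):
    """Project a normalized camera-space point (x, y) to pixel coordinates
    using the pinhole model plus OpenCV-style radial/tangential distortion.
    dist = (k1, k2, p1, p2). fx, fy, cx, cy come from the camera matrix."""
    k1, k2, p1, p2 = dist
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2               # radial distortion term
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return fx * xd + cx, fy * yd + cy                   # apply the camera matrix

# With zero distortion this reduces to the plain pinhole projection:
print(project_point(0.1, 0.0, fx=500.0, fy=500.0, cx=320.0, cy=240.0))
# -> (370.0, 240.0)
```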
  • Well, I wrote a mini program which captures two pictures from two cameras (with the right eye-to-eye distance, of course).
    I undistort these pictures with the distortion and camera matrices of these cams (I measured them with OpenCV tools).
    Then I draw a HUD into one of these pics (I think the left one), and at the end I want to display them on my Oculus;
    therefore I need to distort the pictures again (with the camera matrix and distortion parameters for the Rift).

    I found a formula in the SDK documentation for calculating the projection matrix, so from that I can get my camera matrix (I hope so; haven't tried it yet).
    Then I still have the problem of the distortion parameters.

    I'd like to stick to OpenCV for two reasons:
    first, OpenGL isn't used at all in this project;
    second, I'm not really familiar with OpenGL :oops:

    cheers

    tobi
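On the projection-matrix route mentioned above: for a symmetric frustum with the principal point at the image center (the same assumptions behind a GL-style projection matrix), a camera matrix can be derived from the vertical FOV and the viewport size alone. A hedged sketch (the per-eye resolution and FOV below are example values, not SDK output):

```python
import math

def camera_matrix_from_fov(vfov_deg, w, h):
    """Build an OpenCV-style 3x3 camera matrix from a vertical FOV,
    assuming a symmetric frustum with the principal point at the
    image center. w, h are the viewport size in pixels."""
    f = (h / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)  # focal length in pixels
    fx = fy = f                                             # square pixels assumed
    cx, cy = w / 2.0, h / 2.0                               # principal point at center
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]

# Example: per-eye 640x800 viewport with a 90-degree vertical FOV.
K = camera_matrix_from_fov(90.0, 640, 800)
print(K[1][1])  # ≈ 400.0, i.e. h/2, since tan(45°) ≈ 1
```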
  • Here's an example of using OpenCV to do the distortion of an image. Note that this is designed to work with images that were rendered specifically for the Rift, and therefore have a specific aspect ratio. But it should serve as a starting point for what you need to do to apply a similar distortion to images fetched from webcams via OpenCV.

    I tried to combine the two distorted images back into a single image, but my OpenCV skills weren't up to the task. Attempting to create regions of interest and copy to them apparently just created new image objects rather than updating the ROI in the combined target image. But the key issue of calculating the distortion is there.
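For what it's worth, the combining step itself is just a side-by-side blit of the two distorted images into one double-width target. A toy sketch with images represented as lists of rows (no OpenCV, so the ROI-copy issue doesn't arise):

```python
def combine_side_by_side(left, right):
    """Place two equally sized images (lists of pixel rows) into one
    double-width image, left eye on the left half."""
    assert len(left) == len(right), "both eyes must have the same height"
    return [lrow + rrow for lrow, rrow in zip(left, right)]

left = [[1, 2], [3, 4]]
right = [[5, 6], [7, 8]]
print(combine_side_by_side(left, right))  # -> [[1, 2, 5, 6], [3, 4, 7, 8]]
```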
  • I'm currently at work so I cannot try it, but did this work without any intrinsics?
    Because it seems you are only using the distortion, the extrinsics.
    When you use them you have to use the pinhole camera model, where you need both intrinsics and extrinsics for a good projection.
  • I'm not sure I follow you. Once you've applied the calibration corrections for the camera source, the image should approximate the view from a pinhole camera, which is similar to the effect of a scene rendered using a 3D API like DirectX or OpenGL. At that point, all you need to do is apply the distortion.

    In theory you'd want to set up the distortion so that it renders the image to the portion of the screen that corresponds to the FOV of the source image. Unfortunately, the SDK only seems to support setting (or rather achieving, since the SDK doesn't let you set the FOV directly) a vertical FOV of about 96 degrees.
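As a sanity check on that figure, the raw vertical FOV falls out of the panel geometry: fov = 2 · atan((VScreenSize / 2) / EyeToScreenDistance). The numbers below are commonly cited DK1 values and are an assumption here; the real values come from HMDInfo:

```python
import math

v_screen_size = 0.0935   # vertical panel size in meters (assumed DK1 value)
eye_to_screen = 0.041    # eye-to-screen distance in meters (assumed DK1 value)

# Vertical FOV before any distortion-scale adjustment:
fov_rad = 2.0 * math.atan((v_screen_size / 2.0) / eye_to_screen)
print(round(math.degrees(fov_rad), 1))  # roughly 97.5, close to the ~96° above
```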
  • Usually when I use OpenCV with a captured image, I undistort it with cv::initUndistortRectifyMap and then cv::remap.

    Here I'm doing the same; after this I have a pinhole picture.
    Then I draw a HUD into one image.

    To get the images to the Rift I want to do the same, but the function cv::initUndistortRectifyMap needs the intrinsics and the extrinsics.

    "jherico" wrote:
    Once you've applied the calibration corrections for the camera source, the image should approximate the view from a pinhole camera


    Well that sounds pretty reasonable :D
    I will use your code and try it over the weekend
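The map-then-remap pattern described above (precompute a per-destination-pixel lookup once, then apply it to every frame) is also how the Rift distortion could be applied on the CPU. A toy pure-Python sketch of that pattern, using a one-term radial model and nearest-neighbor sampling (the image size and coefficient are assumptions, and a real pipeline would use cv::initUndistortRectifyMap / cv::remap instead):

```python
def build_map(w, h, k1=0.22):
    """For each destination pixel, precompute which source pixel to read,
    using a simple one-term radial model. This is the role that
    initUndistortRectifyMap plays with a full camera model."""
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    mapping = {}
    for y in range(h):
        for x in range(w):
            nx, ny = (x - cx) / cx, (y - cy) / cy    # normalize to [-1, 1]
            r2 = nx * nx + ny * ny
            f = 1.0 + k1 * r2                        # radial scale factor
            sx = int(round(cx + nx * f * cx))        # source pixel, nearest-neighbor
            sy = int(round(cy + ny * f * cy))
            mapping[(x, y)] = (sx, sy)
    return mapping

def remap(img, mapping, fill=0):
    """Apply a precomputed map to an image stored as {(x, y): value},
    filling out-of-bounds lookups with a constant (like cv::remap's
    border value)."""
    return {dst: img.get(src, fill) for dst, src in mapping.items()}

# With k1 = 0 the map is the identity and remap returns the input image.
w, h = 5, 5
img = {(x, y): 10 * y + x for y in range(h) for x in range(w)}
identity = build_map(w, h, k1=0.0)
assert remap(img, identity) == img
```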