Forum Discussion
deleted-0912141
11 years ago · Honored Guest
Webcam calibration issues
Hello,
We're building a system using a webcam, positioned at (let's say) the right eye (one camera is sufficient for us), capturing an image and showing it in the Rift. This should be as realistic as possible, meaning the person wearing the system should be able to, for example, grab a mug or shake another person's hand.
To do so we carefully chose the webcam and the lens and adjusted the angle of the captured image to match the actual FOV of the Rift (by the way, we're using the DK1 with SDK 0.2).
To match the Rift's FOV we had to mount a wide-angle lens in place of the webcam's original lens.
Unfortunately this introduces some distortion we'd like to correct.
We obtained our distortion coefficients using OpenCV, but we're not sure what the best method is for implementing the lens correction. Using OpenCV would mean performing the lens correction on every frame, which might introduce a lot of latency into our system and make it really difficult for the user to grab objects. On the other hand, this method is quite straightforward.
Wouldn't it be more efficient to create another shader and apply it to a texture, similar to the shader lookup distortion done in "Oculus Rift in Action", and then perform the Rift's lens correction afterwards?
What would be the best approach to implementing such an OpenGL shader based on the distortion coefficients we already have?
We believe that this way the calculations would only have to be done once, instead of correcting every frame. Is this true?
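A minimal sketch of the precompute idea, in pure Python with hypothetical camera parameters: the per-pixel distortion math (the same Brown-Conrady model OpenCV's k1, k2, p1, p2 coefficients describe) is evaluated exactly once to build a remap table; each frame then only needs a lookup per pixel. This is what cv::initUndistortRectifyMap + cv::remap do on the CPU, and the same table can be baked into a lookup texture sampled by a fragment shader on the GPU.

```python
import math


def build_undistort_map(w, h, fx, fy, cx, cy, k1, k2, p1, p2):
    """Precompute, once, the source-pixel coordinate for every output pixel.

    For each ideal (undistorted) output pixel we compute where the
    distorted camera image must be sampled, using the Brown-Conrady
    radial + tangential model behind OpenCV's (k1, k2, p1, p2).
    Per frame, applying the map is then a pure gather: cv::remap on
    the CPU, or a texture fetch in a fragment shader.
    """
    map_xy = []
    for v in range(h):
        row = []
        for u in range(w):
            # normalized camera coordinates of the ideal pixel
            x = (u - cx) / fx
            y = (v - cy) / fy
            r2 = x * x + y * y
            # radial distortion factor
            radial = 1.0 + k1 * r2 + k2 * r2 * r2
            # radial + tangential (forward) distortion
            xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
            yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
            # back to pixel coordinates in the distorted source image
            row.append((xd * fx + cx, yd * fy + cy))
        map_xy.append(row)
    return map_xy
```

So yes: the expensive part runs once at startup, and the per-frame cost is the same cheap lookup whether it happens in cv::remap or in a shader sampling the precomputed table.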
As I mentioned, we're currently using the DK1 in combination with SDK 0.2, but because of the limited resolution we definitely have to go for the DK2. Do we have to switch to SDK 0.3, though, or will this also work with SDK 0.2?
Thanks for your help.
2 Replies
cybereality (Grand Champion):
If you want to support the DK2 you need to be using SDK 0.3.x. Right now I don't think every feature is finished in 0.3.1 (it's a preview release), but the API should be pretty close to what the later releases will be. So you will still likely have to update to 0.3.2 (or whatever) when it comes out, but the amount of code changes should be minimal.
The way the new SDK works, the Oculus distortion is done with a mesh, not in a pixel shader. You are responsible for supplying two render targets (or one with the combined left/right views) and the SDK will render these onto a pre-distorted mesh. If you have video data you need to composite onto the view, you would have to use something like a pixel shader to make it look normal (undistorted) first, and then mix it with the 3D content. After that point the SDK will handle making it look correct for the Rift.

jherico (Adventurer):
"cybereality" wrote:
The way the new SDK works, the Oculus distortion is done with a mesh, not in a pixel shader. You are responsible for supplying two render targets (or one with the combined left/right views) and the SDK will render these onto a pre-distorted mesh.

Technically speaking, it's done with both the mesh and a vertex/pixel shader.

"cybereality" wrote:
If you have video data you need to composite onto the view, you would have to use something like a pixel shader to make it look normal (undistorted) first, and then mix it with the 3D content. After that point the SDK will handle making it look correct for the Rift.

Alternatively, if you know the FOV of the video/camera source, you can create an ovrFovPort that matches it, and use that to populate the appropriate field of ovrEyeDesc when you set up rendering for the Rift. You could then pass in texture IDs containing images directly from the capture source. This should, in theory, cause the Rift to render the video to the appropriate area in the center of the screen, with the appropriate distortion. However, I don't actually recommend this approach, because the latency of webcams means you typically need to place the webcam image into a 3D scene with a small amount of offset to account for the distance your head has moved between the time of the image capture and the time the frame is displayed.
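For the ovrFovPort approach above, the four half-angle tangents can be derived from the webcam's horizontal FOV and image size. A minimal sketch, assuming a centered optical axis (a symmetric port) and the SDK 0.3 field names UpTan/DownTan/LeftTan/RightTan (check your SDK headers):

```python
import math


def fov_port_from_camera(h_fov_deg, width, height):
    """Derive the four half-angle tangents an ovrFovPort expects
    from a webcam's horizontal FOV and image dimensions.

    Assumes the optical axis hits the image center, so left/right
    and up/down are symmetric; a real calibration with an off-center
    principal point (cx, cy) would make the four tangents differ.
    """
    half_h = math.radians(h_fov_deg) / 2.0
    tan_right = math.tan(half_h)      # RightTan
    tan_left = tan_right              # LeftTan (symmetric port)
    # vertical half-tangent follows from the sensor's aspect ratio
    tan_up = tan_right * (height / width)   # UpTan
    tan_down = tan_up                       # DownTan
    return {"UpTan": tan_up, "DownTan": tan_down,
            "LeftTan": tan_left, "RightTan": tan_right}
```

With these values in the eye's fov field, the SDK's projection would match the camera's frustum, so the video frame maps 1:1 onto the rendered eye view.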