Forum Discussion
Mid4
10 years ago · Honored Guest
Using Depth Buffer Variables for Collision Detection
Hi,
So first some stuff about our project.
We use the Oculus Rift sample environment TinyRoom (SDK 0.5.0.1 in OpenGL mode), and two Intel RealSense F200 cameras attached to the front of the Rift to capture images for use in our virtual environment.
One camera is for the left eye and one for the right eye.
The Intel RealSense is a depth camera, so we can get depth values from every picture.
NOW THE PLAN IS!
Use the data from the depth buffer to get information about our environment, and compare it (yes, I have to convert the camera values) with our camera depth values for masking and collision detection.
BUT THE PROBLEM IS!!
I need help with using the depth values of the environment.
I don't really know how to get the depth values of the environment, or how to use them properly.
I have read a lot about all of this, but I am not really sure how to do it right.
And because I don't really know how this will influence the performance of the program, I decided to ask here.
We already have the cameras' depth data in our program, so we just need the data from the program's depth buffer.
I hope you guys can help me with this.
Thank you for your help!!!
Tom
6 Replies
- Matlock (Honored Guest): I assume you want to be able to compare or sync the depth buffer from the camera and your virtual world, right?
You should make a physical rig with some objects in front of the camera at known distances from the real camera.
Then make a 3d scene with your objects, using the dimensions you record in the real world test rig. Measure the real world in meters, and make sure the units in your 3d model are also meters.
Then you make a 3D renderer where you can overlay the camera's image on top of your virtual world to compare.
For example, if there is a cube in front of the real camera, you should see basically that same view in your virtual world.
You will then adjust your projection matrix until the images align. Basically field of view, and aspect ratio need to be adjusted. Good luck with that part!
Once your view looks correct, you would then make a custom pixel shader that creates a depth buffer. These are easy: the vertex shader passes the vertex Z through, and you scale it and write it to a texture in the pixel shader. You would play with that scale factor until it roughly matches the one generated by the camera.
At this point you have 2 compatible, comparable depth buffers, one for the real world, and one for the virtual world.
Cakewalk. :geek:
- MrKaktus (Explorer): I've already done exactly what you described, Mid4.
In 2013 I was using a DK1 + Creative Senz3D, then I switched to a DK2 + RealSense.
Here is an example application:
https://www.youtube.com/watch?v=QtrPuYeh_NY
(the part interesting to you starts at 0:46, but it's worth watching all of it).
What interests me is whether you were able to use BOTH RealSense cameras at the same time.
According to Intel, the SDK does not support that.
- MrKaktus (Explorer):
"matlock" wrote:
I assume you want to be able to compare or sync the depth buffer from the camera, and your virtual world, right?
You should make a physical rig with some objects in front of the camera at known distances from the real camera.
Then make a 3d scene with your objects, using the dimensions you record in the real world test rig. Measure the real world in meters, and make sure the units in your 3d model are also meters.
Then you make a 3d renderer where you can overlay the cameras image on top of your virtual world to compare.
For example if there is a cube in front of the real camera, you should basically see that same view in your virtual world.
You will then adjust your projection matrix until the images align. Basically field of view, and aspect ratio need to be adjusted. Good luck with that part!
Once your view looks correct, you would then make a custom pixelshader that will create a depth buffer. These are easy. The VertShader passes the vertex Z, and you basically scale it and write it to a texture in the PixelShader. You would play with that scale factor until it sort of matches the one generated by the camera.
At this point you have 2 compatible, comparable depth buffers, one for the real world, and one for the virtual world.
Cakewalk. :geek:
It doesn't work like this at all.
First of all, the depth sensor and the color sensor are located in different places, so their frustums don't overlap; they only intersect, sharing part of the 3D space. Second, the two sensors have different horizontal and vertical FOVs, which means they have different angular coverage per sample (and we can choose from a variety of resolutions and refresh rates to stream from them as well). Third, the DK also has different FOVs, much bigger than the area sampled by the depth sensor. Since none of them overlap, you cannot simply blit/merge two depth buffers together, and you don't want to do that anyway. Another point is that you can query those angles from the SDK, so you can calculate the exact size of each sensor's frustum in centimeters in the real world. Using that, and knowing the exact sensor position relative to the HMD, you can precisely reconstruct real objects in VR using the sampled depth and color buffers.
- Mid4 (Honored Guest): Thank you for all your replies!
So to be honest, the part where I need help is different from what you thought.
I don't know how to get the data of the depth buffer in "TinyRoom" and how to store it correctly.
I am completely new to the depth buffer and have never used it before, so this is where I need help.
You don't have to worry about all the other steps afterwards. We already planned all of this and have made some progress.
Thank you all for your help!
- Matlock (Honored Guest): While mrkaktus has his foot in his mouth, I will mention again: the easiest way to get the depth buffer of an OpenGL program is to make a special shader that writes the Z value of the geometry to a texture. These Z-buffer shaders are used all the time when making ambient occlusion shaders, and those shaders are all over the web.
- MrKaktus (Explorer):
"matlock" wrote:
While mrkaktus has his foot in his mouth, I will mention again, the easiest way to get the depthbuffer of an opengl program is to make a special shader that writes the z value of the geometry to a texture. These zbuffer shaders are used all the time when making ambient occlusion shaders, and those shaders are all over the web.
Hmm, enlighten me and please say where I've put my foot in my mouth?
By the way, you're wrong again: it's not the easiest way. There is no need to write a "special shader" to get the Z value into a texture; you get the same result by just binding a texture with a depth format to the depth attachment of an FBO. There is also no point in reading back the depth of a 3D scene, as masking can be done on the GPU side (leveraging parallelism), so it is easier and better to send the RealSense depth buffer to the GPU and just use it for rendering. You can even benefit from the Z-test to do that job partially for you if you have proper preprocessing done.
But I assume you've already done that smart pants?
@mid4:
Are you able to use both F200 cameras at the same time with RS SDK?
Or are you using some other lower level SDK for that?
According to Intel support, they don't support more than one sensor at the same time.