Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
rkkonrad
Explorer
10 years ago

Get world coordinates from screen coordinates

Hey there,

I am working on a program that requires knowledge of the depth of a particular point on the screen. Currently, I select this point with a mouse click. Because I am not accounting for the lens distortion, the point I recover in world coordinates is wrong. Is there a method to undo this distortion, so that when I click on the screen I can recover the mouse click in coordinates of the undistorted image?

Thanks a lot!

5 Replies

  • "rkkonrad" wrote:
    I am working on a program that requires knowledge of the depth of a particular point on the screen. Currently, I select this point with a mouse click. Because I am not accounting for the lens distortion, the point I recover in world coordinates is wrong. Is there a method to undo this distortion, so that when I click on the screen I can recover the mouse click in coordinates of the undistorted image?


    The standard answer given by Oculus is not to use the 2D mouse coordinates, but to use a pointing device to project a ray into the scene, and then use that ray to determine what's being hit. Whether or not this is applicable to your situation isn't obvious.

    However, you should bear in mind that converting 2D screen coordinates into world coordinates isn't necessarily a simple matter of inverting the distortion function. Remember that if you have timewarp active, the image projected on the screen will be both distorted and slightly rotated to account for head movement in the interval between rendering the content and displaying it on the screen. Even if you write the code necessary for doing both inverse transformations, getting the exact timewarp matrices used by the most recent frame render might be tricky.
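    As a rough illustration of the non-timewarp, non-distortion part of the problem, here is a minimal sketch (hypothetical helper, assuming a simple symmetric pinhole projection with no lens distortion) of turning a mouse click into a camera-space ray direction:

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Hypothetical sketch: convert a mouse click (pixel coordinates) into a
    // unit camera-space ray direction. Assumes a symmetric frustum with
    // vertical field of view fovY (radians), no distortion, no timewarp.
    Vec3 rayFromMouse(float mouseX, float mouseY,
                      float screenW, float screenH, float fovY) {
        // Pixel -> normalized device coordinates in [-1, 1], y pointing up.
        float ndcX = 2.0f * mouseX / screenW - 1.0f;
        float ndcY = 1.0f - 2.0f * mouseY / screenH;
        float tanHalfFov = std::tan(fovY * 0.5f);
        float aspect = screenW / screenH;
        // Camera looks down -Z in a right-handed coordinate system.
        Vec3 d { ndcX * tanHalfFov * aspect, ndcY * tanHalfFov, -1.0f };
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return { d.x / len, d.y / len, d.z / len };
    }

    int main() {
        // A click at the exact screen centre should look straight ahead.
        Vec3 r = rayFromMouse(400.0f, 300.0f, 800.0f, 600.0f, 1.0f);
        assert(std::fabs(r.x) < 1e-5f && std::fabs(r.y) < 1e-5f);
        assert(r.z < -0.99f);
        return 0;
    }
    ```

    With timewarp and distortion in the loop, this simple mapping no longer holds, which is why the ray-into-the-scene approach above is usually recommended instead.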
  • So by pointing device, could I still use a mouse? I will have to read up on ray casting in that case.
    The point of my application is to simulate retinal blurring when rendering. I need to grab the depth of the point that a person is looking at (simulated by a mouse click) and apply an appropriate blur to the scene. So I guess what I am really looking for is the depth of a point in the scene. Would ray casting be the way to approach this?
  • "rkkonrad" wrote:
    So by pointing device could I still use a mouse? I will have to read up on ray casting in that case.
    The point of my application is to simulate retinal blurring when rendering. I need to grab the depth of the point that a person is looking at (simulated by a mouse click) and apply an appropriate blur to the scene. So I guess what I am really looking for is the depth of a point in the scene. Would ray casting be the way to approach this?


    Yes, pretty much. You can get the depth of the point a person is looking at (or rather, the point directly in front of them, regardless of where their eyes are pointing) by simply taking a Z-axis vector and composing it with the head pose. This gives you the ray to use to find which scene object is being looked at.
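    That composition step can be sketched like this (hypothetical types, not the actual SDK structs): rotating the camera-space forward vector (0, 0, -1) by the head-pose orientation quaternion yields the world-space gaze direction.

    ```cpp
    #include <cassert>
    #include <cmath>

    struct Quat { float w, x, y, z; };
    struct Vec3 { float x, y, z; };

    // Rotate vector v by unit quaternion q: v' = v + w*t + u x t,
    // where u = (q.x, q.y, q.z) and t = 2 * (u x v).
    Vec3 rotate(const Quat& q, const Vec3& v) {
        float tx = 2.0f * (q.y * v.z - q.z * v.y);
        float ty = 2.0f * (q.z * v.x - q.x * v.z);
        float tz = 2.0f * (q.x * v.y - q.y * v.x);
        return { v.x + q.w * tx + (q.y * tz - q.z * ty),
                 v.y + q.w * ty + (q.z * tx - q.x * tz),
                 v.z + q.w * tz + (q.x * ty - q.y * tx) };
    }

    int main() {
        // Sanity check: a 90-degree yaw (about +Y) should turn the
        // forward vector -Z into -X in a right-handed coordinate system.
        const float kPi = 3.14159265358979f;
        float s = std::sin(kPi / 4.0f), c = std::cos(kPi / 4.0f);
        Quat yaw90 { c, 0.0f, s, 0.0f };
        Vec3 fwd { 0.0f, 0.0f, -1.0f };
        Vec3 r = rotate(yaw90, fwd);
        assert(std::fabs(r.x + 1.0f) < 1e-5f);
        assert(std::fabs(r.z) < 1e-5f);
        return 0;
    }
    ```

    In practice you would take the orientation quaternion straight out of the head pose the SDK reports and combine the rotated forward vector with the head position to get the full gaze ray.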

    Alternatively you could solve this by doing an extra render of depth only information and using client side distortion. Essentially you'd use the SDK distortion as normal when rendering, but when the user clicks a mouse, you could do an additional 'depth only' render, perform the distortion on the depth texture (to a framebuffer), and then read the depth value back out of the framebuffer.
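    One detail worth noting with the depth-readback approach: the value stored in a depth buffer is non-linear, so after reading it back you would still need to convert it to an eye-space distance. A sketch, assuming a standard OpenGL perspective projection with window depth in [0, 1] and known near/far clip planes:

    ```cpp
    #include <cassert>
    #include <cmath>

    // Convert a depth-buffer value d in [0, 1] back to an eye-space
    // distance. zNear/zFar are the clip planes used when the depth
    // texture was rendered.
    float linearizeDepth(float d, float zNear, float zFar) {
        float ndc = 2.0f * d - 1.0f;  // window depth -> NDC depth in [-1, 1]
        return 2.0f * zNear * zFar / (zFar + zNear - ndc * (zFar - zNear));
    }

    int main() {
        // The extremes of the depth buffer map back to the clip planes.
        assert(std::fabs(linearizeDepth(0.0f, 0.1f, 100.0f) - 0.1f) < 1e-4f);
        assert(std::fabs(linearizeDepth(1.0f, 0.1f, 100.0f) - 100.0f) < 1e-3f);
        return 0;
    }
    ```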

    Neither approach is quick and easy. With the raycasting approach, you need a way to represent the current ray in the scene so the user can manipulate it with the mouse, and a way to iterate efficiently over your scene objects to find the closest intersecting object, the intersection point, and the exact depth.
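    The "iterate over your scene objects" step might look something like the following sketch, which stands in bounding spheres for scene objects (an assumption for illustration) and keeps the nearest positive hit distance along the ray as the depth:

    ```cpp
    #include <cassert>
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Sphere { Vec3 center; float radius; };

    static float dot(const Vec3& a, const Vec3& b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Smallest positive t with origin + t*dir on the sphere, or a
    // negative value when the ray misses. dir is assumed unit length.
    float intersect(const Vec3& origin, const Vec3& dir, const Sphere& s) {
        Vec3 oc { origin.x - s.center.x,
                  origin.y - s.center.y,
                  origin.z - s.center.z };
        float b = dot(oc, dir);
        float c = dot(oc, oc) - s.radius * s.radius;
        float disc = b * b - c;
        if (disc < 0.0f) return -1.0f;
        return -b - std::sqrt(disc);
    }

    // Linear scan over the scene, keeping the nearest hit distance.
    float nearestHit(const Vec3& origin, const Vec3& dir,
                     const std::vector<Sphere>& scene) {
        float best = -1.0f;
        for (const Sphere& s : scene) {
            float t = intersect(origin, dir, s);
            if (t > 0.0f && (best < 0.0f || t < best)) best = t;
        }
        return best;
    }

    int main() {
        std::vector<Sphere> scene {
            { { 0.0f, 0.0f,  -5.0f }, 1.0f },  // nearer object
            { { 0.0f, 0.0f, -20.0f }, 1.0f },  // farther, same line of sight
        };
        float t = nearestHit({ 0, 0, 0 }, { 0, 0, -1.0f }, scene);
        assert(std::fabs(t - 4.0f) < 1e-4f);  // hits front of nearer sphere
        return 0;
    }
    ```

    For anything beyond a handful of objects you would replace the linear scan with a spatial structure such as a BVH or octree.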
  • Wow, great! Thanks a lot for the help! I have a few other ideas for doing this as well, but like you said, none of them are quick and easy. I was hoping that there would be a more straightforward approach. Great job on the blog, by the way. It has really helped me move into this Oculus domain.