Forum Discussion
mka (Explorer), 11 years ago
Re: mouse picking
The program I have developed for the Rift (which uses OpenGL) needs the ability to pick objects
with the mouse. What one needs is a map M which maps pixel coordinates (x,y) on the Window or
monitor screen to pixel coordinates (x',y') in the texture buffer (pRenderTargetTexture of
renderDevice). In DK 1 with client distortion rendering, I defined this mapping M myself using the
information we were given about the distortion map. However, the recommended approach
for DK 2 is to let the SDK do the distortion rather than do client distortion rendering. If the distortion
is going to be hidden in the SDK, then it seems to me that the SDK should also supply the map M.
My question therefore is: Does the Oculus development team have plans to do this for the SDK?
(I am not aware of the map M being available currently.)
At the moment I will write the map M for DK 2 myself, on the assumption that the mathematics
of the distortion has not changed. But who knows what kind of distortion maps will be used in
future SDKs, and it would therefore be inconsistent to recommend SDK distortion rendering while
expecting developers to write the map M themselves.
Hopefully, there is someone from the Oculus development team that can answer this question!
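For readers wanting to attempt the map M themselves: the DK1-era client distortion was a radial polynomial, so depending on which direction the shader convention runs, M is either that polynomial applied directly or its numerical inverse. A minimal sketch of both directions follows; the coefficients and the unit conventions here are illustrative, not official SDK values.

```cpp
#include <cmath>

// Forward radial distortion in the DK1 client-rendering style:
// rOut = rIn * (k0 + k1*r^2 + k2*r^4 + k3*r^6), with radii measured in
// lens-centered normalized units. Coefficients below are placeholders.
struct DistortionK { double k0, k1, k2, k3; };

double distortRadius(const DistortionK& k, double r) {
    double r2 = r * r;
    return r * (k.k0 + r2 * (k.k1 + r2 * (k.k2 + r2 * k.k3)));
}

// Invert the distortion by bisection: find rIn with distortRadius(rIn) == rOut.
// The polynomial is monotonically increasing over the radii of interest
// (all k >= 0), so bisection on a fixed bracket is safe.
double undistortRadius(const DistortionK& k, double rOut) {
    double lo = 0.0, hi = 2.0;            // bracket in normalized units
    for (int i = 0; i < 60; ++i) {         // 60 halvings: far below pixel error
        double mid = 0.5 * (lo + hi);
        if (distortRadius(k, mid) < rOut) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

To map a 2D point rather than a radius, scale the point's lens-centered offset by the ratio of the mapped radius to the original radius; the two functions above cover whichever direction a given SDK's convention requires.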
14 Replies
- cybereality (Grand Champion): Instead of 2D screen coordinates, can you create a virtual laser pointer and use that for picking (i.e. ray-cast)?
- mka (Explorer): Yes, I could form a ray from the eye given a point on the screen and check to see what it hits in my world,
but I have potentially a lot of complicated objects in my world and do not want to have to check the ray
against each of them.
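For completeness, cybereality's ray-cast suggestion amounts to unprojecting the cursor into a view ray. A minimal sketch, assuming a simple symmetric perspective frustum (with SDK distortion the cursor would first have to be mapped back to the pre-distortion render target, i.e. the map M under discussion):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Build an eye-space picking ray from a cursor position given in NDC
// ([-1,1] on both axes, +y up), for a symmetric frustum looking down -z.
// fovY is the full vertical field of view in radians. The ray origin is
// the eye position; only the direction is computed here.
Vec3 pickRayDirEyeSpace(double ndcX, double ndcY, double fovY, double aspect) {
    double tanHalf = std::tan(0.5 * fovY);
    // Corresponding point on the plane z = -1 in eye space.
    Vec3 d { ndcX * tanHalf * aspect, ndcY * tanHalf, -1.0 };
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{ d.x / len, d.y / len, d.z / len };
}
```

The resulting direction would then be transformed into world space by the inverse view matrix before intersection testing against the scene.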
I noticed that the files ovr_stereo.h/cpp contain a function
// A set of "reverse-mapping" functions, mapping from real-world and/or texture space back to the framebuffer.
Vector2f TransformTanFovSpaceToScreenNDC( DistortionRenderDesc const &distortion,
const Vector2f &tanEyeAngle, bool usePolyApprox /*= false*/ )
Is this a function I could use? There is little documentation and I am not sure I am interpreting the terminology
correctly. For example, what does "tanEyeAngle" mean? Is the input/output in NDC (normalized device coordinates)?
- rjoyce (Honored Guest): Why not just render a mouse pointer pre-distortion and keep all the info on your side of the distortion?
- HartLabs (Honored Guest): Even when using a regular monitor, you have to cast a ray and find the object it collides with for mouse selection to work in a 3D rendered scene.
- mka (Explorer): 1. If you don't know how things are distorted, I don't know how keeping track of a cursor before the distortion
would enable you to determine what it visually points to.
2. One does not have to cast a ray for picking. I color-encode my objects and render them to a back buffer.
Then all I have to do is check the color of the pixel I pointed at.
- HartLabs (Honored Guest): Ah OK, it was hard to tell what your technical requirements/savvy were. From your initial post it seemed like you had never implemented this type of thing before outside of something like a grid-based 2D system.
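The color-ID picking mka describes above can be sketched as follows: render each pickable object to an offscreen buffer in a unique flat color, then read back the pixel under the cursor. Packing an ID into 8-bit RGB allows 2^24 distinct objects; the ID pass must disable lighting, blending, MSAA, and dithering so the colors survive exactly.

```cpp
#include <cstdint>

struct RGB8 { uint8_t r, g, b; };

// Pack a 24-bit object ID into an RGB color for the picking pass.
RGB8 encodeID(uint32_t id) {
    return RGB8{ uint8_t(id & 0xFF),
                 uint8_t((id >> 8) & 0xFF),
                 uint8_t((id >> 16) & 0xFF) };
}

// Recover the object ID from the color read back under the cursor.
uint32_t decodeID(RGB8 c) {
    return uint32_t(c.r) | (uint32_t(c.g) << 8) | (uint32_t(c.b) << 16);
}

// After the ID pass, the pixel would be fetched with, e.g.,
//   glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, &pixel);
// with ID 0 conveniently reserved for "nothing hit" (the clear color).
```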
Can I ask why you want to use the system mouse and not a soft mouse in 3D in your program? It seems like the system mouse will quickly lead to trouble, for instance thinking it is in front of an object in the left eye when it is actually in front of an object in the right. I have this problem all the time while moving windows off of the Oculus in extended mode. I can understand you not wanting to have to rewrite your current picker, but even with the distortion matrix you would have the left/right-eye problem.
- mka (Explorer): No, I'm not new to graphics programming, HartLabs. I was a computer science professor before I retired.
Currently I am working on a program where one traverses a 3-dimensional manifold.
Since one is only seeing one object, one can just restrict the system mouse to the left view window
and pick from the left eye view. Actually, I also have a mode where I draw 2 mice. In addition to the system
mouse (for simplicity I show it in the shape of a '+'), I draw a dual in the right view window. This looks
better, but it's not easy under Windows and OpenGL and I may rethink my approach to letting a user
interactively pick an object.
What precisely are you referring to when you say "soft mouse"?
- HartLabs (Honored Guest): By soft mouse I mean what rjoyce suggested, and what it sounds like you are already doing with your second mouse. Replace the system mouse with a '+' and you can still pick using its x,y position and the texture you are passing to the distortion shader, and it will remain accurate. It may move a bit weirdly/slowly in the corners of vision, but if people are looking at what they want to move, it may be unnoticeable. The other side of that is that picking post-distortion would cause objects to move very quickly in the corners, and the mouse can be moved outside of the viewable bounds.
It's hard to say how this would all apply to your program without seeing it and having more details, though.
- mka (Explorer): OK, I see what you are saying: ignore the system mouse entirely. I'll think about that.
That still leaves my original question, what about the inverse map from post to pre distortion?
The Rift developers have the map, so why not make it available to users?
And what about the TransformTanFovSpaceToScreenNDC map I mentioned earlier? Can someone
clarify it and its parameters?
- HartLabs (Honored Guest): How were you generating your inverse distortion map before? Was it after the transition to a mesh-based distortion?
What specific data are you asking for when you say the developers have the map: the mesh coordinates? Isn't that returned by ovrHmd_CreateDistortionMesh? I haven't played around with it at all, and I guess I am a bit confused about your process, the data needed, and what necessary information is hidden in the SDK.
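HartLabs's point suggests one way to get the map M without reimplementing the distortion math: the distortion mesh already tabulates it, since each mesh vertex pairs a post-distortion framebuffer position with the render-target coordinate sampled there. Interpolating over the vertices therefore approximates M. The struct below is a simplified stand-in for the SDK's ovrDistortionVertex (whose actual layout uses screen NDC positions and per-channel tan-eye-angle UVs), and the lookup is deliberately crude:

```cpp
#include <vector>

// Simplified stand-in for a distortion-mesh vertex: a post-distortion
// screen position paired with the pre-distortion render-target UV.
struct MeshVertex {
    double screenX, screenY;  // post-distortion position, NDC
    double u, v;              // pre-distortion render-target UV
};

// Nearest-vertex lookup: for a fine mesh the error is under one mesh
// cell. Barycentric interpolation over the mesh triangles would be the
// accurate version; this linear scan is the simplest possible sketch.
MeshVertex lookupUV(const std::vector<MeshVertex>& mesh,
                    double screenX, double screenY) {
    const MeshVertex* best = &mesh.front();
    double bestD = 1e300;
    for (const MeshVertex& mv : mesh) {
        double dx = mv.screenX - screenX, dy = mv.screenY - screenY;
        double d = dx * dx + dy * dy;
        if (d < bestD) { bestD = d; best = &mv; }
    }
    return *best;
}
```

With this, picking reduces to: take the cursor's screen NDC, look up the corresponding render-target coordinate, and feed that into whatever picker (color-ID or ray-cast) operates on the pre-distortion image.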