Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
swinginbird
Honored Guest
11 years ago

Map and lock mouse cursor on surfaces

Hello everyone,

What I'm basically trying to achieve is behaviour similar to the demo scene accessible through the Rift's Configuration Utility:


When the mouse cursor is moved within the application's window, it is mapped onto the virtual screen in the distance.

Currently, I have no idea how to realize such behaviour. Where can I get the cursor position relative to the app window?

Any hints are much appreciated. Thanks in advance.

3 Replies

  • lamour42
    Expert Protege
    Usually you don't need the mouse position relative to the app window. I think it would be far easier to track mouse movements and translate this to movement in your virtual environment. In non-rift 3d applications mouse movement is usually used to control your view into the 3d world. The same principle can be applied to control a 3d cursor.

    Tracking mouse movements can be done in several ways, too. I use the raw input device for this (look up e.g. RegisterRawInputDevices in the Windows SDK).

    Trying to revert 2d mouse pointer coordinates back into your virtual environment is of course also possible, but mathematically more complicated. Basically you have to reverse the usual projection from 3d coordinates to 2d view plane. Going from 2d point in your window to 3d will give you a line in 3d, not a point. So you will need to determine a meaningful depth coordinate by some other means.

    The mouse position relative to the window frame can be determined via the Windows API: e.g. GetCursorPos gives screen coordinates, which you then translate to client-relative coordinates with ScreenToClient.
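    In code, that last step is just a subtraction. A minimal sketch of the arithmetic, with a hypothetical Point stand-in for the Win32 POINT struct (real code would call GetCursorPos and ScreenToClient rather than doing this by hand):

```cpp
// Hypothetical stand-in for the Win32 POINT struct.
struct Point { int x; int y; };

// ScreenToClient conceptually subtracts the screen-space origin of the
// window's client area from a screen-space point; this mirrors that math.
Point screenToClient(Point screenPos, Point clientOrigin) {
    return { screenPos.x - clientOrigin.x, screenPos.y - clientOrigin.y };
}
```

    So a cursor at screen position (150, 250) over a window whose client area begins at (100, 200) ends up at client position (50, 50).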
  • Hello lamour42, thank you so much for this detailed reply!

    "lamour42" wrote:
    Usually you don't need the mouse position relative to the app window. I think it would be far easier to track mouse movements and translate this to movement in your virtual environment. In non-rift 3d applications mouse movement is usually used to control your view into the 3d world. The same principle can be applied to control a 3d cursor.


    The reason I wanted to go with the mouse position relative to the app window is the following: imagine the app running at a resolution of 1920x1200. I intended to use a texture of the same dimensions and attach it to the object's surface (stretched/compressed to fit the shape of the surface exactly). The only thing left to do would be to take the position of the mouse cursor (relative to the app window, maybe starting with (0,0) in the lower left corner) and draw a mouse-cursor-like thing at the exact same position on the texture attached to the object's surface. Is there an argument against this idea? It seems fairly straightforward to me, or did I miss something?
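    For what it's worth, the mapping described here is essentially a single y-flip, since window coordinates usually start at the top-left while the texture origin is wanted in the lower left. A sketch, assuming the texture has the same resolution as the window (Vec2 and windowToTexture are illustrative names, not from any actual API):

```cpp
struct Vec2 { float x; float y; };

// Map a cursor position in window coordinates (origin top-left, y grows
// downward) to a position on a same-sized texture whose origin is the
// lower-left corner: only the y axis needs to be flipped.
Vec2 windowToTexture(Vec2 cursor, float windowHeight) {
    return { cursor.x, windowHeight - cursor.y };
}
```

    For a 1920x1200 window, a cursor in the bottom-left corner (0, 1200) lands at texture position (0, 0), as intended.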

    "lamour42" wrote:
    Tracking mouse movements can be done in several ways, too. I use the raw input device for this (look up e.g. RegisterRawInputDevices in the Windows SDK).


    I am a bit confused about how to access the Windows API through Unity3D. Can you point me to a resource that shows the proper way to do this?

    "lamour42" wrote:
    Trying to revert 2d mouse pointer coordinates back into your virtual environment is of course also possible, but mathematically more complicated. Basically you have to reverse the usual projection from 3d coordinates to 2d view plane. Going from 2d point in your window to 3d will give you a line in 3d, not a point. So you will need to determine a meaningful depth coordinate by some other means.

    The mouse position relative to the window frame can be determined via the Windows API: e.g. GetCursorPos gives screen coordinates, which you then translate to client-relative coordinates with ScreenToClient.


    Just to clarify: I don't want to work with the mouse in 3D space at all. I want the mouse pointer to be bound to a 2D surface.
  • I'm doing something like this in my Shadertoy VR app. I have a set of code that renders the Qt UI into a texture. I also track mouse events and keep a variable that records the current mouse position in normalized coordinates.

    This is the code in the Rift rendering window that tracks the mouse position:


    void mouseMoveEvent(QMouseEvent * me) {
        // Make sure we don't show the system cursor over the window
        qApp->setOverrideCursor(QCursor(Qt::BlankCursor));
        // Interpret the mouse position as NDC coordinates
        QPointF mp = me->localPos();
        mp.rx() /= size().width();
        mp.ry() /= size().height();
        mp *= 2.0f;
        mp -= QPointF(1.0f, 1.0f);
        mp.ry() *= -1.0f;
        mousePosition.store(mp);
        QRiftWindow::mouseMoveEvent(me);
    }


    During rendering, I create a special framebuffer for compositing the UI image and a texture containing a mouse cursor:



    uiFramebuffer->Bound([&] {
        Context::Clear().ColorBuffer();
        oria::viewport(UI_SIZE);
        // Clear out the projection and modelview here.
        Stacks::withIdentity([&] {
            glBindTexture(GL_TEXTURE_2D, currentUiTexture);
            oria::renderGeometry(plane, uiProgram);

            // Render the mouse sprite on the UI
            QPointF mp = mousePosition.load();
            mv.translate(vec3(mp.x(), mp.y(), 0.0f));
            mv.scale(vec3(0.1f));
            mouseTexture->Bind(Texture::Target::_2D);
            oria::renderGeometry(mouseShape, uiProgram);
        });
    });


    Notice that I'm using identity matrices when I render the UI texture to the framebuffer, so it exactly covers the surface of the attached texture. But when I render the mouse, I translate to the mouse position and then scale down by 0.1 so that the mouse texture only takes up a tiny portion of the screen. The texture has to be set up so that the 'point' of the cursor is in the exact center of the texture. I do this by creating a texture four times the size of the mouse cursor image I load and then using glTexSubImage2D to load the mouse image into the lower right corner.
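    The centering trick can be sketched numerically. Assuming the cursor's 'point' (hotspot) is the top-left pixel of a w x h cursor image, uploading it into the lower-right quadrant of a 2w x 2h texture puts that pixel exactly at the texture's center (Rect and cursorQuadrant are illustrative names, not from the actual code):

```cpp
struct Rect { int x; int y; int w; int h; };

// Offset and size for uploading a w x h cursor image (hotspot at its
// top-left pixel) into a 2w x 2h texture so the hotspot lands at the
// center -- i.e. the xoffset/yoffset/width/height arguments one would
// pass to glTexSubImage2D, using GL's lower-left texture origin.
Rect cursorQuadrant(int w, int h) {
    // Lower-right quadrant: the image's top-left corner sits at (w, h),
    // which is the center of the 2w x 2h texture.
    return { w, 0, w, h };
}
```

    With a 32x32 cursor image, the texture is 64x64 and the hotspot lands at (32, 32), its center.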

    Then I rebind that framebuffer texture and render it to the screen:


    oria::viewport(textureSize());
    mv.withPush([&] {
        mv.translate(vec3(0, 0, -1));
        uiFramebuffer->BindColor();
        oria::renderGeometry(uiShape, uiProgram);
    });


    The result is something similar to this video:


    Although that's actually an earlier implementation, where I was rendering the UI to an image outside of OpenGL, compositing the mouse in there, and then copying the resulting image to OpenGL, which was significantly slower.