Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
mmostajab
Honored Guest
10 years ago

Problems after updating from Oculus API 0.4.2 to 0.4.4

I implemented my own object viewer using the Oculus Runtime Driver 0.4.2. Today I updated my driver and I am using the new API, so I changed three things in my code:

OvrGLConfig.Config.Header.RTSize -> OvrGLConfig.Config.Header.BackBufferSize
OvrEyeRenderDesc.ViewAdjust -> OvrEyeRenderDesc.HmdToEyeViewOffset
ovrHmd_GetEyePose -> ovrHmd_GetHmdPosePerEye


But now the aspect ratio of the output window has changed and the application no longer works as it did before. Has anyone had the same experience, or can anyone help me with this bug?


9 Replies

  • I believe more has changed in the API than just those three things.

    You should compare the code in the examples to your code to see how to do it.
  • So, the application is compiling now.

    The problem is figuring out which function behaves differently now :(
  • "mmostajab" wrote:
    So, the application is compiling now.

    The problem is how can I find which function is working differently now :(


    Based on your screen image, I suspect that the information going into the ovrHmd_EndFrame call is wrong. I would suggest you put a breakpoint there and directly examine the values of the eye texture structures that you are passing.

    If you can post the values, and/or post the code you're using to initialize the structures, I might be able to shed a little more light on what's happening.
  • So, the following code is used to initialize the HMD, set the various values, and render the scene:

    bool initializeHmd()
    {
        if (!ovr_Initialize()) {
            dprintf("HMD: global initialization failed!");
            return false;
        }

        const int numberOfDevices = ovrHmd_Detect();

        if (numberOfDevices > 0) {
            dprintf("HMD: Number of connected devices: %i",
                    numberOfDevices);
            dprintf("Attach to device with id=0");
            hmd = ovrHmd_Create(0);
        } else {
            dprintf("HMD: Using emulated device");
            hmd = ovrHmd_CreateDebug(ovrHmd_DK2);
        }

        if (!hmd) {
            dprintf("HMD: No device could be created!");
            return false;
        }

        std::cout << ovrHmd_GetLastError(NULL);

        // check whether we are in extended-desktop mode:
        // SRGB is not supported in direct mode!
        if (!(hmd->HmdCaps & ovrHmdCap_ExtendDesktop)) {
            settings->Readonly.ExtendDesktop = false;
            settings->Readonly.SRGB = false;
        }

        std::cout << ovrHmd_GetLastError(NULL);

        dprintf("HMD: Product '%s'", hmd->ProductName);
        dprintf("HMD: Manufacturer '%s'", hmd->Manufacturer);
        return true;
    }

    void updateDeviceCaps()
    {
        if (!hmd) {
            dprintf("updateHMDDeviceCaps: hmd is nullptr!");
            return;
        }

        unsigned int caps = 0;
        if (!settings->Device.MirrorToWindow)
            caps |= ovrHmdCap_NoMirrorToWindow;
        if (!settings->Device.Display)
            caps |= ovrHmdCap_DisplayOff;
        if (settings->Device.LowPersistence)
            caps |= ovrHmdCap_LowPersistence;
        if (settings->Device.DynamicPrediction)
            caps |= ovrHmdCap_DynamicPrediction;
        if (!settings->Device.VSyncEnabled)
            caps |= ovrHmdCap_NoVSync;

        ovrHmd_SetEnabledCaps(hmd, caps);
    }

    void updateRenderingCaps()
    {
        if (!hmd) { return; }

    #ifdef WIN32
        if (!ovrHmd_AttachToWindow(hmd, (void*) Base::winId(), 0, 0)) {
            dprintf("HMD: Could not attach to window!\n");
            return;
        }
        dprintf("HMD: connected to window with id HWND=%i", Base::winId());
    #endif

        ovrGLConfig* config = (ovrGLConfig*) &apiConfig;
        config->Config.Header.API = ovrRenderAPI_OpenGL;
        config->Config.Header.BackBufferSize = OVR::Sizei(Base::width(), Base::height());
        config->Config.Header.Multisample = settings->Rendering.Multisample ? 1 : 0;
        config->OGL.Window = (HWND) Base::winId();
        config->OGL.DC = 0;
        config->OGL.Header.API = ovrRenderAPI_OpenGL;
        config->OGL.Header.BackBufferSize = OVR::Sizei(Base::width(), Base::height());
        config->OGL.Header.Multisample = settings->Rendering.Multisample ? 1 : 0;

        // if we have valid FBOs, we also update our render target multisampling
        for (int i = 0; i < ovrEye_Count; ++i)
            if (fbo[i]) fbo[i]->setEnableMultiSample(settings->Rendering.Multisample);

        unsigned int caps = 0;

        if (settings->Rendering.Chromatic)
            caps |= ovrDistortionCap_Chromatic;
        if (settings->Rendering.TimeWarp)
            caps |= ovrDistortionCap_TimeWarp;
        if (settings->Rendering.Vignette)
            caps |= ovrDistortionCap_Vignette;
        if (!settings->Rendering.Restore)
            caps |= ovrDistortionCap_NoRestore;
        if (settings->Rendering.FlipInput)
            caps |= ovrDistortionCap_FlipInput;
        if (settings->Rendering.Overdrive)
            caps |= ovrDistortionCap_Overdrive;
        if (settings->Rendering.HqDistortion)
            caps |= ovrDistortionCap_HqDistortion;
        if (settings->Readonly.SRGB)
            caps |= ovrDistortionCap_SRGB;

        if (!ovrHmd_ConfigureRendering(
                hmd, &apiConfig, caps,
                hmd->DefaultEyeFov, eyeRenderDesc)) {
            dprintf("OVR: ConfigureRendering failed!");
            return;
        }

        // Calculate projections
        for (int i = 0; i < ovrEye_Count; ++i) {
            const float Near = settings->Rendering.Near;
            const float Far = settings->Rendering.Far;

            projection[i] = ovrMatrix4f_Projection(
                eyeRenderDesc[i].Fov, Near, Far, true);

            const float orthoDistance = settings->Rendering.OrthoDistance; // 2D plane is 0.8 meters from the camera
            const ovrVector2f orthoScale =
                OVR::Vector2f(1.0f) / ovrVector2f(eyeRenderDesc[i].PixelsPerTanAngleAtCenter);

            orthoProjection[i] = ovrMatrix4f_OrthoSubProjection(
                projection[i], orthoScale, orthoDistance, eyeRenderDesc[i].HmdToEyeViewOffset.x);

            std::cout << "See: " << ovrHmd_GetLastError(NULL) << std::endl;
        }
        dprintf("OVR: ConfigureRendering successful");
    }



    void initializeGL()
    {
        if (!hmd) { return; }

    #ifdef WIN32
        if (DwmEnableComposition(DWM_EC_DISABLECOMPOSITION) != S_OK)
            dprintf("Cannot disable the DWM composition\n");
    #endif

        dprintf("HMDView: initializeGL");
        if (pwin) {
            pwin->init();
        }

        // Initialize eye rendering information for ovrHmd_Configure.
        // The viewport sizes are re-computed in case RenderTargetSize changed due to HW limitations.
        ovrFovPort eyeFov[2];
        eyeFov[0] = hmd->DefaultEyeFov[0];
        eyeFov[1] = hmd->DefaultEyeFov[1];

        const float DesiredPixelDensity = 1.0f;
        eyeRenderSize[0] = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left, eyeFov[0], DesiredPixelDensity);
        eyeRenderSize[1] = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right, eyeFov[1], DesiredPixelDensity);

        std::cout << "\t w = " << eyeRenderSize[0].w << " h = " << eyeRenderSize[0].h << std::endl;

        // createRenderTargetsOpenSG must take place before createSceneGraph(),
        // since the created FBOs are used as RTs within the stages
        createRenderTargetsOpenSG();
        createSceneGraph();

        // create textures
        for (int i = 0; i < ovrEye_Count; ++i) {
            ovrGLTextureData* texData = (ovrGLTextureData*) &eyeTexture[i];
            texData->Header.API = ovrRenderAPI_OpenGL;
            texData->Header.TextureSize = eyeRenderSize[i];
            texData->Header.RenderViewport = OVR::Recti(eyeRenderSize[i]);
            texData->TexId = 0; // not yet known
        }
    }

    void resizeGL(int width, int height)
    {
        dprintf("HMDView: resize to %ix%i", width, height);
        if (pwin) {
            pwin->resize(width, height);
        }
        updateRenderingCaps();
    }

    void paintGL()
    {
        hmdFrameTiming = ovrHmd_BeginFrame(hmd, 0);

        ovrTrackingState trackState
            = ovrHmd_GetTrackingState(hmd, hmdFrameTiming.ScanoutMidpointSeconds);

        // Update the tracking state and corresponding eye poses
        headPose = trackState.HeadPose.ThePose;
        for (int i = 0; i < ovrEye_Count; ++i) {
            const ovrEyeType eye = (ovrEyeType) i;
            eyeRenderPose[i] = ovrHmd_GetHmdPosePerEye(hmd, eye);
        }

        // we then update the cameras, e.g. projection/modelview matrices
        updateCameras();

        // Fire the scene into our stereo FBOs..
        pwin->render(rtaction);

        // This is crucial: we set the OpenGL handles AFTER the render action
        for (int i = 0; i < ovrEye_Count; ++i) {
            ovrGLTextureData* texData = (ovrGLTextureData*) &eyeTexture[i];
            texData->TexId = (GLuint) pwin->getGLObjectId(sceneTex[i]->getGLId());
        }

        // And present the textures...
        ovrHmd_EndFrame(hmd, eyeRenderPose, eyeTexture);
    }
  • "mmostajab" wrote:
    So, the following code is used to initialize the HMD, set the various values, and render the scene:
    ...


    Generally speaking, that all looks OK to me. However, I would still go through the steps of adding a breakpoint at the ovrHmd_EndFrame call and finding out what the texture viewports and sizes are set to. Additionally, if that doesn't reveal anything, I would add an additional code path that renders the scene content to the framebuffer and then re-renders it to the screen, without using any distortion. Essentially, you just drop the begin/end frame calls and render the textures to full-screen quads with the viewports set to the left and right halves of the screen. If the textures look good that way, you know that the issue is in the values you're passing to the distortion function. If they don't look good, then you know that the problem is in your rendering code, possibly related to how you're handling the poses or viewports.

    If you do decide to try that latter approach, bear in mind that you cannot use the getEyePosePerFrame method, as it can only be called between begin/end frame. Instead you should generally be using getEyePoses to fetch both eye poses at once; it can be called at any time on any thread and doesn't rely on the begin/end state.
  • Thanks for your reply.

    I checked the ovrHmd_EndFrame parameters. The values are identical to what I have in the Oculus sample application. I also rendered the textures directly to the screen, and the output is what I expect. So the distortion function is the problem, but I have checked everything that could be related to it and nothing has changed. It seems that the output texture resolution is very, very low, even though the render description has a resolution identical to the sample projects.

    Can you please help me to find out what is the problem?
  • I found that if I simply divide the recommended texture size by 2 and use that for my textures, the warping and distortion work fine. Does anyone have an idea why this is happening, and how I can fix it?
  • YEAAAAH!

    I found the bug. It is really a tiny bug, but it took me two days to fix. Simply put, I was using the GL_LINEAR_MIPMAP_LINEAR minification filter while the mipmap pyramid was not filled. I just changed the minification filter to GL_LINEAR, and now everything is working fine.

    The interesting thing for me is how it was working before with Oculus API 0.4.2?!?! :)
  • In 0.4.2 they might have been using code that pulled from the base (highest-resolution) LOD all the time.