OpenGL Direct Rendering CLIENT Mode

MrKaktus
Explorer
Hi,

I've previously successfully implemented OpenGL Extended Desktop Client Rendering and now I've decided to add Direct Rendering mode support to that setup. I've googled the forum a bit and found these references and samples:

https://github.com/jherico/OculusRiftInAction
http://nuclear.mutantstargoat.com/hg/oculus2/file/tip
https://github.com/jimbo00000/RiftSkeleton
viewtopic.php?f=30&t=8948
viewtopic.php?t=17842
http://www.glfw.org/docs/latest/rift.html

The problem is that they all use SDK Rendering.

So I've analyzed the RoomTinyDemo source code in OpenGL SDK Rendering mode to understand how it works, and I've hit a point where I don't know what I'm doing wrong.

I'm working with the latest SDK 0.5.0.1 and NVidia drivers 344.75 on Windows 7 64-bit.
From RoomTinyDemo we can see that in OpenGL SDK Rendering mode the order of calls is as follows:

ovr_Initialize();
ovrHmd_Create();
Platform.InitWindowAndDevice:
- RegisterClass()
- CreateWindow()
- GetDC()
- wglChoosePixelFormatARB()
- SetPixelFormat()
- wglCreateContextAttribsARB()
- Get OpenGL function pointers, check extension support
- ShowWindow()
- A few OpenGL API calls to init FBO, etc.


ovrHmd_GetFovTextureSize()
SDK Rendering setup here:
ovrGLConfig config;
config.OGL.Header.API = ovrRenderAPI_OpenGL;
config.OGL.Header.BackBufferSize = HMD->Resolution;
config.OGL.Header.Multisample = 0;
config.OGL.Window = Platform.Window;
config.OGL.DC = Platform.hDC;
ovrHmd_ConfigureRendering(); // <- SDK Rendering only call
ovrHmd_SetEnabledCaps();
ovrHmd_AttachToWindow();
ovrHmd_ConfigureTracking();
ovrHmd_DismissHSWDisplay(); // <- SDK Rendering only call


I've debugged and replaced all the calls in Platform.InitWindowAndDevice with my own, and I'm seeing that RoomTinyDemo still runs correctly in SDK Direct mode when the window and OpenGL context are created using my code. But in my project, after the window is created and tracking is enabled, the Oculus display stays black and the LED is still orange.
Tracking is working, but the swap chain is not transferring data to the HMD even though ovrHmd_AttachToWindow() returns true.

My current call order after creating the window with an OpenGL rendering context is:


// Calculate resolution of shared Render Target
ovrSizei recommenedTex0Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left, hmd->DefaultEyeFov[0], 1.0f);
ovrSizei recommenedTex1Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right, hmd->DefaultEyeFov[1], 1.0f);
resolutionRT.x = static_cast<uint32>(recommenedTex0Size.w + recommenedTex1Size.w);
resolutionRT.y = static_cast<uint32>(max( recommenedTex0Size.h, recommenedTex1Size.h ));

// Configure SDK rendering (this part is disabled, of course)
#if OCULUS_SDK_RENDERING
assert( GpuContext.screen.created );
configuration.OGL.Header.API = ovrRenderAPI_OpenGL;
configuration.OGL.Header.BackBufferSize = OVR::Sizei(hmd->Resolution.w, hmd->Resolution.h);
configuration.OGL.Header.Multisample = 1;
// configuration.OGL.DC = HDC; // TODO: Connect with window
// configuration.OGL.Window = HWND;
if (!ovrHmd_ConfigureRendering(hmd,
&configuration.Config,
ovrDistortionCap_Chromatic |
ovrDistortionCap_Vignette |
ovrDistortionCap_TimeWarp |
ovrDistortionCap_Overdrive,
hmd->DefaultEyeFov, eye))
return false;
// ovrHmd_SetBool(dev.hmd, "HSW", false); // Disable Health Safety Warning
#endif

ovrHmd_SetEnabledCaps(hmd, ovrHmdCap_LowPersistence | ovrHmdCap_DynamicPrediction );

// Attach to window in direct rendering mode
if (displayMode == Direct)
{
assert( GpuContext.screen.created );
#ifdef EN_PLATFORM_WINDOWS
if (!ovrHmd_AttachToWindow(hmd, GpuContext.device.hWnd, NULL, NULL))
Log << "ERROR: Cannot attach Oculus to window for Direct Rendering!\n";
#endif
}

// Turn on the Oculus
if (!ovrHmd_ConfigureTracking(hmd, ovrTrackingCap_Orientation | ovrTrackingCap_MagYawCorrection | ovrTrackingCap_Position, ovrTrackingCap_Orientation))
return false;

#if !OCULUS_SDK_RENDERING
EyeRenderDesc[0] = ovrHmd_GetRenderDesc(hmd, ovrEye_Left, hmd->DefaultEyeFov[0]);
EyeRenderDesc[1] = ovrHmd_GetRenderDesc(hmd, ovrEye_Right, hmd->DefaultEyeFov[1]);
#endif

ovrHmd_RecenterPose(hmd);
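For reference, the shared render-target sizing at the top (eye widths summed, heights maxed, since both eyes share one texture side by side) can be checked in isolation. A minimal sketch with a stand-in Sizei struct and a hypothetical function name; the real per-eye values come from ovrHmd_GetFovTextureSize():

```cpp
#include <algorithm>
#include <cassert>

// Stand-in for ovrSizei; real values come from ovrHmd_GetFovTextureSize().
struct Sizei { int w; int h; };

// Both eyes render side by side into one texture, so the widths add up
// and the height is the larger of the two recommended eye sizes.
Sizei sharedRenderTargetSize(Sizei left, Sizei right)
{
    return Sizei{ left.w + right.w, std::max(left.h, right.h) };
}
```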


So the order of the calls is identical; I've also checked that all of them are performed and that the parameters are the same.

Can anyone from Oculus confirm that OpenGL CLIENT Direct Mode rendering is working?
I don't want to waste time trying to find the proper setup/config and debugging code if it just isn't working in the SDK right now.
13 REPLIES

Vrally
Protege
"mrkaktus" wrote:

Can anyone from Oculus confirm that OpenGL CLIENT Direct Mode rendering is working?
I don't want to waste time trying to find the proper setup/config and debugging code if it just isn't working in the SDK right now.


I am not working at Oculus, but I do have a working client distortion renderer in direct mode using OpenSceneGraph (which is an OpenGL scene graph). So it is certainly possible to use client distortion rendering with OpenGL.

MrKaktus
Explorer
"pixelminer" wrote:
"mrkaktus" wrote:

Can anyone from Oculus confirm that OpenGL CLIENT Direct Mode rendering is working?
I don't want to waste time trying to find the proper setup/config and debugging code if it just isn't working in the SDK right now.


I am not working at Oculus, but I do have a working client distortion renderer in direct mode using OpenSceneGraph (which is an OpenGL scene graph). So it is certainly possible to use client distortion rendering with OpenGL.


That's great. Could you share your window creation and Oculus setup functions? So that I could compare them and find the difference that makes my code fail to attach?

Vrally
Protege
"mrkaktus" wrote:

That's great. Could you share your window creation and Oculus setup functions? So that I could compare them and find the difference that makes my code fail to attach?


Sure, all my code is available at:
http://github.com/bjornblissing/osgoculusviewer

Probably the most interesting file (regarding initialization) would be:
http://github.com/bjornblissing/osgocul ... device.cpp

But note that the initialization details could be a bit hidden inside the abstraction levels of OpenSceneGraph.

MrKaktus
Explorer
I've analyzed your code together with the OSG window creation code, and I see that you have a different order of calls than the SDK Rendering implementation.

You call these functions even before the window is created, as opposed to the Oculus SDK implementation (which calls them after window and OGL context creation):
ovrHmd_GetFovTextureSize();
ovrHmd_GetRenderDesc();
ovrHmd_SetEnabledCaps();
ovrHmd_ConfigureTracking();

Then OSG creates the window.

Then you call:
ovrHmd_AttachToWindow();

This leads to the conclusion that, at least for SDK 0.5.0.1, it doesn't matter if you init Oculus BEFORE or AFTER the window is created. It looks like only the order below really matters:
- Create Window
- Create OpenGL context
- AttachToWindow

So I've analyzed the OSG window creation process and I'm still not seeing any difference in window/context creation. In fact, OSG uses older, less sophisticated calls than the Oculus SDK and my implementation, and yet you say it works.
Hmmm..

On what configuration did you test it? (OS, bits, driver, card?)

Vrally
Protege
"mrkaktus" wrote:

So I've analyzed the OSG window creation process and I'm still not seeing any difference in window/context creation. In fact, OSG uses older, less sophisticated calls than the Oculus SDK and my implementation, and yet you say it works.
Hmmm..

On what configuration did you test it? (OS, bits, driver, card?)


It most certainly works. We have used it in a rather large research project. The system we ran it on used Windows 7 64-bit, but the application was compiled as a 32-bit application. NVidia GTX770 card with driver version 347.52, and OpenSceneGraph version 3.2.1.

The GitHub project has been cloned and forked by many people, so you could probably find someone who has tried running it on Windows 8 or later, but I have no guaranteed information that it works. (The project can also run on Mac and Linux, but direct mode is not supported there.)

lamour42
Expert Protege
Maybe you should give some details about your rendering loop, not just the setup.

If I remember correctly, I also had the Rift LED staying orange when I had a problem with my ovrHmd_BeginFrameTiming() / ovrHmd_EndFrameTiming() pair. (But I am on DirectX, so it may behave differently.)

MrKaktus
Explorer
Hmm, here's the OSG swap buffer setup:

// Attach a callback to detect swap
osg::ref_ptr<OculusSwapCallback> swapCallback = new OculusSwapCallback(oculusDevice);
gc->setSwapCallback(swapCallback);


void OculusSwapCallback::swapBuffersImplementation(osg::GraphicsContext *gc) {
// Run the default system swapBufferImplementation
gc->swapBuffersImplementation();
// End frame timing when swap buffer is done
m_device->endFrameTiming();
// Start a new frame with incremented frame index
m_device->beginFrameTiming(++m_frameIndex);
}
void GraphicsWindowWin32::swapBuffersImplementation()
{
if (!_realized) return;
if (!::SwapBuffers(_hdc) && ::GetLastError() != 0)
{
reportErrorForScreen("GraphicsWindowWin32::swapBuffersImplementation() - Unable to swap display buffers", _traits->screenNum, ::GetLastError());
}
}

void OculusDevice::endFrameTiming() const {
ovrHmd_EndFrameTiming(m_hmdDevice);
}

void OculusDevice::beginFrameTiming(unsigned int frameIndex) {
m_frameTiming = ovrHmd_BeginFrameTiming(m_hmdDevice, frameIndex);
}


So it boils down to:

SwapBuffers(_hdc)
ovrHmd_EndFrameTiming(m_hmdDevice);
ovrHmd_BeginFrameTiming(m_hmdDevice, frameIndex);


Here's my window creation and main loop:

// Detect Oculus
Ptr<HMD> hmd = nullptr;
if (Input.hmd.available())
{
hmd = Input.hmd.get();
HMDType type = hmd->device();
if ( type == HMDOculusDK1 ||
type == HMDOculusDKHD ||
type == HMDOculusDKCrystalCove ||
type == HMDOculusDK2 )
oculus = ptr_dynamic_cast<OculusX, HMD>(hmd);
}

// Create screen
screen.shadingLanguage = GLSL_4_40;
screen.samples = 1;
if (oculus)
{
if (oculus->mode() == Direct)
{
screen.mode = BorderlessWindow;
screen.display = -1;
}
else
{
screen.mode = debugMode ? Window : Fullscreen;
screen.display = debugMode ? -1 : oculus->display();
}
screen.width = oculus->resolution().width;
screen.height = oculus->resolution().height;
screen.hmd = ptr_dynamic_cast<HMD, OculusX>(oculus);
}
else
{
screen.mode = debugMode ? Window : Fullscreen;
screen.display = 0;
screen.width = debugMode ? 1920 : 0;
screen.height = debugMode ? 1080 : 0;
}

Gpu.screen.create(&screen, string("Sample: Physically Based Rendering"));
Gpu.output.mode(ColorSpaceLinear);
Gpu.vsync(true); // <- tried with and without VSync
screen.hmd = nullptr;

// Configure Oculus
stereoSwitch = false;
stereoReset = true;
stereo = new Stereoscopy(ptr_dynamic_cast<HMD, OculusX>(oculus)); // Stereoscopy wrapper turns HMD on for us
if (oculus)
{
// Create color rendertarget with size recommended by this Oculus
uint32v2 resolution = oculus->renderTarget();
TextureState texSettings;
texSettings.width = resolution.width;
texSettings.height = resolution.height;
texSettings.type = Texture2D;
texSettings.format = FormatRGBA_8; // <- this is just reverse notation of ABGR8
color = Gpu.texture.create(texSettings);

// If the GPU does not support NPOT textures, resize the rendertarget
if (!color)
{
if(!powerOfTwo(texSettings.width)) texSettings.width = nextPowerOfTwo(texSettings.width);
if(!powerOfTwo(texSettings.height)) texSettings.height = nextPowerOfTwo(texSettings.height);
color = Gpu.texture.create(texSettings);
}

// Create depth rendertarget
texSettings.format = FormatD_32;
depth = Gpu.texture.create(texSettings);

// Create Framebuffer
fbo = Gpu.output.buffer.create();
fbo->setColor(0, color);
fbo->setDepth(depth);

// Image rendered to Framebuffer will be used as source for distortion
stereo->source(color);
stereo->on();
}

// Performance counters
averageTime = 0.0;
samples = 0;

// Example
Example example;
StateManager.set(&example);

// Game loop
uint32 frameIndex = 0;
Timer timer;
Time dT;
timer.start();
for(;;)
{
if (stereo)
stereo->startFrame(frameIndex);

dT = timer.elapsed(); // Time of last frame in seconds
timer.start();

StateManager.update(dT);
if (!StateManager.draw())
break;

stereo->display();
frameIndex++;
}
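In case it helps anyone reading: the NPOT fallback above relies on powerOfTwo/nextPowerOfTwo helpers. These are my hypothetical stand-ins (the engine's actual versions may differ), using the usual bit tricks for 32-bit values:

```cpp
#include <cassert>
#include <cstdint>

// True if x is a power of two (zero is not).
bool powerOfTwo(uint32_t x)
{
    return x != 0 && (x & (x - 1)) == 0;
}

// Smallest power of two >= x, for x in [1, 2^31]:
// smear the highest set bit of (x - 1) rightwards, then add one.
uint32_t nextPowerOfTwo(uint32_t x)
{
    x--;
    x |= x >> 1;  x |= x >> 2;  x |= x >> 4;
    x |= x >> 8;  x |= x >> 16;
    return x + 1;
}
```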


Stereo wrapper:

Stereoscopy::Stereoscopy(Ptr<HMD> hmd) :
device(ptr_dynamic_cast<OculusX, HMD>(hmd)),
program(nullptr),
sampler(nullptr),
eyeToSourceUVScale(nullptr),
eyeToSourceUVOffset(nullptr),
eyeRotationStart(nullptr),
eyeRotationEnd(nullptr),
latencyProgram(nullptr),
latencyProjection(nullptr),
latencyColor(nullptr),
latencyBuffer(nullptr),
ready(false),
enable(false),
position(0.0f, 0.0f, 0.0f),
rotation(0.0f, 0.0f, 0.0f)
{
if (!device)
return;

device->on();

// Get recommended render target size for this Oculus
size = device->renderTarget();

// Source rendertarget contains the image for both eyes side by side.
for(uint8 eye=0; eye<2; ++eye)
{
settings[eye].width = size.width;
settings[eye].height = size.height;
settings[eye].viewport.y = 0;
settings[eye].viewport.width = size.width / 2;
settings[eye].viewport.height = size.height;
}
settings[0].viewport.x = 0;
settings[1].viewport.x = size.width / 2;

model = device->distortionModel(settings);

// Check whether we should apply the workaround for Windows 8.1
string effectName("oculus3");
if ( ( System.name() == Windows8 ||
System.name() == Windows8_1 ) &&
( device->display() == Gpu.screen.display() ) )
effectName = string("oculus3win8wa");

Effect effect(eGLSL_1_10, effectName);
program = effect.program();
sampler = program.sampler("inTexture");
eyeToSourceUVScale = program.parameter("EyeToSourceUVScale");
eyeToSourceUVOffset = program.parameter("EyeToSourceUVOffset");
eyeRotationStart = program.parameter("EyeRotationStart");
eyeRotationEnd = program.parameter("EyeRotationEnd");

// Quad for latency tester
float scale = 0.04f;
float aspect = static_cast<float>(size.width) / static_cast<float>(size.height);
float latencyQuad[16] = { 1.0f-scale, 1.0f-(scale*aspect), -1.0f, 1.0f,
1.0f+scale, 1.0f-(scale*aspect), -1.0f, 1.0f,
1.0f-scale, 1.0f+(scale*aspect), -1.0f, 1.0f,
1.0f+scale, 1.0f+(scale*aspect), -1.0f, 1.0f };

Effect latencyEffect(eGLSL_1_10, "resources/engine/shaders/latency");
latencyProgram = latencyEffect.program();
latencyProjection = latencyProgram.parameter("enProjection");
latencyColor = latencyProgram.parameter("color");
latencyBuffer = Gpu.buffer.create(BufferSettings(VertexBuffer, 16, ColumnInfo(Float4, "inPosition")), &latencyQuad);

latencyProjection.set( scene::FrustumSettings(0.15f, 2.0f, float4(1.0f, 1.0f, 1.0f, 1.0f)).projection() );

// Texture source is not set yet!
}

Stereoscopy::~Stereoscopy()
{
if (device)
device->off();
}

void Stereoscopy::on(void)
{
enable = true;
}

void Stereoscopy::startFrame(const uint32 frameIndex)
{
if (!device)
return;

timerR.start();
if (enable)
device->startFrame(frameIndex);
}

void Stereoscopy::source(Texture src)
{
assert( device );

size.width = src.width();
size.height = src.height();

// Source rendertarget contains the image for both eyes side by side.
for(uint8 eye=0; eye<2; ++eye)
{
settings[eye].width = size.width;
settings[eye].height = size.height;
settings[eye].viewport.y = 0;
settings[eye].viewport.width = size.width / 2;
settings[eye].viewport.height = size.height;
}
settings[0].viewport.x = 0;
settings[1].viewport.x = (size.width + 1) / 2;

device->distortionUVScaleOffset(settings);
sampler.set(src);
sampler.wraping(Clamp, Clamp);

texture = src; // Debug
ready = true;
}
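Note the (size.width + 1) / 2 offset for the right eye in source(): for odd source widths the right viewport starts one pixel later, so the two halves still tile the full texture. A small sketch of that arithmetic with a hypothetical Viewport struct:

```cpp
#include <cassert>

struct Viewport { int x, y, width, height; };

// Split a side-by-side stereo source into per-eye viewports.
// Rounding the right eye's x up, as in (width + 1) / 2, keeps
// left.width + right.width == width even for odd widths.
void splitStereoViewports(int width, int height, Viewport out[2])
{
    out[0] = Viewport{ 0,               0, width / 2, height };
    out[1] = Viewport{ (width + 1) / 2, 0, width / 2, height };
}
```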

bool Stereoscopy::display(void)
{
if (enable)
{
// Be sure that source texture is attached
if (!ready)
return false;

// Wait to be as close to VSync as possible (Time-Warp)
device->waitForSync();

// Reproject rendered scene taking into notice lenses distortion
Gpu.output.buffer.setDefault();
Gpu.screen.view(0, 0, Gpu.screen.width(), Gpu.screen.height());
Gpu.scissor.rect(0, 0, Gpu.screen.width(), Gpu.screen.height());

Gpu.color.buffer.clearValue(0.0f, 0.0f, 0.0f, 1.0f);
Gpu.depth.buffer.clearValue(1.0f);
Gpu.color.buffer.clear();
Gpu.depth.buffer.clear();

Gpu.depth.test.off();
Gpu.culling.off();
Gpu.scissor.off();
Gpu.output.blend.off();

// Render latency-tester square
float4 color = device->latencyTesterSquareColor();
if (color.w > 0.0f)
{
latencyColor.set( color );
latencyProgram.draw(latencyBuffer, TriangleStripes);
}

// Model already contains meshes for both eyes
for(uint8 i=0; i<model->mesh.size(); ++i)
{
float2 scale = float2(settings[i].UVScaleOffset[0].x, settings[i].UVScaleOffset[0].y);
float2 offset = float2(settings[i].UVScaleOffset[1].x, settings[i].UVScaleOffset[1].y);

// OpenGL inverts the framebuffer Y axis
scale.y = -scale.y;
offset.y = 1.0f - offset.y;

// OpenGL Windows 8.1 workaround - flip both Y and X axes to emulate rotation by 270 degrees
//scale.x = -scale.x;

eyeToSourceUVScale.set(scale);
eyeToSourceUVOffset.set(offset);

eyeRotationStart.set( device->eyeRotationStartMatrix((Eye)i) );
eyeRotationEnd.set( device->eyeRotationEndMatrix((Eye)i) );
sampler.set(texture);

program.draw(model->mesh[i].geometry.buffer,
model->mesh[i].elements.buffer,
model->mesh[i].elements.type);
}
}

Gpu.display(); // Present

if (enable)
{
if (device->mode() == Desktop)
Gpu.waitForIdle(); // Flush/Sync

device->endFrame(); // Time-Warp: Mark frame end time
}

return true;
}
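The per-eye Y flip in display() above is, as far as I understand it, the usual adjustment when using the SDK's distortion UV scale/offset pair (computed with D3D texture conventions) in an OpenGL pipeline. Isolated with a stand-in float2:

```cpp
#include <cassert>

// Minimal stand-in for the engine's float2 type.
struct float2 { float x, y; };

// Negate the Y scale and mirror the Y offset so UVs computed for a
// top-left texture origin sample correctly in OpenGL's bottom-left space.
void flipUVForOpenGL(float2& scale, float2& offset)
{
    scale.y  = -scale.y;
    offset.y = 1.0f - offset.y;
}
```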


void Display(void)
{
// Swap buffers
SwapBuffers(GpuContext.device.hDC);
}

void Interface::waitForIdle(void)
{
assert( GpuContext.screen.created );
Profile( glFlush() )
Profile( glFinish() )
}


HMD Context:


void Context::HMD::init(void)
{
#if defined(EN_PLATFORM_MACOS) || defined(EN_PLATFORM_WINDOWS)
#if OCULUS_VR
// Initialize
ovr_Initialize();

// Get all detected Oculus Head Mounted Displays
uint32 devices = ovrHmd_Detect();
for(uint32 i=0; i<devices; ++i)
device.push_back(Ptr<input::HMD>(new OculusDK2(i)));
#endif
#endif
}

void Context::HMD::destroy(void)
{
device.clear();
#if defined(EN_PLATFORM_MACOS) || defined(EN_PLATFORM_WINDOWS)
#if OCULUS_VR
ovr_Shutdown();
#endif
#endif
}


And device interface:


OculusDK2::OculusDK2(uint8 index) :
enabled(false),
deviceType(HMDUnknown),
displayId(-1),
displayMode(Desktop),
hmd(ovrHmd_Create(index))
{
assert( hmd );

// Determine Oculus type
if (hmd->Type == ovrHmd_DK1)
deviceType = HMDOculusDK1;
else
if (hmd->Type == ovrHmd_DKHD)
deviceType = HMDOculusDKHD;
else
if (hmd->Type == 5)
deviceType = HMDOculusDKCrystalCove;
else
if (hmd->Type == ovrHmd_DK2)
deviceType = HMDOculusDK2;
else
deviceType = HMDUnknown;

// Check if Oculus can work in Direct Rendering mode
if (!(hmd->HmdCaps & ovrHmdCap_ExtendDesktop))
displayMode = Direct;

// Get display number on desktop
if (displayMode == Desktop)
{
#ifdef EN_PLATFORM_WINDOWS
sint32 displayNumber = 0;
DISPLAY_DEVICE Device;
memset(&Device, 0, sizeof(Device));
Device.cb = sizeof(Device);
while(EnumDisplayDevices(NULL, displayNumber, &Device, EDD_GET_DEVICE_INTERFACE_NAME))
{
if (Device.StateFlags & DISPLAY_DEVICE_ATTACHED_TO_DESKTOP)
{
DEVMODE DispMode;
memset(&DispMode, 0, sizeof(DispMode));
DispMode.dmSize = sizeof(DispMode);
if ( !EnumDisplaySettings(Device.DeviceName, ENUM_REGISTRY_SETTINGS, &DispMode) )
break;

// Compare name and resolution to find matching Display number
string name = stringFromWchar(Device.DeviceName, 32);
Log << "Display " << displayNumber << " : " << name << endl;
if ( name.compare(0, 12, string(hmd->DisplayDeviceName)) == 0 &&
hmd->Resolution.w == DispMode.dmPelsWidth &&
hmd->Resolution.h == DispMode.dmPelsHeight )
{
Log << "Oculus found on " << displayNumber << " : " << name << endl;
displayId = displayNumber;
break;
}
}
displayNumber++;
}
#endif
}
};
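One thing worth double-checking in the matching loop above: std::string::compare returns 0 when the compared ranges are equal, so its raw return value is truthy on a mismatch. A hypothetical helper that makes the intended test explicit:

```cpp
#include <cassert>
#include <string>

// Hypothetical helper: does deviceName start with the first n characters
// of hmdName, and does the registry resolution match the HMD's?
// std::string::compare returns 0 on equality, so test == 0 explicitly.
bool matchesOculusDisplay(const std::string& deviceName,
                          const std::string& hmdName, std::size_t n,
                          int hmdW, int hmdH, int dispW, int dispH)
{
    return deviceName.compare(0, n, hmdName) == 0 &&
           hmdW == dispW && hmdH == dispH;
}
```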

bool OculusDK2::on(void)
{
if (enabled)
return true;

// Calculate resolution of shared Render Target
ovrSizei recommenedTex0Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left, hmd->DefaultEyeFov[0], 1.0f);
ovrSizei recommenedTex1Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right, hmd->DefaultEyeFov[1], 1.0f);
resolutionRT.x = static_cast<uint32>(recommenedTex0Size.w + recommenedTex1Size.w);
resolutionRT.y = static_cast<uint32>(max( recommenedTex0Size.h, recommenedTex1Size.h ));

ovrHmd_SetEnabledCaps(hmd, ovrHmdCap_LowPersistence | ovrHmdCap_DynamicPrediction );

// Attach to window in direct rendering mode
if (displayMode == Direct)
{
assert( GpuContext.screen.created );
#ifdef EN_PLATFORM_WINDOWS
if (!ovrHmd_AttachToWindow(hmd, GpuContext.device.hWnd, NULL, NULL))
Log << "ERROR: Cannot attach Oculus to window for Direct Rendering!\n";
#endif
}

// Turn on the Oculus
if (!ovrHmd_ConfigureTracking(hmd, ovrTrackingCap_Orientation | ovrTrackingCap_MagYawCorrection | ovrTrackingCap_Position, ovrTrackingCap_Orientation))
return false;


EyeRenderDesc[0] = ovrHmd_GetRenderDesc(hmd, ovrEye_Left, hmd->DefaultEyeFov[0]);
EyeRenderDesc[1] = ovrHmd_GetRenderDesc(hmd, ovrEye_Right, hmd->DefaultEyeFov[1]);

ovrHmd_RecenterPose(hmd);

enabled = true;
return true;
}

void OculusDK2::startFrame(uint32 frameIndex)
{
ovrVector3f hmdToEyeViewOffset[2] = { EyeRenderDesc[0].HmdToEyeViewOffset,
EyeRenderDesc[1].HmdToEyeViewOffset };

startTime = ovrHmd_BeginFrameTiming(hmd, frameIndex);

ovrHmd_GetEyePoses(hmd, 0, hmdToEyeViewOffset, &eyePose[0], NULL); // &hmdState - Samples are tracking head properly without obtaining this info, how??
}

void OculusDK2::waitForSync(void)
{
ovr_WaitTillTime(startTime.TimewarpPointSeconds);
}

void OculusDK2::endFrame(void)
{
ovrHmd_EndFrameTiming(hmd);
}



Swap chain update looks the same to me.

jherico
Adventurer
"mrkaktus" wrote:

This leads to conclusion that at least for SDK 0.5.0.1 it doesn't matter if you init Oculus BEFORE or AFTER window is created. It looks like only order below is really important :
- Create Window
- Create OpenGL context
- AttachToWindow


If the Rift is in direct HMD mode you MUST call ovr_Initialize() before the OpenGL context is created.

MrKaktus
Explorer
Yes, I'm calling ovr_Initialize() and ovrHmd_Create() before:

void Context::HMD::init(void)
{
#if defined(EN_PLATFORM_MACOS) || defined(EN_PLATFORM_WINDOWS)
#if OCULUS_VR
// Initialize
ovr_Initialize();

// Get all detected Oculus Head Mounted Displays
uint32 devices = ovrHmd_Detect();
for(uint32 i=0; i<devices; ++i)
device.push_back(Ptr<input::HMD>(new OculusDK2(i)));
#endif
#endif
}



OculusDK2::OculusDK2(uint8 index) :
enabled(false),
deviceType(HMDUnknown),
displayId(-1),
displayMode(Desktop),
hmd(ovrHmd_Create(index))
{
. . .


but I'm talking about turning HMD tracking on:


bool OculusDK2::on(void)
{
if (enabled)
return true;

// Calculate resolution of shared Render Target
ovrSizei recommenedTex0Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left, hmd->DefaultEyeFov[0], 1.0f);
ovrSizei recommenedTex1Size = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right, hmd->DefaultEyeFov[1], 1.0f);
resolutionRT.x = static_cast<uint32>(recommenedTex0Size.w + recommenedTex1Size.w);
resolutionRT.y = static_cast<uint32>(max( recommenedTex0Size.h, recommenedTex1Size.h ));

ovrHmd_SetEnabledCaps(hmd, ovrHmdCap_LowPersistence | ovrHmdCap_DynamicPrediction );

// Attach to window in direct rendering mode
if (displayMode == Direct)
{
assert( GpuContext.screen.created );
#ifdef EN_PLATFORM_WINDOWS
if (!ovrHmd_AttachToWindow(hmd, GpuContext.device.hWnd, NULL, NULL))
Log << "ERROR: Cannot attach Oculus to window for Direct Rendering!\n";
#endif
}

// Turn on the Oculus
if (!ovrHmd_ConfigureTracking(hmd, ovrTrackingCap_Orientation | ovrTrackingCap_MagYawCorrection | ovrTrackingCap_Position, ovrTrackingCap_Orientation))
return false;


EyeRenderDesc[0] = ovrHmd_GetRenderDesc(hmd, ovrEye_Left, hmd->DefaultEyeFov[0]);
EyeRenderDesc[1] = ovrHmd_GetRenderDesc(hmd, ovrEye_Right, hmd->DefaultEyeFov[1]);

ovrHmd_RecenterPose(hmd);

enabled = true;
return true;
}


which is now turned on after the HWND/HDC/GL context are created during stereoscopy wrapper creation.