Forum Discussion
jherico
11 years ago · Adventurer
Please refactor the C API to avoid breaking OpenGL
Right now there is a single C API for initializing the whole of the SDK: ovr_Initialize. Unfortunately, this API does some fairly hinky low-level manipulation of the OpenGL and Direct3D drivers in order to support Direct HMD mode. This is most likely what causes many of the incompatibility issues with multi-GPU systems (in particular Optimus and DisplayLink).
Unfortunately, you must call this function in order to do so much as detect whether any HMDs are connected to the system, and generally speaking, if you want to support Direct HMD mode you have to call it before you've ever set up your first OpenGL context.
I would suggest an alternative. Break up the initialization function into distinct components based on function, specifically between rendering support (output) and HMD detection and tracking (input). I should be able to call an ovr_InitializeHmds function and thereafter do pretty much everything I can currently do, with the sole exception of ovrHmd_AttachToWindow(). If I detect an HMD, and I detect that it's in direct mode, then I can call another method to initialize the rendering subsystem integration (obviously still prior to creating a context). You can even retain ovr_Initialize as a single method that just calls both of the broken-up methods for compatibility purposes.
This kind of separation should lower the barrier to entry for integrating Rift support into applications. A lot of applications might have performance or stability concerns that would preclude them from using the SDK just because a user might have a Rift. If the impact of integrating the SDK to the extent of checking for the existence of a headset can be lowered in this way, OVR might get more uptake on integration.
5 Replies
- eskil · Honored Guest
I have suggested something like this here:
viewtopic.php?f=20&t=20358
The main problem is the injection between the graphics driver and the application. It creates all kinds of unpredictability and instability. It would be much better if the API explicitly asked the application to provide the resources needed rather than trying to be sneaky. As it is, you can (if you are lucky) make a hack that uses OVR, but it's almost impossible to implement support into a modular design that wants to support multiple accessories and APIs. We are about to get DX12, Mantle, and GLNext, and that will make this approach totally unusable.
- jherico · Adventurer
You say in that thread:
You also need an API that simply lets me give Oculus pixels to be posted to the display, so that I can write an application that uses a CPU-based rendering engine, one that doesn't have anything to do with graphics drivers or even opening a window. ... It should look something like this:
ovrHmd_PostDisplayImage(data, size_x, size_y, OVR_RGB, OVR_UNSIGNED_CHAR);
This isn't really feasible if you want good performance. Regardless of whether you're working in Direct or Extended mode, the GPU is the device that's lighting up the individual pixels on the screen. If you have a function like the above you're clearly sending the data over the CPU/GPU bridge. Sending undistorted pixels across like that every single frame is going to eat up a huge chunk of the available bandwidth between the CPU and GPU for no good reason. Particularly since you don't explain how you got the 'data' in the first place. Presumably you rendered it via OpenGL or Direct3D, which means that it's on the GPU to start with, so in addition to copying the data from the CPU to the GPU, you have to have, somewhere else, copied it in the other direction. Furthermore, in order to apply timewarp, you need not only the image bits but the exact head pose from which they were rendered.
So if you think about it, what you're asking for is basically what the SDK already does in the case of extended mode. The only difference is that instead of passing a bunch of bits back and forth between the GPU and CPU, you're passing a single texture ID, which is a handle for that same set of bits already in GPU memory.
While I agree that Direct mode needs to be improved and, more to the point of this thread, decoupled from basic SDK initialization, it's still a critical-path feature. It accomplishes a number of things that are all really important:
- Hide the Rift from the desktop metaphor, so windows can't get lost on it.
- Automatically ensure that the Rift is running at the appropriate refresh rate and orientation, with low persistence enabled.
- Reduce the multi-frame latency introduced by the conventional OpenGL pipeline. (in the regular pipeline, just because you've called SwapBuffers doesn't mean your frame is actually now on the display)
I'm hoping that GL Next and DX12 will include some extensions for better VR support that the SDK will be able to take advantage of without resorting to the fragile method-injection mechanism in use right now, but for the time being we all need to figure out the best way to support a broad developer audience.
- eskil · Honored Guest
Now obviously you do want a take-my-GL/DX-texture-and-draw-it-to-the-HMD function, for the reasons you outline. Now to do this you shouldn't have to do all the nasty things OVR does.
Besides that, I do want a here-take-my-pixels function too. You might not be rendering on the GPU (ray tracing, video playback...), and in those cases you want to do your own warping. Lots of hardware (Intel, AMD, mobile) has a unified memory architecture, so there is no bus to shuffle over. With the current state of the drivers it would also be a good fallback, despite any performance issues.
The way I see it, OVR could open another window on the HMD, bind it to a new GL context, and then use wglShareLists to link it to the context of the client application; then, when the client gives it one or two textures, switch to its own context and draw to its window.
- lamour42 · Expert Protege
Hmm, isn't client rendering with direct mode exactly what you ask for? At least in DirectX mode I think it is. I would have guessed/hoped that OpenGL mode is similar in that regard.
- jherico · Adventurer
"eskil" wrote:
Now to do this you shouldn't have to do all the nasty things OVR does.
But you do, because that's the only high-performance way to get pixels onto a screen.
"eskil" wrote:
Beside that i do want a here-take-my-pixels function too. Yes you might not be rendering on the GPU (Raytracing, video playback...) and in these cases you want to do your own warping.
This is functionally equivalent to calling glTexImage2D with the bits and passing them to the SDK via ovrHmd_EndFrame().
"eskil" wrote:
The way I see it, OVR could open another window on the HMD, bind it to a new GL context, and then use wglShareLists to link it to the context of the client application; then, when the client gives it one or two textures, switch to its own context and draw to its window.
Opening a window on the Rift implies that it's part of the desktop metaphor, which you don't want. The basic problem is that neither OpenGL nor Direct3D has a standard mechanism to blit a bunch of pixels to a surface that isn't a window on a monitor that's part of the desktop. I'm hoping that will change in the future, but it requires buy-in from OS makers and video card makers, so it's going to be slow going.