When I last looked into how the SDK rendering works, it was something like this, IIRC:
The application renders two normal views of the scene, according to viewport parameters supplied by the SDK. The resulting framebuffers and depth buffers are then transformed by the SDK using a distortion mesh; this is supposed to not add latency, although I don't really understand how. But that's not the crux of the issue.
Back then it was pretty clear that the ridiculous performance penalty was due to the fact that we are scaling and distorting the renderbuffer after rendering, and so losing a lot of information. The good old planar view frustum is simply the wrong tool: it was never meant for rendering high-FOV scenes and is basically just a step up from an orthographic projection. Has anybody thought about a solution? Several frustums per eye perhaps? Back then it was something like a 3 megapixel image being projected onto a 2 megapixel screen; has that changed? A smaller problem is that we are rendering 24-bit colors but only using about 16 bits of them because of the PenTile screen, though that's not as much of an issue, is it?
Next is OpenGL support: it was abysmal compared to DirectX; does it still lag behind that much? Any tips on micro frameworks or libraries for C++ that don't require Unity or Unreal Engine?
We have been using the SDK with a custom modern OpenGL engine on nvidia hardware with no problems since the 0.8 SDK, and now 1.3.
The distortion pass the SDK provides will of course add some degree of latency, because it has to do some work! The performance of Oculus' software in this area has steadily improved, especially with the help of newer GPU drivers from nvidia/AMD, and I expect it to continue to do so.
There are always benefits to working in higher bit-depth colour ranges than you can display, as HDR rendering has demonstrated, for example.
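In GL terms the usual pattern is to render into a float target and only quantise down to the displayable range at the very end. A rough sketch (width/height here just stand for whatever your eye buffer size is):

```cpp
// Create a 16-bit float colour target to render HDR into; tonemap to the
// 8-bit (or PenTile) backbuffer as a final pass.
GLuint hdrTex = 0, hdrFbo = 0;
glGenTextures(1, &hdrTex);
glBindTexture(GL_TEXTURE_2D, hdrTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
             GL_RGBA, GL_HALF_FLOAT, nullptr);
glGenFramebuffers(1, &hdrFbo);
glBindFramebuffer(GL_FRAMEBUFFER, hdrFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, hdrTex, 0);
// ...draw the scene here, then run a tonemapping pass to the backbuffer...
```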
If the majority of your program's performance is being spent in the distortion process, then your program is probably not doing very much of anything, to be honest. However, there are things like GPU hardware support for asynchronous compute shaders (currently only on AMD) that can, in theory and from reports in practice, improve performance in this area; you may be interested in those given your current line of thinking.
For the rest of what you say, I would advise reading the many freely available papers online about VR rendering optimisations to gain a better understanding. There are many good ones from Valve, nVidia and AMD from this year and last, which should make clear how and why things work the way they do, along with newer and upcoming performance tricks.
Writing an initial, straightforward Win32, modern OpenGL application that uses the Oculus SDK shouldn't take more than a week, as the SDK samples are good. I have also seen a few people doing fine with SDL and GLFW, but I haven't had the need to use them myself.
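To give a feel for it, the core per-frame loop against the 1.x C API is roughly the sketch below. This is abbreviated from memory, so treat it as a sketch rather than a working sample: swap-chain/FBO plumbing and error handling are left out, renderScene() is a stand-in for your own engine, and header/field names may differ slightly between SDK versions.

```cpp
#include <OVR_CAPI.h>
#include <OVR_CAPI_GL.h>
#include <OVR_CAPI_Util.h>   // ovrMatrix4f_Projection

// Placeholder for your engine's draw path.
void renderScene(const ovrPosef& eyePose, const ovrMatrix4f& proj, const ovrRecti& viewport);

void drawFrame(ovrSession session, const ovrHmdDesc& hmd,
               ovrTextureSwapChain chain[2], ovrRecti eyeViewport[2],
               long long frameIndex)
{
    // Per-eye FOV and eye offsets come from the SDK, not from us.
    ovrVector3f hmdToEye[2];
    for (int eye = 0; eye < 2; ++eye) {
        ovrEyeRenderDesc desc = ovr_GetRenderDesc(session, (ovrEyeType)eye, hmd.DefaultEyeFov[eye]);
        hmdToEye[eye] = desc.HmdToEyeOffset;
    }

    // Predicted head pose for the moment this frame will actually be displayed.
    double displayTime  = ovr_GetPredictedDisplayTime(session, frameIndex);
    ovrTrackingState ts = ovr_GetTrackingState(session, displayTime, ovrTrue);
    ovrPosef eyePose[2];
    ovr_CalcEyePoses(ts.HeadPose.ThePose, hmdToEye, eyePose);

    ovrLayerEyeFov layer = {};
    layer.Header.Type  = ovrLayerType_EyeFov;
    layer.Header.Flags = ovrLayerFlag_TextureOriginAtBottomLeft; // OpenGL texture origin

    for (int eye = 0; eye < 2; ++eye) {
        // Bind the current swap-chain texture to an FBO and set the viewport here (elided).
        ovrMatrix4f proj = ovrMatrix4f_Projection(hmd.DefaultEyeFov[eye], 0.1f, 1000.0f, ovrProjection_None);
        renderScene(eyePose[eye], proj, eyeViewport[eye]);
        ovr_CommitTextureSwapChain(session, chain[eye]);

        layer.ColorTexture[eye] = chain[eye];
        layer.Viewport[eye]     = eyeViewport[eye];
        layer.Fov[eye]          = hmd.DefaultEyeFov[eye];
        layer.RenderPose[eye]   = eyePose[eye];
    }

    // The SDK's compositor does the distortion/timewarp after this call.
    ovrLayerHeader* layers = &layer.Header;
    ovr_SubmitFrame(session, frameIndex, nullptr, &layers, 1);
}
```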
As a small note, I don't believe the timewarp uses a distortion mesh, just regular compute shader post-processing, because it executes on a compute queue on nVidia (as shown here: https://forums.oculus.com/community/discussion/comment/354650#Comment_354650). It is easier to implement anyway, and they can resolve all the layers in one go quite easily.
It triggers at a nice 75Hz on my computer even when I force the app to take as much as 40ms to render a frame.
OpenGL is dead for intensive applications. On Windows the future is DX12; on other OSes it is Vulkan. DX11 and OpenGL are for low-class programmers, students, and smaller applications where it is not necessary to spend the time and money on the newer APIs.
As someone who is going the Vulkan rather than DX12 route...
Be careful about jumping into Vulkan right now if you just want to get things up and running with good performance. Unfortunately it isn't quite ready for prime time yet, and OpenGL 4.4+ is a good interim solution while moving towards it. nVidia's Vulkan/OpenGL combo is quite useful in the short term for developers (not for shipping applications), and currently we are beating our Vulkan performance with it plus custom nvidia extensions, but we expect that to change as our code, the drivers, and the Vulkan shader compilers get better.
Also, Oculus doesn't officially support Vulkan yet (I hope it comes soon!), so for development on nvidia hardware your best bet is an interim Vulkan/OpenGL mix.
Personally I hope future students learn Vulkan instead of OpenGL and DX12! I think calling OpenGL and DX11 users low-class programmers is a bit harsh, if amusing; WebGL users, fair enough 😉
It could be interesting if nvidia/AMD added specific hardware blocks for VR distortion in the future - I suspect this couldn't happen until two generations after the upcoming one, though. Especially as current tests show that running a fair amount of your own game's compute work alongside VR poses some difficult compromises.
To be clear, the performance issue was not the distortion process itself but the requirement of rendering roughly 3 times as many pixels as would be necessary for a 2D 1080p screen.
What are the specific advantages of using Vulkan for VR? Is there a high-level wrapper library à la SFML for it out there? I see that the new version of GLFW supports it; it's not as batteries-included as I would hope, but unless there are other alternatives for me (I'm into procedurally generated content) I'll try it out.
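From poking at the GLFW 3.2 docs, its Vulkan support boils down to roughly the untested sketch below: it hands you the required instance extensions and a VkSurfaceKHR, and everything after that (devices, swapchain, pipelines) is still manual Vulkan, which is what I mean by not batteries-included.

```cpp
#define GLFW_INCLUDE_VULKAN   // makes GLFW pull in vulkan.h and declare its Vulkan helpers
#include <GLFW/glfw3.h>
#include <cstdio>

int main()
{
    if (!glfwInit() || !glfwVulkanSupported())
        return 1;

    // Instance extensions GLFW needs for surface creation (VK_KHR_surface + the platform one).
    uint32_t extCount = 0;
    const char** exts = glfwGetRequiredInstanceExtensions(&extCount);

    VkApplicationInfo app{VK_STRUCTURE_TYPE_APPLICATION_INFO};
    app.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo ci{VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO};
    ci.pApplicationInfo        = &app;
    ci.enabledExtensionCount   = extCount;
    ci.ppEnabledExtensionNames = exts;

    VkInstance instance;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS)
        return 1;

    // GLFW_NO_API: just give me a window, no GL context.
    glfwWindowHint(GLFW_CLIENT_API, GLFW_NO_API);
    GLFWwindow* window = glfwCreateWindow(1280, 720, "vk test", nullptr, nullptr);

    VkSurfaceKHR surface;
    if (glfwCreateWindowSurface(instance, window, nullptr, &surface) != VK_SUCCESS)
        return 1;

    std::printf("instance + surface created with %u extensions\n", extCount);
    // ...physical device / logical device / swapchain selection would go here...

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```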
"Has anybody thought about a solution? Several frustums per eye perhaps?" Sounds like you are looking for Multi-Res Shading (https://developer.nvidia.com/vrworks).
The advantage of Vulkan/DX12 for VR is massive. For a decade games have in large part been badly CPU-bound by DX and the driver, and rendering 2 views makes it twice as bad. The new APIs are incredibly lighter on the CPU and provide new features to be smarter about how we send frames 🙂 plus explicit multi-GPU, which is handy for VR.
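For example, here is a rough, untested sketch of the kind of thing that is basically impossible to do well with GL/DX11: give every thread its own command pool, record draw calls in parallel, then hand all the command buffers to a single vkQueueSubmit (recordInParallel and the thread count are just made up for the example):

```cpp
#include <vulkan/vulkan.h>
#include <thread>
#include <vector>

struct PerThread {
    VkCommandPool   pool = VK_NULL_HANDLE;
    VkCommandBuffer cmd  = VK_NULL_HANDLE;
};

// Command pools are externally synchronised, so each worker thread gets its own.
std::vector<PerThread> recordInParallel(VkDevice device, uint32_t queueFamily, unsigned threadCount)
{
    std::vector<PerThread> slots(threadCount);

    for (auto& s : slots) {
        VkCommandPoolCreateInfo poolInfo{VK_STRUCTURE_TYPE_COMMAND_POOL_CREATE_INFO};
        poolInfo.queueFamilyIndex = queueFamily;
        vkCreateCommandPool(device, &poolInfo, nullptr, &s.pool);

        VkCommandBufferAllocateInfo allocInfo{VK_STRUCTURE_TYPE_COMMAND_BUFFER_ALLOCATE_INFO};
        allocInfo.commandPool        = s.pool;
        allocInfo.level              = VK_COMMAND_BUFFER_LEVEL_PRIMARY;
        allocInfo.commandBufferCount = 1;
        vkAllocateCommandBuffers(device, &allocInfo, &s.cmd);
    }

    // Each thread records its share of the frame independently -- no driver lock in the way.
    std::vector<std::thread> workers;
    for (auto& s : slots) {
        workers.emplace_back([&s] {
            VkCommandBufferBeginInfo begin{VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO};
            vkBeginCommandBuffer(s.cmd, &begin);
            // ... record this thread's draw calls here ...
            vkEndCommandBuffer(s.cmd);
        });
    }
    for (auto& w : workers) w.join();

    // The recorded buffers can then all go into one vkQueueSubmit on the main thread.
    return slots;
}
```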
On the GPU we may save a little, because we ditch a lot of automatic safeguards and because we can go more fine-grained thanks to the new flexibility (see the recent Frostbite talk or the Assassin's Creed 4 one; everyone is moving to GPU culling and ditching Umbra 😛).