Request for a simple 0.8 SDK graphics tutorial

Anonymous
Not applicable
Hi 😄

I've seen the OculusRoomTiny demo and this simplified version: https://github.com/mattnewport/OculusRoomReallyTiny

Both are great, but still a bit beyond me with all the DirectX stuff. I understand the initialization and getting head-tracking data, but not all the graphics code.

Would anyone be willing to make a tutorial that demonstrates the absolute minimum needed to output something visual to the Rift, i.e. a triangle, or even just a window with a background colour and/or some text?

It doesn't need to do anything: no rooms, chairs, tables or flying balls, no being able to move around, etc. like in the examples above.

Just very simple graphical output to the Rift using DirectX.

It would be greatly appreciated if someone would kindly do this.
16 REPLIES

cybereality
Grand Champion
Really, the Oculus Room Tiny sample that comes with the SDK is just about as basic as it gets. Yes, a triangle would be simpler, but too simple to be practical for anything beyond that.

If you are not comfortable working at the DirectX/OpenGL level, then download Unity or Unreal.

brantlew
Adventurer
Decreasing the geometry would not reduce the TinyRoom sample in any substantial way. Almost all of the boilerplate code to set up the rendering would still be necessary even if you are just displaying one triangle. The additional geometry of that scene is not a substantial amount of code.

There may be samples in the wild that utilize OpenGL 1, but I'm not sure.

mattnewport
Protege
I posted the "Oculus Room Really Tiny" cut down version of Oculus Room Tiny because I felt there was more code than necessary in the official sample but I don't think there's much more that can be stripped out beyond that. As brantlew says, a fairly siginficant fraction of the code is just required set-up to get a Direct3D 11 device initialized and displaying anything. The code could be simplified a bit by stripping out texturing but there's not a lot of code dedicated to the scene geometry specifically.

If you're struggling with the Direct3D elements I'd suggest looking for some introductory Direct3D 11 tutorials - very little of that code is specific to VR / Oculus. Once you grasp the basics of Direct3D 11 it should be easier to understand the additional code for interfacing with the Rift and displaying two eye views instead of a single view.

I can give you a quick rundown of what's happening in 'Really Tiny':


  • Code to line 80: including required headers and linking required libs, defining a quick termination VALIDATE macro to bail out on any Direct3D errors, defining a bunch of typedefs for COM smart pointers to manage lifetimes of D3D objects (the D3D API is based on COM).

  • Window struct: minimal code to handle creating a Win32 Window and handling essential Window messages. This is boilerplate Win32 code and not D3D specific.

  • DepthBuffer struct: minimal wrapper over a D3D texture and depth stencil view that represent a depth buffer / z buffer. D3D makes a distinction between resources (a block of memory holding graphics data like a texture or vertex buffer) and a view (a binding of that data to a specific part of the graphics pipeline); see the sketch after this list.

  • DirectX11 struct: wraps D3D and DXGI objects required to render pretty much anything using D3D11: the device and device context, swap chain, back buffer, a vertex and pixel shader, input layout, sampler state and constant buffer. The only thing you could drop by going to a single triangle would be the sampler state (not needed if you're not sampling any textures).

  • createTexture function: creates a texture and shader resource view and fills it with the appropriate pattern for wall / ceiling / floor. This code would not be needed for a single triangle or untextured scene.

  • Vertex struct, TriangleSet struct and Model struct: handle creating and rendering the scene geometry, which is basically a bunch of cuboids created by the AddBox function. The Model Render function would not be much simpler for a single triangle; all you could drop are the PSSetSamplers and PSSetShaderResources calls if you weren't using any textures.

  • Scene struct: just hard codes the position, size and color of the boxes that make up the scene and creates them in code.

  • Camera struct: generates an appropriate view matrix for a camera at the specified position and rotation.

  • OculusTexture struct: one of the bits of Oculus specific code, this handles creating the Oculus SDK swap texture sets which are used to communicate the contents of the eye buffers to the SDK. Again, we also need a D3D 'view' which allows us to render to the swap textures (a render target view in this case).

  • DirectX11 constructor: does all the DXGI and D3D initialization we need to render anything. Most of this is not Oculus specific and would be required even to render a single triangle. Any Direct3D 11 tutorial should cover this stuff as it is not really VR specific.

  • MainLoop function: does the actual per-frame updates and rendering. Again, much of this would still be required even for a single triangle.
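
To make the resource/view distinction from the DepthBuffer item concrete, here is a minimal sketch of creating a depth buffer in plain D3D11 (illustrative only, not code from the sample; the CreateDepthBuffer helper name is just for this example):

```cpp
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// The texture is the resource (a block of GPU memory); the depth stencil view
// is what binds that resource to the output-merger stage as a depth buffer.
bool CreateDepthBuffer(ID3D11Device* device, int width, int height,
                       ComPtr<ID3D11Texture2D>& tex,
                       ComPtr<ID3D11DepthStencilView>& dsv)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = width;
    desc.Height           = height;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_D32_FLOAT;   // depth-only format
    desc.SampleDesc.Count = 1;                        // no MSAA
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_DEPTH_STENCIL;

    // Create the resource...
    if (FAILED(device->CreateTexture2D(&desc, nullptr, &tex)))
        return false;

    // ...then the view onto it that the pipeline actually uses.
    return SUCCEEDED(device->CreateDepthStencilView(tex.Get(), nullptr, &dsv));
}
```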


If you have any specific questions about parts of the code you don't understand, post them here and I'll try to answer, but I'd really suggest going and working through some basic non-VR Direct3D 11 tutorials until you get to the point where you understand all the non-VR-specific code (which is most of the Direct3D code).

Anonymous
Not applicable
Thanks for the replies 😄 .

Cybereality and Brantlew, I appreciate that the TinyRoom demo is simple to you, but it's not as simple as it gets.

Displaying a triangle or an empty window that does nothing would be simpler. And for someone who is learning, like me, it would allow me to concentrate on what the SDK does.

As for Unity/Unreal, it doesn't really interest me; I'm interested in the Oculus SDK.

mattnewport, thanks for the explanation, I will go through it against the source code. You're absolutely right that I need to understand DirectX 11. I'm relatively OK with the basics, and I understand the process of creating the window, but I lack DirectX 11 knowledge. If I make progress but get really stuck I will ask you, thanks.

Mars3D
Honored Guest
Here's a really simple 0.8 SDK example using OpenGL. It just draws a triangle:

http://www.mars3d.com/OculusBasic.zip

cybereality
Grand Champion
"Zunfix" wrote:
Displaying a triangle or an empty window that does nothing would be simpler. And for someone who is learning, like me, it would allow me to concentrate on what the sdk does.

No, that doesn't make sense. Having the Oculus sample open an empty window doesn't tell you anything about initializing the Oculus SDK or rendering in stereo. And, as explained above, displaying a single triangle and displaying a number of cubes have nearly the same complexity (most of the code is necessary setup that will happen whether you render a triangle or Gears of War).

If you need to understand how to open a window or draw a triangle, there are dozens of websites (including Microsoft's own docs) and many books showing exactly that. This is not VR specific, and it is expected that you understand these simple concepts before trying to write a VR-enabled 3D engine.

I'd recommend you buy a DirectX11 book to cover the basics first before attempting to do this. The Frank Luna books are some of the best in this regard: http://www.amazon.com/Introduction-3D-G ... 1936420228

Anonymous
Not applicable
Mars3D thx, I will look at that... it's not DX though :mrgreen:

Cyber, I can define, register & create/show a window, initialize DX and render a triangle, all on my monitor. It's getting it on the Rift that I struggle with from the TinyRoom sample. But yes, a firm grasp of DX, which I don't have, would help.

I'm not trying to make a game or anything useful, just trying to learn, that's all.

Thanks for your input 😄 .

LKostyra
Protege
I would tend to agree with Cyber.

These demos are the least you have to do in order to make an image visible on your screen. Yes, they do exceed standard "Hello Triangle" D3D/OGL stuff (creation of Frame Buffer Objects in OGL, for example), but it is necessary for the Rift to work. I went through them recently all by myself, integrating my school project with the Rift, and this truly is the absolute minimum.

If it helps, Oculus SDK functions and types begin with an "ovr" prefix. Look for these, and most importantly, see how the rendering pipeline works without the Rift. Then check what is changed to make it work with the Rift.

galopin
Heroic Explorer
Going from a sample that can render a triangle to displaying something on the Rift is like 20 to 30 lines of code (and I am being generous). If you can't do 30 lines of code, you are not yet close to seeing the light at the end of the tunnel!

The basics of adding the Rift to a DX11 application are very simple:

At the initialisation stage: ovr_Initialize, ovr_Create, ovr_GetHmdDesc, and ovr_ConfigureTracking.

As the core change, the IDXGISwapChain is replaced by a texture set created by ovr_CreateSwapTextureSetD3D11, and IDXGISwapChain::Present is replaced by ovr_SubmitFrame. Instead of creating render target views to render to the IDXGISwapChain buffers, you create the render target views from the buffers in the OVR swap texture set.

You get the dimensions for the texture set from ovr_GetFovTextureSize.
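
Here is a rough sketch of that setup (illustrative only, not from any sample; the InitRift helper and its parameter layout are made up for this post). It assumes 0.8-era signatures: ovrHmd as the session handle, a miscFlags argument on ovr_CreateSwapTextureSetD3D11, and the ovrD3D11Texture union for reaching the underlying ID3D11Texture2D. These details changed between SDK versions, so verify them against your OVR_CAPI headers:

```cpp
#include <vector>
#include <d3d11.h>
#include <wrl/client.h>
#include <OVR_CAPI_D3D.h>

// Sketch only: window/device creation and most error handling are omitted.
// Creates one shared texture set, double-wide so both eyes fit side by side.
bool InitRift(ID3D11Device* device,
              ovrHmd& hmd, ovrHmdDesc& hmdDesc,
              ovrSwapTextureSet*& textureSet, ovrSizei& eyeSize,
              std::vector<Microsoft::WRL::ComPtr<ID3D11RenderTargetView>>& rtvs)
{
    // nullptr here; as suggested below, passing init params with a log callback is better.
    if (OVR_FAILURE(ovr_Initialize(nullptr))) return false;

    ovrGraphicsLuid luid;
    if (OVR_FAILURE(ovr_Create(&hmd, &luid))) return false;

    hmdDesc = ovr_GetHmdDesc(hmd);
    ovr_ConfigureTracking(hmd, ovrTrackingCap_Orientation | ovrTrackingCap_Position, 0);

    // Ask the SDK how big each eye buffer should be.
    eyeSize = ovr_GetFovTextureSize(hmd, ovrEye_Left,
                                    hmdDesc.DefaultEyeFov[ovrEye_Left], 1.0f);

    // The texture set takes the place of the IDXGISwapChain back buffers.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = eyeSize.w * 2;   // both eyes in one texture
    desc.Height           = eyeSize.h;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    if (OVR_FAILURE(ovr_CreateSwapTextureSetD3D11(hmd, device, &desc, 0, &textureSet)))
        return false;

    // Render target views are created from the swap-set textures, not from a swap chain.
    for (int i = 0; i < textureSet->TextureCount; ++i)
    {
        ovrD3D11Texture* tex = (ovrD3D11Texture*)&textureSet->Textures[i];
        Microsoft::WRL::ComPtr<ID3D11RenderTargetView> rtv;
        if (FAILED(device->CreateRenderTargetView(tex->D3D11.pTexture, nullptr, &rtv)))
            return false;
        rtvs.push_back(rtv);
    }
    return true;
}
```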

Then every frame, you start with ovr_GetPredictedDisplayTime and ovr_GetTrackingState to get the head position, and you end by filling an ovrLayerEyeFov with the tracking pose you used at the start of the frame and the texture set, then calling ovr_SubmitFrame.
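
A matching per-frame sketch under the same assumptions (RenderRiftFrame is again just an illustrative name, and the eye poses are simplified to the raw head pose, where a real app would apply the per-eye offsets):

```cpp
#include <OVR_CAPI_D3D.h>

// Per-frame sketch: the actual D3D11 drawing (binding the render target view for
// textureSet->CurrentIndex, setting a viewport per eye, issuing draw calls) is
// represented by the comment in the middle.
void RenderRiftFrame(ovrHmd hmd, const ovrHmdDesc& hmdDesc,
                     ovrSwapTextureSet* textureSet, ovrSizei eyeSize,
                     long long frameIndex)
{
    // Sample tracking at the predicted display time for this frame.
    double displayTime  = ovr_GetPredictedDisplayTime(hmd, frameIndex);
    ovrTrackingState ts = ovr_GetTrackingState(hmd, displayTime, ovrTrue);

    // Advance to the next texture in the set and render both eye views into it.
    textureSet->CurrentIndex = (textureSet->CurrentIndex + 1) % textureSet->TextureCount;
    // ... OMSetRenderTargets / RSSetViewports / draw the scene for each eye ...

    // Describe what was rendered and hand it to the compositor.
    ovrLayerEyeFov layer = {};
    layer.Header.Type = ovrLayerType_EyeFov;
    for (int eye = 0; eye < 2; ++eye)
    {
        layer.ColorTexture[eye] = textureSet;                         // shared double-wide set
        layer.Viewport[eye]     = { { eyeSize.w * eye, 0 }, { eyeSize.w, eyeSize.h } };
        layer.Fov[eye]          = hmdDesc.DefaultEyeFov[eye];
        layer.RenderPose[eye]   = ts.HeadPose.ThePose;                // per-eye offset ignored here
    }

    ovrLayerHeader* layers[] = { &layer.Header };
    ovrResult result = ovr_SubmitFrame(hmd, frameIndex, nullptr, layers, 1);
    if (OVR_FAILURE(result))
    {
        // Always check the result; this is where errors will surface.
    }
}
```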

Good luck. Test every function's return value, as they can give you info about errors, and I strongly suggest adding a log callback at ovr_Initialize so the Oculus SDK can talk to you in case something happens, too.