Native integration

seeme
Honored Guest
Hi there,

I'm modifying the VRLib/App in order to use the time warp in my native engine.

I have a small wrapper that encapsulates a DirectRender instance and the TimeWarp instance.

Once our renderer is initialized, I call DirectRender::InitForCurrentSurface() (from the same thread as our own ES3 renderer) and then use it in TimeWarp::Startup.

The application crashes in InitForCurrentSurface when calling EGL_SEC_GetSurfaceAddress. The pointers to the windowSurfaces and to EGL_SEC_GetSurfaceAddress both seem all right.

My call stack looks like this:
01-01 02:22:24.555: W/System.err(5351): org.sample: Segmentation fault in thread 5377
01-01 02:22:24.555: W/System.err(5351): at <SAMPLE>.android::Surface::hook_perform(ANativeWindow*, int, ...)(Native Method)
01-01 02:22:24.555: W/System.err(5351): at <SAMPLE>.EGL_SEC_GetSurfaceAddress(Native Method)
01-01 02:22:24.555: W/System.err(5351): at <SAMPLE>.OVR::DirectRender::InitForCurrentSurface()(Native Method)
01-01 02:22:24.555: W/System.err(5351): at dalvik.system.NativeStart.run(Native Method)


Any idea why hook_perform would segfault like this? Am I initializing the direct renderer too soon?

Thanks
11 REPLIES

johnc
Honored Guest
Did you call GL_FindExtensions() first?

The front buffer pointer stuff is going to be removed; it is a remnant of when we were trying to do async time warp on CPU cores, before we had context priority on the GPU. A quirk of that setup was that something had to be drawn to the buffer before a pointer could be handed out. The call to GL_Finish() uses eglCreateSyncKHR to avoid driver "optimizations" of glFinish(). If that extension hasn't been looked up yet, it will fail.

I'll avoid this dependency in the next update.

Look at UnityPlugin.cpp for an example of a valid setup order.

seeme
Honored Guest
Nevermind.. I'm just stupid..

Thanks for the hint.

I must admit that I'm still stuck on this...

Here is my use case:

- Multiplatform/multirenderer engine (win32/ios/android)
- The application can run without a renderer (separate module loaded on the fly)
- I have an Activity that forwards the Android calls to methods registered with RegisterNativeMethods at the root of the engine
- The engine is built into a .so lib that encapsulates the engine and all the modules used for the project

I tried working with the TimeWarp itself (see OP), using the App, and deriving from the Unity plugin.

What I understood of the architecture: I can use a UnityPlugin-like class that is driven by the Command function called from Java.

My issue with that is that at launch I still don't know whether there will be a render engine. That means that when OnCreate is called, the Command is not reachable.

I can't get my head around this... I will provide further information as needed.

Can you help me find a way to use your work in the engine? All the other use cases I have seen so far build around the template, not the other way around.

The ideal case would be to build moonlight as a lib, provide it the eye textures/FOV and other settings, and let it handle the warping (I really don't need moonlight to allocate its own sound system, network features and such).

Thanks.

cybereality
Grand Champion
I'm not sure I understand the issue. How can your app not have a renderer? Isn't that required?

seeme
Honored Guest
On other platforms, we can run without a renderer or a window; this is not an issue.
On Android, the init command is issued in the OnCreate function, but at that point the renderer is not initialized yet.

To be honest, I don't think this is the main issue here (I can call this command programmatically later). The main issue is that I don't see how to take just the optimizations you made and use them in our engine. The different use cases I saw take moonlight as a whole (pad, rendering, sound, UI...).

JohnCarmack
Explorer
The next SDK will have the platform factored somewhat better, so you can just make an EnterVrMode() call, then submit rendered eye buffers with an end frame call to hand them off to TimeWarp.

punto
Honored Guest
I'm in a similar situation; I'd love to be able to integrate with our engine without having it take over our Activity, event loop, etc.

seeme
Honored Guest
After further investigation, I think the safest bet is to adapt/hijack the UnityPlugin.

I build OVR with the provided script (as a bunch of static libs) and call everything from my Oculus module on startup. I'm still investigating this, and it seems to work (though I hope the amount of modification I need to perform on the Java side stays minimal).

seeme
Honored Guest
Okay, so here's what I got working so far.

- Built the OVRLib/plugin as a separate static lib (could build a dynamic lib, which is easier for me to integrate into our module system)
- Created a simple interface based on UnityPlugin
- Added a few things to my Application.java to give it what it wanted (the SetSchedFifo function is just empty for now)
- OVR_SetDebugMode, OVR_SetInitVariables, OVR_GetSensorState, and OVR_InitRenderThread() seem to work fine
- Added a simple function to pass the texture id to OVR

Then it gets tricky..

I call OVR_TimeWarpEvent( 1 ) after having set the texture ID. I then get these messages:

01-15 02:29:41.748: W/Adreno-ES20(25068): <core_glStartTilingQCOM:136>: GL_INVALID_OPERATION
01-15 02:29:41.748: W/GlUtils(25068): SetGlStateForWarp GL Error: GL_INVALID_OPERATION
01-15 02:29:41.748: W/TimeWarp(25068): No valid eyeTexture

Any hint on this?

JohnCarmack
Explorer
I just completed a set of changes that will make all this much easier for you in the next SDK.


// This must be called by a function called directly from a java thread,
// preferably at JNI_OnLoad(). It will fail if called from a pthread created
// in native code.
// http://developer.android.com/training/articles/perf-jni.html#faq_FindClass
//
// This calls ovr_Initialize() internally.
void ovr_OnLoad( JavaVM * JavaVm_ );

// Starts up TimeWarp, vsync tracking, sensor reading, clock locking, thread scheduling,
// and sets video options. The calling thread will be given SCHED_FIFO.
// Should be called when the app is both resumed and has a valid window surface.
// The application must have its preferred OpenGL ES context current so the correct
// version and config can be matched for the background TimeWarp thread.
// On return, the context will be current on an invisible pbuffer, because TimeWarp
// will own the window.
hmdInfo_t ovr_EnterVrMode( ovrModeParms parms );

// Returns to normal operation, but the window cannot return to double buffering yet.
// Should be called on pause.
void ovr_LeaveVrMode();

// Accepts a new pos + texture set that will be used for future warps.
// The parms are copied, and are not referenced after the function returns.
//
// The application GL context that rendered the eye images must be current,
// but drawing does not need to be completed. A sync object will be added
// to the current context so the background thread can know when it is ready to use.
//
// This will block until the textures from the previous
// WarpSwap have completed rendering, to allow one frame of overlap for maximum
// GPU utilization, but prevent multiple frames from piling up variable latency.
//
// This will block until at least one vsync has passed since the last
// call to WarpSwap to prevent applications with simple scenes from
// generating completely wasted frames.
//
// Calling from a thread other than the one that called ovr_EnterVrMode will be
// a fatal error.
//
void ovr_WarpSwap( const TimeWarpParms * parms );