Thoughts on apps development with Mobile SDK

vislab
Honored Guest
For the last week or so I have been trying to understand the idea behind the sample applications for the Mobile SDK. Unfortunately, I have to admit that they are really messy and inconsistent in many places. I am talking about the native (Android) samples, because those are the ones I am interested in.

I wanted to create a simple application (to start with) where the user can select a 3D cube and move it around. The VrCubeWorld projects seemed like a good starting point. Because I was thinking about quickly developing some Java methods later, I picked the project with SurfaceView. Sadly, I was only able to implement one thing: moving one (predefined) cube with the touchpad. I didn't implement a gaze cursor or anything else. There was one thing that amused me at first: one huge file written in C. Why?!

Because of the lack of explained examples and documentation, I decided to check the CinemaSDK project, since it has selection and a gaze cursor implemented. But unexpectedly, this project is totally different, not only in terms of language but also in the application flow. In VrCubeWorld there is multithreading with messaging: quite a nice idea, and easy to understand. On Android, OpenGL has its own thread provided by the OS, but I assumed that in native code one isn't provided, so the developer has to manage threading himself. In CinemaSDK it looks like apps must in fact have two methods (as is also stated in the VrAppFramework App.h file):
  • Frame
  • DrawEyeView
That looks reasonable, until I wanted to recreate the same effect I saw in VrCubeWorld - a spinning cube. Looking at the CinemaSDK and Oculus360PhotosSDK code, I found this poorly explained piece of code:
for ( ;; ) {
    const char * msg = MessageQueue.GetNextMessage();
    if ( msg == NULL ) {
        break;
    }
    Command( msg );
    free( (void *)msg );
}


I mean, where and when are you supposed to update the MessageQueue? Or does it happen automatically? Those were my first questions. What a surprise it was to find out that commenting out Command(msg); in CinemaSDK doesn't change a thing. So what is the point of it? It turned out that what does change things is commenting out the lines right after this for loop, more precisely:

CenterViewMatrix = ViewMgr.Frame( vrFrame );

// update gui systems after the app frame, but before rendering anything
GuiSys->Frame( vrFrame, CenterViewMatrix );

After a lot of searching on Google, I found a video of a presentation called "Getting Started with the Oculus Mobile SDK".


However, don't be fooled - you won't learn from it how to start. You will hear about the home menu, how to optimize your app, and what the NDK is. What makes it even funnier is that this presentation was given by both authors of the samples in the Mobile SDK.

And so the circle closes: from knowing nothing, to realizing that even the authors have no idea how to explain how to start - in the spirit of the famous quotation: "If you can't explain it simply, you don't understand it well enough." - Albert Einstein

I've read a lot of posts on this forum, and I have one (last) question regarding developing apps for the Samsung Gear VR with Java and C++: did anyone actually manage to build an OpenGL app? Because I am starting to think that this is only a myth, and in fact everyone is building games using Unity.
2 REPLIES

    mduffor
    Protege
I started messing around with VrCubeWorld_NativeActivity a few days ago. It took a while to wrap my head around the build process, especially since it is designed to compile all of the sample apps, not just the one whose directory you run the build.py script from. It also took me a while to figure out how to copy the sample app directory to a different hierarchy and get it to build again, since that required changing relative paths in at least three different files, and adjusting the build script to account for no longer being in the same hierarchy as the OVR SDK.

    I'm guessing they put everything in one .c file to make it simpler to scan over the code. For production code, the various elements would be split out into separate files for better maintainability.

    Over the next few days I'll be starting to merge my own OpenGL ES code with reworked NativeActivity sample code to see if I can get my code working in GearVR. You aren't the only one working outside of Unity, but I'm betting we are in the minority. 😄

    Cheers,
    mduffor

    mduffor
    Protege
I've had a chance to work with VrCubeWorld_NativeActivity some more. If you read the Mobile SDK docs on the Oculus site (https://developer.oculus.com/documentat ... dk/latest/) and use Google to look up the NDK-specific bits, it is possible to figure out what the sample code is doing.

This particular sample is in a single C file for easier searching and compiling. Although the sample is in C (C99), the approach to organizing the code is very OOP-inspired. For my own use, I've gone through and broken the sample up into individual C++ classes, one file per class, and it mapped fairly directly. There were some minor differences in how android_app dealt with the Java object when compiled as C++, but the edits were straightforward. I'm hoping to break some of these classes out, refactor them, and plug them into some of my own code if I find some time next week.

    By default the multithreaded flag is set to 0, so ovrRenderThread isn't even called. In this case ovrRenderer is called directly, and the logic is simplified a bit.
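To illustrate the difference, here is a stripped-down, framework-free sketch of the two paths in plain C++. The names here (MULTI_THREADED, RenderFrame, and the worker-thread plumbing) are mine for illustration, not the sample's actual identifiers:

#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <thread>

static const bool MULTI_THREADED = false;   // the sample defaults to 0

static void RenderFrame( long frameIndex )
{
    std::printf( "rendering frame %ld\n", frameIndex );
}

int main()
{
    std::mutex mutex;
    std::condition_variable cv;
    long pendingFrame = -1;     // latest frame wins if the renderer falls behind
    bool quit = false;

    // Worker used only on the multi-threaded path, standing in for ovrRenderThread.
    std::thread renderThread( [&]() {
        for ( ;; ) {
            std::unique_lock<std::mutex> lock( mutex );
            cv.wait( lock, [&] { return pendingFrame >= 0 || quit; } );
            if ( quit ) {
                break;
            }
            const long frame = pendingFrame;
            pendingFrame = -1;
            lock.unlock();
            RenderFrame( frame );
        }
    } );

    for ( long frameIndex = 0; frameIndex < 3; frameIndex++ ) {
        if ( MULTI_THREADED ) {
            // Hand the frame off to the render thread.
            {
                std::lock_guard<std::mutex> lock( mutex );
                pendingFrame = frameIndex;
            }
            cv.notify_one();
        } else {
            // Default path: render directly on this thread, no render thread involved.
            RenderFrame( frameIndex );
        }
    }

    {
        std::lock_guard<std::mutex> lock( mutex );
        quit = true;
    }
    cv.notify_one();
    renderThread.join();
    return 0;
}

With the flag at 0 you can ignore all of the thread plumbing and read the sample as a straight loop.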

The project is meant to be as bare-bones as possible, so only what is needed to implement spinning cubes with shaders on them is present. For a gaze cursor you start getting into casting a ray into the scene, performing hit detection, and possibly displaying a cursor on a texture-mapped poly, which requires texture loading, etc. If this functionality were in the demo, it would be harder to tell what is needed to get a bare-bones program up and running apart from the demo's extra features, so I can understand why there's not much additional functionality there. Even so, you could hook into ovrApp_HandleKeyEvent or ovrApp_HandleTouchEvent just like in other Android apps to handle touch or joystick input, and react to the keys in or around the ovrSimulation_Advance call to update the game world state.
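If you do want to experiment with a gaze cursor, the core of it is just generic geometry and doesn't need anything from the SDK. Here is a minimal sketch, assuming one bounding sphere per object and a unit-length forward vector taken from the head pose (all names are mine, not SDK API):

// Minimal gaze-ray hit test against a bounding sphere. Generic geometry
// code, not the Oculus SDK API; all names are illustrative.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub( Vec3 a, Vec3 b ) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot( Vec3 a, Vec3 b ) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and the distance t along the ray if the ray (origin, unit-length
// dir) hits the sphere at center with the given radius.
static bool RaySphere( Vec3 origin, Vec3 dir, Vec3 center, float radius, float & t )
{
    const Vec3  oc = sub( origin, center );
    const float b  = dot( oc, dir );
    const float c  = dot( oc, oc ) - radius * radius;
    const float h  = b * b - c;            // discriminant (dir is unit length)
    if ( h < 0.0f ) {
        return false;                      // ray misses the sphere
    }
    t = -b - std::sqrt( h );               // nearest intersection
    return t >= 0.0f;
}

int main()
{
    // Gaze ray from the head pose, looking down -Z (forward in OpenGL terms).
    const Vec3 eye     = { 0.0f, 0.0f,  0.0f };
    const Vec3 forward = { 0.0f, 0.0f, -1.0f };

    // A cube 3 m in front of the viewer, approximated by a bounding sphere.
    const Vec3 cubeCenter = { 0.0f, 0.0f, -3.0f };
    float t = 0.0f;
    if ( RaySphere( eye, forward, cubeCenter, 0.5f, t ) ) {
        std::printf( "gaze hit at distance %.2f m\n", t );  // draw the cursor here
    }
    return 0;
}

Take the nearest hit across all objects and draw the cursor quad at that distance along the ray.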

    The code from Oculus360PhotosSDK you posted is using the MessageQueue class from the VrAppFramework in the SDK. This looks like it is just a central location for the different modules/threads to post strings, and then the strings are passed to the Command () method in the main thread to process them. In this case, you have your BackgroundGLLoadThread loading objects on one thread, and passing the "loaded pano" or "loaded cube" message to the queue. Then in your main thread when Frame () is called, based on the message queue it can swap the background textures since they are now loaded and update the GUI.
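Stripped of the SDK types, the pattern looks roughly like this. Note that the MessageQueue below is my own stand-in, not the actual VrAppFramework class: a loader thread posts strings into a locked queue, and the main thread drains it each Frame (), just like the for (;;) loop you quoted:

// Stand-in for the pattern described above: a loader thread posts string
// messages, and the frame loop drains and dispatches them.
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class MessageQueue {
public:
    void PostMessage( const std::string & msg )
    {
        std::lock_guard<std::mutex> lock( Mutex );
        Messages.push( msg );
    }
    // Returns an empty string when nothing is pending (the real class returns
    // NULL, which is what the "if ( msg == NULL ) break;" check is for).
    std::string GetNextMessage()
    {
        std::lock_guard<std::mutex> lock( Mutex );
        if ( Messages.empty() ) {
            return "";
        }
        std::string msg = Messages.front();
        Messages.pop();
        return msg;
    }
private:
    std::mutex Mutex;
    std::queue<std::string> Messages;
};

static void Command( const std::string & msg )
{
    std::printf( "Command: %s\n", msg.c_str() );  // e.g. swap in the loaded texture
}

int main()
{
    MessageQueue queue;

    // Background loader, analogous to BackgroundGLLoadThread.
    std::thread loader( [&]() { queue.PostMessage( "loaded pano" ); } );
    loader.join();  // joined here only to keep the sketch deterministic

    // The drain loop that runs once per Frame () on the main thread.
    for ( ;; ) {
        const std::string msg = queue.GetNextMessage();
        if ( msg.empty() ) {
            break;
        }
        Command( msg );
    }
    return 0;
}

So nothing "updates" the queue behind your back: whoever has something to report posts to it, and Frame () empties it once per frame.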

    The Oculus360Photos class inherits from the VrAppFramework class VrAppInterface, and I believe it just has to implement the various virtual functions to perform the app specific functionality (Configure, OneTimeInit, EnteredVrMode, OneTimeShutdown, OnKeyEvent, Frame, DrawEyeView, etc.) The framework is what calls these methods at the appropriate time in the app and update loop lifecycle.
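As a toy model of that inversion of control (the signatures below are simplified and do not match the real VrAppInterface, which varies between SDK versions), the framework owns the loop and calls your overrides at the right points:

// Toy model of the VrAppFramework pattern: the framework owns the loop and
// calls virtual hooks on your app class. Simplified, not the real API.
#include <cstdio>

class VrAppInterface {
public:
    virtual ~VrAppInterface() {}
    virtual void OneTimeInit()            = 0;
    virtual void Frame( long frameIndex ) = 0;  // update world + GUI
    virtual void DrawEyeView( int eye )   = 0;  // called once per eye
    virtual void OneTimeShutdown()        = 0;
};

// The framework side: your code never calls these hooks directly.
static void RunApp( VrAppInterface & app, long numFrames )
{
    app.OneTimeInit();
    for ( long frame = 0; frame < numFrames; frame++ ) {
        app.Frame( frame );      // per-frame simulation / message handling
        app.DrawEyeView( 0 );    // left eye
        app.DrawEyeView( 1 );    // right eye
    }
    app.OneTimeShutdown();
}

// The app side: implement only the virtual hooks.
class SpinningCubeApp : public VrAppInterface {
public:
    void OneTimeInit() override { std::printf( "init\n" ); }
    void Frame( long frameIndex ) override
    {
        Yaw = 0.01f * static_cast<float>( frameIndex );  // spin the cube
    }
    void DrawEyeView( int eye ) override
    {
        std::printf( "eye %d, yaw %.2f\n", eye, Yaw );
    }
    void OneTimeShutdown() override { std::printf( "shutdown\n" ); }
private:
    float Yaw = 0.0f;
};

int main()
{
    SpinningCubeApp app;
    RunApp( app, 3 );
    return 0;
}

Your Oculus360Photos or CinemaSDK class sits in the SpinningCubeApp position, and the VrAppFramework sits in the RunApp position.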

    Hope this helps,
    mduffor