We currently develop and distribute an application that enables users to generate 3D images and videos from data captured with a microscope. Our existing application relies on the now-defunct and barely supported Nvidia 3D Vision technology via its SDK, 3D Vision glasses, and IR emitter. We generate what is ostensibly a standard JPS file (a side-by-side stereo JPEG) for still-image viewing, and we also have the ability to generate videos, which we currently save as AVI files. For obvious reasons, we need to evolve away from that platform and integrate with something that is actively supported and will remain so. We are considering the Oculus Rift.
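For context on what viewing such a file in VR involves, the core step is just unpacking the two half-width views from one side-by-side frame and presenting each to the corresponding eye. The sketch below shows only that unpacking step on a row-major pixel buffer (actual JPEG decoding and per-eye texture submission would be handled by an image library and the headset SDK); the cross-view default reflects the common JPS convention, but some writers use parallel order, so it should be verified against real files.

```python
def split_stereo_frame(frame, cross_view=True):
    """Split a row-major side-by-side stereo frame into (left_eye, right_eye).

    Each row of `frame` holds both views packed side by side, as in a JPS
    file. JPS conventionally uses cross-view order (right-eye view stored in
    the left half of the frame); pass cross_view=False for parallel-order
    sources. Pixel decoding itself is out of scope for this sketch.
    """
    width = len(frame[0])
    half = width // 2
    first_half = [row[:half] for row in frame]   # left half of the frame
    second_half = [row[half:] for row in frame]  # right half of the frame
    if cross_view:
        return second_half, first_half  # right eye was stored on the left
    return first_half, second_half
```

Once split, each half becomes a texture rendered on a quad in front of the matching eye's camera, which is how most VR photo viewers present stereo stills.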
What would be the recommended development path (SDK, examples, etc.) to follow in order to integrate 3D still-image viewing into our existing application using the Oculus Rift? Would Nvidia VRWorks Graphics and Single Pass Stereo also be advantageous?
Eventually, if the still-image integration proves straightforward enough, we would like to solve the video generation problem as well, and we would welcome any other advice that applies. Finally, with powerful enough graphics processing, our system can actually produce real-time volumetric data that could be viewed interactively in 3D. We can even envision controlling the microscope and the data capture from within a VR environment. I'm assuming that would require the likes of Unity3D or something similar. Is that a fair assumption?