msat · Honored Guest · 13 years ago

Partially/fully pre-rendered ray tracing...

I'm sure you're all aware of the image quality that can be had from ray tracing. Real-time ray tracing with impressive results is becoming more of a reality, but it's still some way off, and even then it won't be on par with the quality we see in big-budget movies today. But what if we could view pre-rendered scenes in 3D and with limited 6DOF? Well, that's the whole purpose of this post! :D

I have been interested in the concept of pre-rendered still and animated stereoscopic "panoramas" for the past several months, and have been thinking about how they could be accomplished. So I figured I'd share what's been on my mind. To my understanding, there are no implementations of panorama viewers that allow for both 6DOF and stereoscopic viewing. As far as I know, the method I'm going to describe has not been done before, and while I unfortunately don't have the skills to implement an example myself, I hope someone might find these thoughts interesting and useful enough to give it a shot. :)

Off the top of my head, the applications range from something as simple as sitting on an extremely detailed beach, to interactive media with limited or no real-time dynamic visuals (certain types of adventure games, for instance), to being a "fly on the wall" in a Pixar movie. The primary benefit is the quality of visuals that can be achieved without having to render in real-time. Well, at least without having to render the entire scene in real-time.

The most appropriate and descriptive name for the method that I can think of is 'light-field cube ray mapping'. Maybe that sounds like nonsense, but bear with me for a moment. Let's say you wish to view a scene from the vantage point of a person sitting on a stool in the middle of a CG room. Now imagine enclosing that person's head in a virtual glass box that's big enough to allow for comfortable but limited head movement in all directions (rotation and position). This virtual box forms the basis of both the light-field cube camera during pre-rendering of the scene, and the region where we'll perform ray lookups at run-time.
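
To make the geometry concrete, here's a minimal sketch (all names and the box size are my own assumptions, not a real API) of the one bit of math the cube needs either way: taking a ray from somewhere inside the box and finding which face it crosses, and where on that face.

```cpp
// Hypothetical sketch: given a ray leaving the viewer's head, find which
// face of the "glass box" it exits through and where on that face.
struct Vec3 { float x, y, z; };

const float HALF = 1.0f; // half the box edge length, box centered at origin (assumed 2 m box)

// Returns a face index (0..5 for +X,-X,+Y,-Y,+Z,-Z) and writes the hit
// position on that face into (u, v), each in [0, 1). Assumes the ray
// origin is inside the box.
int exitFace(Vec3 org, Vec3 dir, float& u, float& v) {
    float o[3] = { org.x, org.y, org.z };
    float d[3] = { dir.x, dir.y, dir.z };
    float t[3];
    for (int a = 0; a < 3; ++a) // distance to the slab the ray heads toward
        t[a] = d[a] != 0.0f ? ((d[a] > 0 ? HALF : -HALF) - o[a]) / d[a] : 1e30f;
    int axis = (t[0] < t[1] && t[0] < t[2]) ? 0 : (t[1] < t[2] ? 1 : 2);
    float hit[3] = { o[0] + t[axis] * d[0],
                     o[1] + t[axis] * d[1],
                     o[2] + t[axis] * d[2] };
    int ua = (axis + 1) % 3, va = (axis + 2) % 3; // the face's two in-plane axes
    u = (hit[ua] + HALF) / (2.0f * HALF);
    v = (hit[va] + HALF) / (2.0f * HALF);
    return axis * 2 + (d[axis] > 0 ? 0 : 1);
}
```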


There's no single way the pre-rendering and capture to the light-field cube has to be done, but I'll describe the approach I had in mind. Each face of the cube contains a finite array of elements somewhat similar to pixels, but instead of recording just a single color value, each element captures the angular data of the light rays entering it along with their color. Each element will likely need to be able to "capture" more than one ray (though in practice it may sometimes capture none). What you end up with is six faces (you don't necessarily have to do all six) with all the light rays that entered the cube during pre-rendering mapped across their arrays of surface elements (ray maps). One consequence of this approach is worth pointing out: the ray tracing engine for the pre-render phase would need to trace rays forward from the light sources, rather than the common method of starting at the camera and following rays back to the lights. As you can probably imagine, capturing video would essentially produce constantly changing ray maps.
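
As a rough illustration of what one face's ray map might look like in memory, here's a hypothetical sketch; RaySample, RayMapFace, and deposit are names I made up, and a real capture pass would choose the resolution and bucket layout far more carefully.

```cpp
#include <vector>
#include <cstdint>

// Hypothetical capture-side storage for one cube face: an N x N grid of
// elements, each holding every light ray that crossed it during the
// pre-render (direction plus color).
struct RaySample {
    float dx, dy, dz;  // unit direction the ray was traveling
    uint8_t r, g, b;   // its color
};

struct RayMapFace {
    int res;                                    // grid resolution (res x res)
    std::vector<std::vector<RaySample>> cells;  // one bucket per element

    explicit RayMapFace(int n) : res(n), cells(n * n) {}

    // Called by the light-tracing pre-render each time a traced ray crosses
    // this face at (u, v) in [0, 1): bin it into the element it passed through.
    void deposit(float u, float v, RaySample s) {
        int ix = (int)(u * res), iy = (int)(v * res);
        if (ix < 0 || ix >= res || iy < 0 || iy >= res) return; // off-face
        cells[iy * res + ix].push_back(s);
    }
};
```

For video you'd end up with one such set of buckets per face per frame, so memory is the obvious open question.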


To view the scene at run-time, we start with a typical ray tracer, where a ray extends outward from the camera viewport, but it only ever intersects a single object - an element on the inside surface of the cube - and then performs a lookup for a stored ray of the matching angle. The performance of this method will depend heavily on the efficiency of the lookup algorithm, but an optimized system should be substantially faster than a typical ray tracer at a given detail level. Of course, the drawback is that dynamically drawn elements are pretty much impossible unless you also incorporate aspects of a traditional 3D engine to render those elements in real time.
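
And a sketch of the run-time side, under the same made-up layout: once the eye ray has been mapped to an element (as in the geometry sketch earlier), the lookup just has to find the stored ray that best matches its direction. One subtlety I've folded into a comment as an assumption: captured rays travel into the cube while eye rays travel out of it, so one of the two directions has to be flipped before they can be compared.

```cpp
#include <vector>
#include <cstdint>

// Hypothetical lookup in one element's bucket of captured rays. Assumes
// the capture pass flipped each stored direction to point outward, so a
// captured ray and the eye ray that would see it share the same direction.
struct RaySample {
    float dx, dy, dz;  // unit direction, stored pointing outward (assumed)
    uint8_t r, g, b;
};

struct Color { uint8_t r, g, b; };

// Pick the captured ray most aligned with the eye ray (largest dot
// product). A real viewer would blend several near matches and index each
// bucket by direction so this isn't a linear scan.
Color lookup(const std::vector<RaySample>& cell,
             float ex, float ey, float ez) {  // unit eye-ray direction
    Color out = { 0, 0, 0 };                  // fallback if nothing was captured
    float best = -2.0f;                       // below any possible dot product
    for (const RaySample& s : cell) {
        float d = s.dx * ex + s.dy * ey + s.dz * ez;
        if (d > best) { best = d; out = { s.r, s.g, s.b }; }
    }
    return out;
}
```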

You can take this concept a step further and fill an entire scene (or at least the areas the viewpoint can reach) with light-field cube cameras, and traverse from one cube to the next in real time. This would also make rendering dynamic elements more feasible.
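
A tiny sketch of how the multi-cube version might decide which ray-map set is active; the regular grid and the edge length are assumptions, and streaming neighboring cubes' data in ahead of time would be the real work.

```cpp
#include <cmath>

// Hypothetical: the scene's reachable volume is tiled with capture cubes on
// a regular grid; as the head crosses a cell boundary the viewer swaps to
// that cube's ray maps. CUBE_EDGE must match the capture-time cube size.
struct CubeId { int i, j, k; };

const float CUBE_EDGE = 2.0f; // meters (assumed)

CubeId activeCube(float x, float y, float z) {
    return { (int)std::floor(x / CUBE_EDGE),
             (int)std::floor(y / CUBE_EDGE),
             (int)std::floor(z / CUBE_EDGE) };
}
```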

For content that isn't hampered by these limitations, the visual quality achievable at a given performance level might be hard to match any other way.
