Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
geekmaster
Protege
13 years ago

Creating fisheye views with the Unity3D engine

I have been posting for some time, here and at MTBS3D, that we really need to stop doing the Rift pre-warp so late in the rendering pipeline; done there, it adds distortion artifacts of its own that people have been complaining about.

It is my firm belief that the correct way to provide a fisheye view that compensates for the Rift lens's pincushion distortion is to put a virtual fisheye lens on the virtual camera(s), as documented by Paul Bourke back in 2004 (with source code).

EDIT: The 2011 example from Paul Bourke (quoted below) may not be a good example of the fisheye lens modelling that I have suggested previously. His 2004 article remains appropriate for our needs, however.

The tragic news about the senseless death of Oculus co-founder Andrew Reisse led me to this post that he made on this forum:

Creating fisheye views with the Unity3D engine
...
Here I discuss a means of producing a fisheye image using the Unity3D gaming engine. The approach has been introduced here for the spherical mirror. In that case a 180 degree fisheye is generated and subsequently warped. A 180 degree field of view can be achieved with a 4 pass approach, that is, 4 renders with camera frustums passing through the vertices of a cube with the view direction towards the midpoint of the edge between the left and right faces of the cube. In the following a wider field of view is created, namely up to a maximum of 250 degrees. It is based upon the same multipass render approach except now 5 cube faces are used, left-right-top-bottom-front, and the view direction is towards the centre of the front face. ...
A small sample Unity3D project (Pro required) that illustrates a 210 and 240 degree fisheye is provided:
Unity_5cube2fish.zip.


Notice that each render pass yields a 90-degree FoV; the passes are then combined into a 180-to-250-degree FoV. For the Rift, we do not need that much. We never need more than a 110-degree FoV, and the way most people seem to configure their Rifts, less than 90 degrees should be plenty. For a 90-degree FoV we only need a single-pass render, so we can get by with less processing power than a multi-pass rendering method would require.
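To make the single-pass claim concrete, here is a small sketch (Python, my own illustration, not from Bourke's code) of the resampling math behind an equidistant ("angle proportional to radius") fisheye: each fisheye pixel maps to a UV coordinate in a single forward-facing 90-degree perspective render. For a fisheye FoV of 90 degrees or less, every UV lands inside [0, 1], so one render pass really is enough.

```python
import math

def fisheye_to_perspective_uv(px, py, size, fisheye_fov_deg, persp_fov_deg=90.0):
    """Map a pixel of a square equidistant-fisheye image to the UV it
    would sample from one forward-facing perspective render.
    Returns None for pixels outside the fisheye circle."""
    # Normalized coordinates in (-1, 1); r = 1 at the fisheye image edge.
    nx = 2.0 * (px + 0.5) / size - 1.0
    ny = 2.0 * (py + 0.5) / size - 1.0
    r = math.hypot(nx, ny)
    if r > 1.0:
        return None
    theta = r * math.radians(fisheye_fov_deg) / 2.0  # equidistant: angle ~ radius
    phi = math.atan2(ny, nx)
    # View-space direction (camera looking down +z here, by convention).
    sx = math.sin(theta) * math.cos(phi)
    sy = math.sin(theta) * math.sin(phi)
    sz = math.cos(theta)
    # Intersect with the perspective image plane at z = 1.
    half = math.tan(math.radians(persp_fov_deg) / 2.0)
    return (sx / sz / half + 1.0) / 2.0, (sy / sz / half + 1.0) / 2.0
```

For a wider fisheye (say 110 degrees) the same function yields UVs outside [0, 1] near the rim, which is exactly why wider views need the extra cube faces.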

That page linked above also says "A discussion of the general technique for a 180 degree fisheye and 4 cube faces can also be found here". Sounds like "The Right Stuff" to me! :D

9 Replies

  • The real Right way to do it would be with raytracing, since then you can actually render a fisheye view without needing to apply distortions to render output (the current method people are using: render the scene into a texture then apply a distortion shader. Your proposed method: render the scene into four textures forming the sides of a cube map then apply a distortion shader).

    I did a similar thing back in 2009 in Ogre when I got a triple monitor eyefinity system running. I used multiple cameras rendering into a cubemap then using a shader to unwrap it into a 180 degree 5200 x 1050 cylindrical panorama. I used cylindrical instead of spherical because the vertical fov was so small on a 48:10 screen, but going to spherical would be easy.

    You could probably get rid of some of the performance hit of scene management in a multiple camera system by using geometry shaders to clone geometry onto multiple render targets rather than re-rendering the same geometry for each camera. I haven't played around with that yet.
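    For readers who want to see the shape of that cubemap-to-panorama unwrap, here is a CPU-side sketch (Python; this is my illustration of the mapping, not the original Ogre shader described above, and the face names and UV conventions are assumptions): each panorama pixel becomes a view direction, and the dominant axis of that direction picks the cube face to sample.

```python
import math

def pano_dir(u, v, h_fov_deg=180.0, v_fov_deg=38.0):
    """View direction for a cylindrical-panorama pixel (u, v in [0, 1])."""
    yaw = math.radians((u - 0.5) * h_fov_deg)
    # Cylindrical mapping: the vertical axis is linear in tan(pitch).
    h = (v - 0.5) * 2.0 * math.tan(math.radians(v_fov_deg) / 2.0)
    x, y, z = math.sin(yaw), h, math.cos(yaw)
    n = math.sqrt(x * x + y * y + z * z)
    return x / n, y / n, z / n

def cubemap_lookup(d):
    """Pick a cube face and face-local UV for direction d (assumed conventions)."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    m = max(ax, ay, az)
    if m == az:
        face = "front" if z > 0 else "back"
        fu, fv = (x / az if z > 0 else -x / az), y / az
    elif m == ax:
        face = "right" if x > 0 else "left"
        fu, fv = (-z / ax if x > 0 else z / ax), y / ax
    else:
        face = "top" if y > 0 else "bottom"
        fu, fv = x / ay, (-z / ay if y > 0 else z / ay)
    return face, (fu + 1.0) / 2.0, (fv + 1.0) / 2.0
```

    With a 180-degree horizontal sweep and a narrow vertical FoV, only the front, left, and right faces are ever sampled, which matches the 3-camera setup mentioned later in the thread.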
  • "kojack" wrote:
    The real Right way to do it would be with raytracing, since then you can actually render a fisheye view without needing to apply distortions to render output (the current method people are using: render the scene into a texture then apply a distortion shader. Your proposed method: render the scene into four textures forming the sides of a cube map then apply a distortion shader).

    ...

    Actually, my preferred method IS raytracing (including one of those local projects mentioned above), just as Paul Bourke's older 2004 article recommends (it even includes source code for a POV-Ray fisheye camera lens). You should check out the real-time Rift ray-tracer thread here; it uses an in-game fisheye lens.

    The only thing special about the method in THIS thread is that it supports Unity3D without any hacks, mods, or DLL-injection techniques. Unless I missed something (I only discovered Paul's newer post just before starting this thread), his proposed method should eliminate the pre-warp artifacts we have now. Or maybe we need to do it all from scratch, designing the games themselves to support in-game fisheye lenses (my REAL recommendation). Using the existing Unity3D engine is just a way to get started down the road to pre-warp quality improvements.

    Paul's newer method WILL be an improvement, right?

    So yes, I agree with you (and I always did) that the REAL "right way" is ray tracing (or better, path tracing), but those do have some extra hardware requirements (which will be much more common before long). Too bad that multi-core processing tends to INCREASE latency, yet that is exactly the direction our newer "faster" computers are heading. Perhaps we need to migrate more of our game engines into the GPU shaders (like what can be seen at shadertoy.com).

    So, perhaps this PARTICULAR example of Paul's is not quite what we need, but I believe his older 2004 post really does show us the way. And even modern game engines SHOULD be able to render using a fisheye lens on their virtual cameras (the REAL intended point of this thread).
  • Pokey
    Honored Guest
    Perhaps slightly off topic, but what is the difference between ray tracing and path tracing?

    I've written a ray tracer, but I've never heard the term path tracing before.

    A quick google search made it appear that people use the terms interchangeably, though one website seemed to imply ray tracing was just path tracing with 0 bounces, meaning I guess I wrote a path tracer?
  • The artifacts in the cube map version should be pretty much the same kind of thing as in the current way we do it. It's still rendering to a rectangular region then being distorted, causing some texels to become larger than the final pixels. This will be in a different location (current method has enlarged texels in the centre, cube map method has enlarged texels near the left, right, top and bottom edges), meaning you need to render at a higher resolution. That makes the enlarged texels become roughly 1:1 with the final pixels, but the other texels are now rendered at a higher res than needed.

    Take a look at this pic from his page (which I modified slightly; the red boxes are mine):

    I scaled it so this forum doesn't cut off the side, but in the original the fisheye output was 256x256. That's 65536 pixels.
    Each cube map face has 8x8 tiles. The red tiles are the largest ones (well, the top and bottom edges have some larger ones, but they are cut off; this is rough anyway). Roughly speaking, the big tiles on the left and right faces are 21x23 pixels, so you need to render those faces at 168x184 to get 1:1 scale between texels and pixels. The top and bottom marked tiles are roughly 26x26, so you need 208x208 textures for them. That gives a total of 148352 pixels to render for the 4 faces (ignoring power-of-two resolutions).

    But the current technique of rendering the screen 25% larger per axis before distortion (from the sdk docs) means there's 1.56 times as many pixels. That would make it only 102400 for a 256x256 image (it would become 320x320).

    So the cubemap technique means you need to render even more texels to avoid the scaling artifacts, as well as rendering the scene 8 times instead of 2 (you need 4 cameras per eye).

    However, there are advantages to it: it handles a wide FoV better. For the Rift's 110-degree FoV it probably won't help much (it could be interesting to try, though), but if you want more than that you run into problems with normal single-camera projection.

    (Note: it's 6:42am and I haven't slept yet, so math may not be perfect. Should be close though)
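    For what it's worth, the arithmetic above does check out. A quick sanity check (Python), taking the measured tile sizes at face value:

```python
# Cube-map route: upscale each face so its largest tile becomes 1:1.
left_right = 168 * 184        # 21x23 tile, 8 tiles per axis -> 168x184 face
top_bottom = 208 * 208        # 26x26 tile -> 208x208 face
cube_total = 2 * left_right + 2 * top_bottom
assert cube_total == 148352   # matches the figure in the post

# Current SDK route: render 25% larger per axis, then distort.
assert abs(1.25 ** 2 - 1.5625) < 1e-12   # "1.56 times as many pixels"
assert int(256 * 1.25) ** 2 == 102400    # a 256x256 image becomes 320x320
assert cube_total > 102400    # the cube map needs ~45% more texels here
```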
  • "Pokey" wrote:
    Perhaps slightly off topic, but what is the difference between ray tracing and path tracing?

    ...

    Path tracing is a newer, more ADVANCED form of ray tracing. Google knows that I am interested in this stuff, so it gives me custom search results containing TONS of useful information about it. The "information bubble" effect is limiting you to only the things Google already knows you may be interested in (which apparently did not include path tracing, in the past).

    More about the "Filter Bubble":
    http://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.html


    More about Path Tracing:

    "
    "- the city scene has 750 instanced animated characters (30k triangles each, in total 22.5 million animated triangles), all of them physics driven with Bullet physics in a 600k triangle city.
    - the Piazza scene is fantastic to test color bleeding, there are 16384 instances of a 846k triangle city, 13.8 billion triangles in total, rendered in real-time.
    - interior scene from Octane Render, created by Enrico Cerica, 1 million triangles rendered in real-time.

    Real-time path traced virtual reality with Brigade:

    [embedded video]
    Notice that the video above is rendered in REAL-TIME, using computer equipment many of us already own.

    You really should train Google that you are interested in this sort of stuff by searching for it (often) and things related to it...

    This path-tracing stuff is the REAL future of VR. The current method of Rift pre-warp is just a temporary workaround to get us by for now, as long as we are willing to accept its limitations.

    So, the short answer to your question is: not quite. Path tracing builds on ray tracing but goes far beyond it, effectively replacing both classic ray tracing and the newer radiosity rendering methods as well.
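    For readers who want to see the difference in code rather than in search results, here is a deliberately tiny sketch (Python, my own toy, unrelated to Brigade or Octane). Classic ray tracing follows one fixed reflection ray per hit; a path tracer instead averages many randomly sampled incoming directions. The "scene" is a Lambertian surface under a sky that is bright only within 30 degrees of the zenith, chosen because the exact answer is known: albedo * sin^2(30 deg) = albedo / 4.

```python
import math
import random

CONE_COS = math.cos(math.radians(30.0))

def sky(z):
    """Toy environment: radiance 1.0 inside a 30-degree cone around 'up'."""
    return 1.0 if z > CONE_COS else 0.0

def shade_path_traced(albedo, samples, rng):
    """Monte Carlo estimate of the outgoing radiance of a Lambertian
    surface under the toy sky.  With cosine-weighted hemisphere sampling
    (z = sqrt(u), pdf = cos(theta)/pi over solid angle), the estimator
    reduces to albedo * mean(incoming radiance)."""
    total = 0.0
    for _ in range(samples):
        z = math.sqrt(rng.random())  # cosine-weighted elevation sample
        total += sky(z)              # this toy sky depends only on elevation
    return albedo * total / samples

# The estimate converges to the analytic value albedo / 4.
estimate = shade_path_traced(0.5, 200_000, random.Random(42))
```

    A real path tracer recurses: at each surface hit it samples one random continuation direction and keeps going, which is where the "bounces" in the descriptions Pokey found come in.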
  • "kojack" wrote:
    The artifacts in the cube map version should be pretty much the same kind of thing as in the current way we do it. ... However there are advantages to it, it can handle wide fov better. For the rift's 110 degree fov it probably won't help much (could be interesting to try though), but if you wanted more than that you run into problems with normal single camera projection. ...

    I posted this thread right after I found Paul's new article, without studying it too deeply. I was excited to see newer material from Paul Bourke, and perhaps I read into it what I WANTED it to say. But his older 2004 material is definitely appropriate here. And for putting fisheye lenses on the in-game cameras in Unity3D, there is also the Omnity plug-in:
    http://www.youtube.com/watch?feature=player_detailpage&v=jzCoRoz7Pps#t=169s

    My original intention for this thread was to use a fisheye lens model on the in-game camera, not to post-warp an image for fisheye projection. So perhaps Paul's latest example was not appropriate for what I have been suggesting (in many threads) that we use.
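    To make "a fisheye lens model on the in-game camera" concrete, here is the core of such a projection as a sketch (Python, my own illustration following the equidistant model in Bourke's 2004 article, not Omnity's code): instead of the usual perspective divide, the image radius of a point is made proportional to its angle off the view axis.

```python
import math

def fisheye_project(x, y, z, fov_deg=110.0):
    """Equidistant-fisheye projection of a view-space point.
    Convention assumed here: the camera looks down -z.
    Returns normalized device coordinates; radius 1 = edge of the FoV."""
    theta = math.atan2(math.hypot(x, y), -z)   # angle off the forward axis
    r = theta / (math.radians(fov_deg) / 2.0)  # equidistant: radius ~ angle
    phi = math.atan2(y, x)
    return r * math.cos(phi), r * math.sin(phi)
```

    Applied per vertex (e.g. in a vertex shader), this bends straight edges, so geometry has to be tessellated finely enough; that, rather than the math, is the hard part of doing fisheye rendering in a rasterizer.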
  • Of course now that I've woken up, I should point out that the pic I used above was for 180 degree horizontal/vertical fov. At the rift's actual fov there is less visible, so the numbers will change (I haven't woken up enough to say if they will change for the better or worse. I need coffee). :)

    For some reason when I did this in the past, I never thought of using the cameras at 45 degrees from the user's view. I did 3 cameras as left, front, right (because of narrow vertical fov) or 5 cameras (when not on eyefinity). But then half of each side is wasted if you are doing 180 degrees or less.

    With cylindrical panoramas it was also pretty easy to do without shaders, just using lots of narrow strip cameras. 15 or more makes it look pretty good. Not good for performance though. :)

    I wouldn't mind getting my hands on a Xeon Phi card. That's Intel's sequel to the dropped Larrabee project, without the video output part. It's 60 x86 CPU cores, each running at 1GHz with 4 hyperthreads, on a PCIe card with 8GB of RAM. There are no GPU coding restrictions (such as the lack of a CPU-style stack); the cores are standard x86.
    On the downside, due to the lower clock speed, it only works out to 3.2x the floating-point performance of a single Xeon CPU. That doesn't seem like enough of a gain for a card that launched at $2649. Could still be fun for raytracing, though.
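    The strip-camera trick mentioned above is simple enough to sketch (Python, my own illustration of the layout, not the original Ogre code): split the total horizontal FoV into N narrow planar cameras fanned around the viewer. Each flat strip only approximates the cylinder, which is why more (narrower) strips look better.

```python
def strip_cameras(total_fov_deg=180.0, strips=15):
    """Yaw (degrees, 0 = straight ahead) and horizontal FoV for each of
    N narrow cameras tiling a cylindrical panorama."""
    per = total_fov_deg / strips
    half = total_fov_deg / 2.0
    return [(-half + per * (i + 0.5), per) for i in range(strips)]
```

    With 15 strips each camera covers only 12 degrees, so the worst-case planar-versus-cylindrical error stays small, at the cost of 15 render passes.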
  • "geekmaster" wrote:
    I have been posting for some time here and at MTBS3D that we really need to stop doing the Rift pre-warp so late in the rendering pipeline (which adds more distortion artifacts of its own, that people have been complaining about).


    It does mention in the Unity Integration Document that the next release will have support for anti-aliased render buffers. Apparently, as of now, there is no support for anti-aliasing on render targets.

    AG