Forum Discussion
mastasky
13 years ago · Explorer
Code for applying distortion
Hi all,
my C++ and DirectX knowledge is very limited, so I have a hard time coming to grips with the rendering part of the SDK.
For now I'd like to create a simple stereo viewer: load two images (left eye and right eye), distort them properly, and combine them into one image for the Rift.
The issue is: I don't know how to apply the distortion filter (do I need to use Direct3D, or can I apply the filter to a 2D image?). Can anyone provide some (simple) sample code for loading a texture and distorting it for the Rift? The SDK code is not simple and is all embedded in actual scene-rendering code. C# would be best (if possible); C++ could also work.
Cheers
21 Replies
- 38leinaD (Honored Guest)
Hi mastasky,
You have some options here.
* You can use Direct3D, but learning the API might take you some time. After that, you should be able to modify the OculusTinyRoom source code to do what you would like to do.
* You can use OpenGL, but that might be even harder for you, as I assume you have no knowledge there either, and there is no complete example code for the Rift yet anyway.
* If you just want to render distorted/shifted still images and don't care about the sensors, plain image filtering might be the easiest way to get the basics. Take the D3D fragment shader listed in the Oculus VR SDK document and put it into a double loop (looping over the x/y pixels of your image), executing it for each pixel. You obviously cannot "execute" the shader as-is, but have to write your own routine that does what the shader is doing.
But if you want to do anything more, then I recommend learning the basics of D3D or OpenGL instead. It might take you some days to get where you want, but you will have it much easier continuing from there. And to put it in two sentences: by the time you get approach 3 running, you will likely have learned so much about transformations and D3D shaders that you could have taken that route in the first place.
So, my recommendation: tinker with the Oculus Tiny Room demo, get your Direct3D brushed up, and modify the demo into your stereo image viewer.
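As a rough illustration of approach 3, here is a minimal scalar C++ sketch of the radial warp polynomial applied on the CPU. All names here are my own, not from the SDK, and the coefficient values (1.00, 0.22, 0.24, 0.00) are the DK1-style defaults quoted elsewhere in this thread:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical scalar version of the SDK's per-pixel warp.
// Coordinates are normalized to [-1, 1] around the lens center.
struct Vec2 { float x, y; };

// DK1-style distortion coefficients k0..k3 (values as quoted in this thread).
static const float K[4] = { 1.00f, 0.22f, 0.24f, 0.00f };

Vec2 hmdWarp(Vec2 p)
{
    float rr = p.x * p.x + p.y * p.y;                       // squared radius
    float s  = K[0] + rr * (K[1] + rr * (K[2] + rr * K[3])); // k0 + k1*r^2 + k2*r^4 + k3*r^6
    return Vec2{ p.x * s, p.y * s };                        // push the sample outward
}
```

You would loop over every destination pixel, convert its coordinates into this normalized space, warp them, and fetch the source pixel at the warped position.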
Cheers,
Daniel
- jwilkins (Explorer)
Using the OpenGL math library (GLM), I rewrote the shader in C++:
glm::vec2 hmd_lens_center;
glm::vec2 hmd_scale_out;
glm::vec2 hmd_scale_in;
glm::vec4 hmd_warp_param(1.00f, 0.22f, 0.24f, 0.00f); // these should come from the OVR SDK
// scales input texture coordinates for distortion.
inline glm::vec2 hmd_warp(const glm::vec2& in)
{
const glm::vec2 v = hmd_scale_in * (in - hmd_lens_center);
const glm::float_t rr = glm::dot(v, v);
const glm::vec4 r = glm::vec4(1, rr, rr*rr, rr*rr*rr);
const glm::vec2 out = hmd_scale_out * v * glm::dot(hmd_warp_param, r) + hmd_lens_center;
return out;
}
The rest is harder to follow since it relies on a software rendering library (Allegro).
glm::float_t hmd_letterbox = 0.05f; // top and bottom aren't so important so save some time by not copying them
glm::float_t hmd_zoom = 0.95f;
typedef unsigned short PIXEL_TYPE;
BITMAP* framebuffer; // side by side, unwarped image, OK for direct reading
BITMAP* warped; // memory bitmap, OK for direct writing
BITMAP* dest; // hardware bitmap, only write to with blit!
int blit_x, blit_y, blit_w, blit_h; // coords needed to blit 'warped' to 'dest'
void cview::blast_ovr( BITMAP *dest )
{
    int half = warped->w / 2;
    hmd_scale_out.x = hmd_zoom * half;
    hmd_scale_out.y = hmd_zoom * warped->h;
    hmd_scale_in.x = 1.0f / half;
    hmd_scale_in.y = 1.0f / warped->h;
    int top = warped->h * hmd_letterbox;
    int bot = warped->h - top;
    for (int y = top; y < bot; ++y) {
        // left eye: lens center sits in the middle of the left half
        hmd_lens_center.x = half / 2;
        hmd_lens_center.y = warped->h / 2;
        PIXEL_TYPE* dst = (PIXEL_TYPE*)(warped->line[y]);
        for (int x = 0; x < half; ++x, ++dst) {
            glm::vec2 tc = hmd_warp(glm::vec2(x, y));
            if (tc.x < 0 || tc.x >= half || tc.y < 0 || tc.y >= warped->h)
                continue;
            *dst = ((PIXEL_TYPE*)framebuffer->line[(int)tc.y])[(int)tc.x];
        }
        // right eye: lens center sits in the middle of the right half
        hmd_lens_center.x = half / 2 + half;
        hmd_lens_center.y = warped->h / 2;
        dst = (PIXEL_TYPE*)(warped->line[y]) + half;
        for (int x = half; x < warped->w; ++x, ++dst) {
            glm::vec2 tc = hmd_warp(glm::vec2(x, y));
            if (tc.x < half || tc.x >= warped->w || tc.y < 0 || tc.y >= warped->h)
                continue;
            *dst = ((PIXEL_TYPE*)framebuffer->line[(int)tc.y])[(int)tc.x];
        }
    }
    blit(warped, dest, 0, 0, blit_x, blit_y, blit_w, blit_h);
}
- mastasky (Explorer)
Thanks both, great advice!
I started implementing the third option, for now simply transferring the source image pixel by pixel into a target image of the same size. Even without running the distortion algorithm, this is painfully slow. I tried applying it to a 30 fps webcam stream, which slowed it down to around 1 fps. That is of course completely unusable for the Rift. So it may work for still images, but not for video.
That means I'll have to start digging into Direct3D or OpenGL. Time to learn something new.
Thanks again, I shall share my results afterwards. In the meantime, if anyone has written this code or wants to write a tutorial about it, please do so.
- jwilkins (Explorer)
The original version of my code used the library's "putpixel" and "getpixel" functions to read and write data. These functions have pretty high overhead, especially "putpixel" to the destination, which is ultimately a DirectDraw surface; locking/unlocking a DirectDraw surface is really slow.
The version here reads and writes directly between two in-memory bitmaps and then blits the result with a hardware-accelerated blit call. It is much faster (I never even let the original run to completion, it was that slow). However, it is still noticeably slow on my older computer (AMD X2), although I don't notice it on a newer machine (Core 2) unless it is the debug build.
However, I hope I can find a way to directly write some Direct3D texture memory and then apply the warp using a shader later.
BUT, since I plan on using my software renderer to test out some of Michael Abrash's "racing the beam" ideas, I cannot really take advantage of Direct3D (although I do intend to create a hardware accelerated mode for this renderer at some point).
- shole (Honored Guest)
If you know the source and target resolutions and do not need to resample the image, you could just map the source and target pixels with pointers, without any repeated per-pixel maths.
Aliasing would be horrible but it would be fast with zero calculation time.
Or with shaders this could be done with a warp map, where a color corresponds to a source image pixel.
- jwilkins (Explorer)
You could do that, but deciding whether recalculating or precomputing is more efficient requires actually measuring the time, not just speculating. Remember that instead of recomputing 1280x800 pointers, you would then be pulling them from 4 MB of memory, which has to fit in your cache.
Part of my programming language research is figuring out ways to switch easily between these different optimizations, so that you can test them quickly without having to write five different versions of your code.
- jwilkins (Explorer)
I wrote a new version of my code above using OpenMP to parallelise the loop so that it runs on 4 cores. It blazes now and is no longer the bottleneck in my code.
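To illustrate the OpenMP change, here is a minimal sketch of parallelising a per-row warp loop with `#pragma omp parallel for`; the names, image layout, and toy one-term warp here are my own illustration, not the actual code:

```cpp
#include <vector>
#include <cstddef>

// Hypothetical illustration: warp a grayscale image row by row.
// Each output row is written independently, so the outer loop
// can be split across cores with a single OpenMP pragma.
std::vector<float> warpImage(const std::vector<float>& src, int w, int h)
{
    std::vector<float> dst(src.size(), 0.0f);
    const float cx = w / 2.0f, cy = h / 2.0f;

    // Rows share no writable state, so this parallelises safely.
    #pragma omp parallel for
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            // toy radial warp, standing in for hmd_warp()
            float nx = (x - cx) / cx, ny = (y - cy) / cy;
            float rr = nx * nx + ny * ny;
            float s  = 1.0f + 0.22f * rr;   // DK1-style k1 term only
            int sx = (int)(cx + nx * s * cx);
            int sy = (int)(cy + ny * s * cy);
            if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                dst[(std::size_t)y * w + x] = src[(std::size_t)sy * w + sx];
        }
    }
    return dst;
}
```

Build with -fopenmp (GCC/Clang) or /openmp (MSVC); without the flag the pragma is simply ignored and the loop runs serially but still correctly.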
- geekmaster (Protege)
"jwilkins" wrote:
... BUT, since I plan on using my software renderer to test out some of Michael Abrash's "racing the beam" ideas, I cannot really take advantage of Direct3D (although I do intend to create a hardware accelerated mode for this renderer at some point).
Does the Rift DK display use a "rolling display" so that you CAN implement those "racing the beam" ideas?
Keep in mind that Michael Abrash also said that for racing the beam, eye tracking may be more important than head tracking.
- jwilkins (Explorer)
I was under the impression that LCDs scanned frames out the same way as CRTs, and that buffering and switching all at once was the exception, not the rule.
But thinking about it a little more, I now realize that a lot of displays must do a lot of buffering; I doubt Oculus chose one of those, though, due to the latency.
- geekmaster (Protege)
"jwilkins" wrote:
I was under the impression that LCDs scanned frames out in the same way as CRTs and that buffering and switching all at once was the exception, not the rule.
But thinking about it a little more I now realize that a lot of displays must do a lot of buffering, but I doubt Oculus chose one of these due to the latency.
The reason I question that is that my experience shows vertical lines remaining vertical, even in the stroboscopic motion ghosts. To see them, you need to turn your head rapidly while looking at (and following with your eyes) a high-contrast vertical edge.
With a rolling display, those vertical lines should look slanted, because the pixel columns were at different locations as the display moved while the scanlines were being drawn. That is not what I remember having seen in my Rift DK... but to know for sure, I will have to try that little test again...
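For a rough sense of scale (these numbers are my own illustrative assumptions, not measurements from the thread): on a rolling-scanout display, the slant of a vertical edge is roughly the head's angular velocity times the top-to-bottom scanout time, converted to pixels. A small sketch:

```cpp
// Hypothetical back-of-envelope: how slanted would a vertical edge look
// on a rolling-scanout display while the head turns?
// Assumed inputs: head turn rate in deg/s, scanout time in seconds,
// and a rough horizontal pixels-per-degree figure for the panel.
double slantPixels(double headDegPerSec, double scanoutSec, double pxPerDeg)
{
    double driftDeg = headDegPerSec * scanoutSec; // angle covered during one scanout
    return driftDeg * pxPerDeg;                   // horizontal shift, top vs bottom
}
```

With, say, a 200 deg/s head turn, a 16 ms scanout, and ~7 px/deg, the top and bottom of an edge would be offset by roughly 22 pixels, which ought to be clearly visible as a slant if the panel really scans rolling-style.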