Forum Discussion
jherico
13 years ago (Adventurer)
Why is Oculus writing a (non-VR) general purpose library?
Let me preface this by saying that I love my Rift, and I think that Oculus VR has a big future ahead of it. But it's becoming clear to me that OVR is spending valuable developer time writing general-purpose classes. These classes duplicate existing free functionality.
Some examples:
- OVR_Atomic.h - A set of classes for atomic operations against types
- OVR_String.h - A basic String class
- OVR_List.h - A basic linked list container class
- OVR_Array.h - A basic dynamic array (vector) container class
Basically everything in the LibOVR/Src/Kernel subdirectory is an example of this. I'm really concerned about it, because every hour spent on extending this is an hour not working on VR specific stuff, or features the community wants. Writing a string class or a linked list class is something you do in a job interview, not in an SDK, or any kind of production code. These are solved problems. Right now almost half of your total lines of code in the SDK are in the Kernel subdirectory, implying almost half your development time has been spent doing stuff no one outside of OVR cares about or is ever going to use.
I've honestly tried to hold my tongue about this, but the latest SDK version shows that OVR is not only not slowing or reversing this trend, but appears to be doubling down with the new inclusion of a JSON parser, again written from scratch, and again increasing the internal dependence on other duplicated functionality, like the String class. How much earlier would the Linux port have been released if, instead of writing a JSON parser, you'd simply written #include <boost/property_tree/json_parser.hpp>, or, instead of writing a string class, you'd written #include <string>?
I really can't understand why OVR would be doing this unless its developers are either unaware of Boost and the Standard Template Library, or have a serious case of not-invented-here syndrome. If it's the former, I suggest you do a little bit of reading up, but if, as I suspect, it's the latter, I suggest you get over it and start focusing on your core deliverables, not these diversions. I know it's actively impeding my integration of your software, and I suspect I'm not alone.
30 Replies
- Pokey (Honored Guest): I don't know about Boost, but I know at my studio there is a ban on using STL. I think it's because it is unreliable performance-wise? I don't know. But there could be good reasons for rolling your own fundamentals.
- geekmaster (Protege): There's a lot of controversy over using STL or Boost in high-performance applications (such as low-latency gaming, in this case). People raise valid issues on both sides of the argument. Both Boost and STL are forbidden in some shops, purportedly over application performance issues. There are claims that such libraries can yield performance worse than using a managed language such as C#.
But they are certainly useful for rapid prototyping, as are other libraries and other programming languages.
When rolling your own library, you have complete control of latency issues, although such issues can also be avoided with judicious use of such libraries outside the latency-critical path of your application.
Here is a relevant discussion of these issues. Note he was asked about both STL and Boost in general as related to usage in game dev.
STL/Boost, does it belong into gamedev? If only parts of it, which ones?
You're asking about two different things here, right? STL and Boost, separately. But really, my answer is the same: There's nothing wrong with either one per se, but I discourage their use. Use of either encourages people to fit a solution to a problem rather than finding a solution to a problem. The solution should always be appropriate for the data at hand and the constraints of the hardware, etc. Both STL and Boost have an extremely narrow view of the "world" and their appropriate use is limited. Really, I discourage them because they lead programmers down the wrong direction right away. I often say that if you feel like you need either one, you probably don't really understand the problem that you're trying to solve.
But then, immediately following that referenced quote from Mike's blog, he has this to say:
Always remember that a programmer's job is not to write code; a programmer's job is always to transform data from one form into another.
Oh, and don't get caught up in "shoulds" (the compiler "should" do this, the hardware "should" be like that, etc.)
... which tends to agree with what the OP is saying here. So like many things in life, the answer to this question is messy...
- jherico (Adventurer)
"geekmaster" wrote:
There are claims that such libraries can yield performance worse than using a managed language such as C#.
"Pokey" wrote:
I don't know about Boost, but I know at my studio there is a ban on using STL. I think it's because it is unreliable performance-wise? I don't know. But there could be good reasons for rolling your own fundamentals.
The idea that OVR might be avoiding Boost and STL over performance concerns really hadn't occurred to me, and honestly I don't give it any weight. First off, the vast, VAST majority of the usage of these general-purpose classes is in initialization and shutdown code. The only code in the SDK that would be involved in any sort of tight loop in game development is a tiny number of lines in the OVR_XXX_DeviceManager.cpp and OVR_XXX_HIDDevice.cpp files for each platform. So if you really need to optimize that to not use third-party libraries, you can, and right now all the platforms except my POSIX versions are using the relevant native calls anyway, and aren't tightly coupled with the OVR Kernel.
In fact, now that OVR has released an SDK with Linux support, I should be able to benchmark the Boost.Asio implementation directly against the OVR native-calls implementation on both Win32 and Linux platforms.
Yeah, I really don't get how writing a linked list class or a JSON parser from the ground up is the 'right direction', or how using existing code is the wrong direction. It's possible that most of the work Mike has done has been on the graphics side, where you do spend a lot of time building your own structures by hand and need fine-grained control over exact memory locations and the like. It's also possible he simply has his head up his ass. Regardless, even if you take his statement at face value, it doesn't seem to me to suggest that avoiding Boost or STL simply so you can write your own class that does EXACTLY THE SAME THINGS is a good idea.
If you're going to write your own classes that duplicate functionality in widely used libraries, then you can't just say 'performance' and get away with it. The onus is on you to then actually show that a) performance was an issue in the use of the STL classes and b) your classes are demonstrably faster on all the target platforms. When you add that kind of effort on top of writing custom classes in the first place, most people end up thinking "well why don't we see how much we get just with the STL/Boost to start with" and then it turns out it's just fine.
- jherico (Adventurer)
"Pokey" wrote:
But there could be good reasons for rolling your own fundamentals.
Then I'd like to know exactly what they are, otherwise I'm left with the impression that I spent $150 for a headset and SDK and another $150 so some unknown number of devs could indulge themselves in C++ reinvent-the-wheel heaven.
- geekmaster (Protege)
"jherico" wrote:
... If you're going to write your own classes that duplicate functionality in widely used libraries, then you can't just say 'performance' and get away with it. The onus is on you to then actually show that a) performance was an issue in the use of the STL classes and b) your classes are demonstrably faster on all the target platforms. When you add that kind of effort on top of writing custom classes in the first place, most people end up thinking "well why don't we see how much we get just with the STL/Boost to start with" and then it turns out it's just fine.
Agreed. I was a big fan of the Michael Abrash "Zen of Code Optimization" books, back in the day. His big thing was twofold:
1. You need to actually MEASURE your code to determine its timing characteristics.
2. You need to concentrate your optimization efforts on the loop kernels and other critical code paths.
... or at least that is how I remember it without Googling it... ;)
So, I agree that NIH syndrome can be a bad waste of programming resources in some cases, but I also agree that there may be very valid reasons to do it WHEN RELEVANT to actual measured critical timing issues in the code.
That said, I come from a background of writing tiny embedded code with few or no dependencies (programming on the bare metal). So I really like writing simple and tiny modules in C and WINAPI, and not much else, and then combining them into bigger tools. And my REAL joy is writing assembly language (especially on modern instruction sets).
- Fredz (Explorer)
"jherico" wrote:
Then I'd like to know exactly what they are, otherwise I'm left with the impression that I spent $150 for a headset and SDK and another $150 so some unknown number of devs could indulge themselves in C++ reinvent-the-wheel heaven.
Although I'm not a fan of NIH syndrome, I guess there can be some valid reasons to want to minimize reliance on external libraries.
1) It gives them a code base they really know internally, instead of relying on some "black box" functions with possibly unknown side effects (latency and others).
2) They don't have to follow the evolution of several external libraries and republish their code for each upstream modification, i.e. they have complete control over code maintenance, which is quite fundamental when you own a software company.
3) They can quickly correct and publish the code if there is a problem with it instead of waiting for an upstream correction which may take time or has no guarantee of being implemented.
4) They'll also be able to relicense the whole code base under another license if they feel like it in the future, like when the consumer Rift is a success and they've got an SDK they can sell to game studios.
Also I'm not sure the things you've listed may have taken that much time to implement. And they possibly already had an internal library of commonly used functions which could have been refined over time in their previous job.
- densohax (Explorer): It's probably for optimization purposes, for sure.
The notion of using allocators in lists, arrays, hashmaps, etc. comes from EASTL, which is known to be faster than STL largely because of its memory allocators.
As to why they do that... I believe they want to be much more than a headset company, don't you think? They could be in the making of the universal VR library...
Also, have you seen those projects that use every possible library to do their work? These projects suck; they're slow and ugly, hacked together because their third-party libraries sucked balls to begin with, and when the programmers encounter an issue with them, or simply a feature that doesn't do what it should, it's a shitty hacked job!
But I agree that here they simply could use STL since they are not actually making a cross-platform game, and there's nothing really that expensive computationally in there. But who are we to judge them? We don't know where they are going with this...
Oh and a lot of these classes are just ports from Scaleform (search for the word scaleform in there) or some other projects they already have, no big deal.
- jherico (Adventurer)
"Fredz" wrote:
1) It gives them a code base they really know internally, instead of relying on some "black box" functions with possibly unknown side effects (latency and others).
The STL, and to an extent Boost, aren't just any libraries. The STL is part of the C++ specification, including very specific guarantees regarding performance characteristics. Boost is basically the path into the STL, and the best of the Boost libraries are merged into C++ with each new revision.
Also, the latency only really comes into play in a tiny fraction of the codebase: reading the sensor messages from the HID device and passing them into sensor fusion, the code for which is platform-specific and not tightly coupled with the Kernel.
"Fredz" wrote:
2) They don't have to follow the evolution of several external libraries and republish their code for each upstream modification, i.e. they have complete control over code maintenance, which is quite fundamental when you own a software company.
I left professional C++ development almost 10 years ago and basically came back to it for Rift development. The STL classes for strings, linked lists and vectors have not changed appreciably in that time. Boost, on the other hand, has become much more widespread, much more widely adopted and much more mature. These are not fast-moving targets. Further, if there is a concern, an easy solution is just to statically link against the version you're happy with and be done with it. Migration can happen when and if needed.
"Fredz" wrote:
3) They can quickly correct and publish the code if there is a problem with it instead of waiting for an upstream correction which may take time or has no guarantee of being implemented.
This presupposes bugs in the Boost and STL libraries. I'd wager they have far more and better test coverage than the Kernel sources in the OVR SDK.
"Fredz" wrote:
4) They'll also be able to relicense the whole code base under another license if they feel like it in the future, like when the consumer Rift is a success and they've got an SDK they can sell to game studios.
STL and Boost are both very generously licensed, along the lines of BSD licensing, where you can basically take the software like a thief in the night and do with it as you please, so that's really not a concern. Besides that, any game company developer worth his salt is going to have exactly the same concerns over why he would want to integrate a library that includes its own string and list classes instead of using the existing implementations. I'd expect the same level of alarm as you'd get if they'd decided to try to write their samples using direct video driver access instead of DirectX/OpenGL. You don't build your own shitty compact car out of cardboard and barbed wire when there's a fully functional sports car sitting right there next to it.
"Fredz" wrote:
Also I'm not sure the things you've listed may have taken that much time to implement. And they possibly already had an internal library of commonly used functions which could have been refined over time in their previous job.
That is quite possible, perhaps even likely, considering all the current files have copyright headers listing a creation date on the exact same day: September 19th, 2012. However, this raises the even uglier specter of a previous employer coming along and claiming ownership of that code, along with all derived code (like the whole SDK). This still doesn't excuse or explain the decision to create a JSON parser from scratch. Again, not a new technology, not a lack of existing parsers with generous licenses and stable codebases to use.
- densohax (Explorer): jherico,
They probably took the same amount of time you spent writing your previous post to rename Scaleform in all these files, and now use them because the guy who implemented the SDK worked with them before on a project called Scaleform, a project that stands solidly on its legs (having worked on the Scaleform sources myself).
I'm actually paid each time I write Scaleform in a forum post! :D
- atavener (Adventurer)
"jherico" wrote:
Then I'd like to know exactly what they are, otherwise I'm left with the impression that I spent $150 for a headset and SDK and another $150 so some unknown number of devs could indulge themselves in C++ reinvent-the-wheel heaven.
When I backed the Kickstarter, the impression I had was that it was for an HMD, not software. So when I saw the SDK (which is unappealing to me for many reasons), I was disappointed, but I certainly didn't feel gypped! The HMD is great!
To chime in on STL/Boost for gamedev: memory allocation. I've never worked on a game (or console) without making or using a custom allocator -- often several. Libraries or middleware which are made with their own resource management are problematic unless they are given their own pool of resources to manage, by the application -- and not assuming it's cool to create threads or malloc at will!
I'm also guessing that the code in the Kernel subdir pre-existed as part of someone's (Antonov?) common library to work from. We all have libraries similar to this, don't we?
However, I would have preferred the official SDK having a low-level layer with minimal dependencies, minimal code... just a hardware abstraction. Look at how small OpenHMD or libovr_nsb are. Having a "framework" on top of this to start from makes sense for demos or getting started from scratch. But as I said... don't we all have libraries to handle these higher level issues already? And generally coming along for the ride as part of our engine? Therefore... leave this high level as optional for making demos or those without an existing codebase.