Forum Discussion

🚨 This forum is archived and read-only. To submit a forum post, please visit our new Developer Forum. 🚨
wbsk
Protege
10 years ago

SDK Timing functions returning doubles rather than uint64_t

It seems odd to me that in this day and age all the timing-related functions are not using unsigned 64-bit integers - especially given the nature of VR, where such precision is surely a good thing.

I suspect that internally, close to the hardware layer, it doesn't timestamp in doubles.  It's not a huge problem, but it just seems a little strange.

10 Replies

  • Sure, it's not that I expect doubles not to work OK, as of course they will - so as I said, it's not a huge problem or anything.  It just seems an odd choice in the 64-bit age.

    You are also assuming their timestamps come from QPC, which they may not.

    But anyway, if you use a 64-bit integer and a sensible unit of time you don't have any problems with varying timer resolutions - for a million years or so - and it would be more precise.
  • double is also 64 bits, and if an integer would be enough for millions of years, then the remaining 52 integral bits of a double should be enough as well.
  • It has the advantage of being able to represent high accuracy while at the same time being an SI unit (seconds), and is therefore more intuitive and easier to use than milli- or microseconds.  However, it has the disadvantage of losing accuracy over time as you move away from the epoch moment.  Having 52 bits of precision translates into about 15 digits of precision in base 10.  So after 1 second you can still represent picosecond accuracy.  However, after 1,000 seconds you can only represent nanosecond accuracy, and after 1,000,000 seconds you can only represent microsecond accuracy.

    So as long as the epoch is program start, you can safely go 11 days before you drop down to only microsecond accuracy.  Since the Oculus service is long-running, internally they almost certainly have some sort of 64-bit integer time counter that they use to provide the timestamp to clients.

    It might be nice if they exposed that too, but it might be considered too much of an implementation detail.  To expose it they'd have to document the exact units and it might be that they're using something like the Win32 QueryPerformanceCounter function, where the documentation explicitly tells you that there is no unit, just a count and you have to use it in combination with other time functions to determine what the actual duration of a single 'tick' is.  

  • galopin said:

    Basically, we now have a tool to match GPU and CPU timestamps for fine-grained timing needs, but because Oculus is double-based, we are kind of screwed.



    Really?  Why can't you just convert the Oculus double into a uint64_t count of microseconds with (uint64_t)floor(ovr_timestamp * 1e6)?
    galopin
    Heroic Explorer

    jherico said:


    galopin said:

    Basically, we now have a tool to match GPU and CPU timestamps for fine-grained timing needs, but because Oculus is double-based, we are kind of screwed.



    Really?  Why can't you just convert the Oculus double into a uint64_t count of microseconds with (uint64_t)floor(ovr_timestamp * 1e6)?


    Because first, going back and forth between integer values and floating point introduces rounding errors, and second, the CPU and GPU timestamps are not on the same scale either: you query the frequency for the CPU and the GPU with two different APIs (the performance counter and the adapter).

    What we would need is a GetCalibrationClock( oculus, cpu, gpu ) that grants us a proper starting point for everyone.  If Oculus were using the performance counter we would not need that, but because the Oculus double has an unknown reference point, we can only approximate by sampling the value after GetCalibrationClock - and as we are not on a realtime OS, who knows the delta between the reading and the ground truth.
  • Great, someone who groks the implications more. :-)

    Looking at this in more detail, I was off when I suggested millions of years, as it's been a while since I looked at this specifically.  For our games we use microseconds or nanoseconds everywhere with uint64_t, for reasons of networking/compression/determinism etc.

    Given a uint64_t and its maximum value of 18 446 744 073 709 551 615, you have:

    nanoseconds, 10^-9 seconds, maximum duration of ~584.9 years
    microseconds, 10^-6 seconds, maximum duration of ~584 942 years
    milliseconds, 10^-3 seconds, maximum duration of ~584 942 417 years

    Given that, from my current understanding, the polling rate of the Rift is 1 millisecond (and I believe that's the fastest USB and gaming-mouse speed?), it would make sense to work in units that are at least one order of magnitude better than that.

    So the sensible default for any game timer these days is a uint64_t storing either microseconds or nanoseconds.  I believe modern on-board QPC timers are supposed to be nanosecond resolution now (ignoring the obsolete faulty hardware with QPC issues, as we should do by now).

    If you are dealing with derivatives of values etc. then millisecond resolution is not enough, and there are likely lots of awkward complications that could arise with future VR input and tracking methods and with synchronising behaviours between them.  Also, when we get more involved with integrating this all properly with GPU timings and scheduling, as galopin mentions, there is value in 'doing this right from the start'.

    Historically, every input API that has said 'this is good enough' never actually has been, and we have always benefited from lower-level, more precise sample access.

    Some other considerations are the performance implications of SIMD-filtering sample values, the memory size of actual sample data, and the rate and buffered number of samples that future VR input hardware might produce.

    I suspect sub-millisecond polling rates will eventually become desirable for VR, but polling intervals shorter than 1 microsecond would probably never be needed - though I don't know enough about that side of things.

    So I would suggest there is value in fixing this now: providing epoch synchronisation, and probably at least nanosecond and microsecond uint64_t API options - just to be sure.

    With a QPC-style GetTimestampFrequency() kind of thing really being the ultimate, as galopin suggested.

  • wbsk said:


    Given that, from my current understanding, the polling rate of the Rift is 1 millisecond (and I believe that's the fastest USB and gaming-mouse speed?), it would make sense to work in units that are at least one order of magnitude better than that.



    So from @jherico's comment above, using a double is enough to provide microsecond accuracy for about one megasecond (11 days) - that's three orders of magnitude better than the sampling rate.




    wbsk said:



    If you are dealing with derivatives of values etc. then millisecond resolution is not enough, and there are likely lots of awkward complications that could arise with future VR input and tracking methods and with synchronising behaviours between them.

    They are already dealing with derivatives.  The entire prediction mechanism uses velocity and acceleration of the head pose and works fairly well (as long as you don't try to predict too far into the future, like 100 ms).
  • One other nice feature of using uint64 instead of double is that it's easy to perform a precise time rebasing.  What I mean is that you can essentially "recenter" your time relative to the beginning of the current game level: take the level start time as a uint64 and subtract that value from all subsequent uint64 timestamps, putting them into the space of "time relative to level start".  With uint64's this costs no precision at all, so such a "constant rebasing" of time will not drift.  If you did this with doubles you would lose precision, because double values are already most precise around the QPC epoch (usually the last time you restarted your computer) and become less and less precise the longer the user's computer has been running.

    This is all assuming that they use QPC internally.  If they use a different hardware timestamp counter, then the epoch would be the origin of that counter instead of the computer's last reboot time.