Thanks. MilesMiles1981 wrote:
I would definitely use an int to track time, and not a float. Floats are not equally precise across their whole range, so it is not possible to accurately track time with them. Hence int.
For numerical data like sound waves, or other types of signal, this requirement for accuracy is not as stringent, so floats are better than int/fixed-point arithmetic.
The following approximate reasoning could be faulty. If there's something wrong with it, please explain--
A common sequencer Ticks Per Quarter Note value is 480 PPQN. Because ticks are tempo-based, they offer better time resolution at faster tempos and poorer time resolution at slower tempos. This would also apply to a float64 beat timestamp, of course: faster tempos would have better time resolution even using float64 tempo-based Ticks.
It so happens that 480 PPQN at 125 Beats Per Minute tempo offers 1 millisecond resolution.
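As a quick sanity check on that figure, here's a small Python sketch (the constants are just the ones from this example):

```python
# Sanity check: 480 PPQN at 125 BPM should give exactly 1 ms per tick.
PPQN = 480   # ticks per quarter note
BPM = 125    # quarter notes (beats) per minute

ticks_per_second = PPQN * BPM / 60      # 480 * 125 / 60 = 1000.0 ticks/s
ms_per_tick = 1000 / ticks_per_second   # 1.0 millisecond per tick

print(ticks_per_second, ms_per_tick)    # 1000.0 1.0
```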
So how long could a float64 timestamp at 1 Tick Per Quarter Note run before it fails to deliver at least 1 ms resolution at 125 BPM?
At 4/4 time signature, if we tell the first bar to start at a float64 timestamp of 0.0, then second bar would start at timestamp of 4.0, the third bar at timestamp of 8.0, etc.
125 Beats Per Minute / 60 seconds = 2.0833... beats per second.
125 Beats Per Minute / 60000 milliseconds = 0.0020833... beats per millisecond
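The same conversions in Python, with a hypothetical beats_to_seconds() helper (my name, not from any sequencer API) for turning those beat timestamps into wall-clock time:

```python
BPM = 125
beats_per_second = BPM / 60    # ~2.0833 beats per second
beats_per_ms = BPM / 60000     # ~0.0020833 beats per millisecond

def beats_to_seconds(beats, bpm=BPM):
    # Wall-clock seconds for a beat (quarter-note) timestamp at a fixed tempo.
    return beats * 60.0 / bpm

# 4/4 bars start at beat timestamps 0.0, 4.0, 8.0, ...
print(beats_to_seconds(4.0))   # 1.92 -- the second bar starts 1.92 s in
```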
https://en.wikipedia.org/wiki/Double-pr ... int_format
OK, if we can accurately represent up to 15 decimal digits, then the biggest timestamps with "about 1 ms resolution" at 125 BPM might be on the order of 99999999.0000000, 99999999.0020833, 99999999.0041667, etc. That's about 8 significant decimal digits for the rounded Quarter Note count. The IEEE 754 standard specifies a binary64 as having:
Sign bit: 1 bit
Exponent: 11 bits
Significand precision: 53 bits (52 explicitly stored)
This gives 15–17 significant decimal digits precision. If a decimal string with at most 15 significant digits is converted to IEEE 754 double precision representation and then converted back to a string with the same number of significant digits, then the final string should match the original. If an IEEE 754 double precision is converted to a decimal string with at least 17 significant digits and then converted back to double, then the final number must match the original.
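Both round-trip properties are easy to check in Python, whose float is an IEEE 754 binary64 (the sample values below are just illustrations):

```python
# 15-digit round trip: a decimal string with <= 15 significant digits
# survives conversion to double and back to a 15-digit string.
s = "99999999.0020833"         # 15 significant decimal digits
assert f"{float(s):.15g}" == s

# 17-digit round trip: a double survives conversion to a decimal string
# with 17 significant digits and back to double.
x = 0.1                        # not exactly representable in binary
assert float(f"{x:.17g}") == x

print("both round trips hold")
```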
Of course smaller Quarter Note Tick counts would free up more significant digits for better time resolution. At 99999999 Beats, maybe there is a little "fuzz" trying to EXACTLY describe millisecond boundaries. OTOH there should be about 20833 representable counts per millisecond even if we can't EXACTLY describe each millisecond boundary! At 99999999 Beats, that works out to about 0.048 microsecond (48 ns) resolution, since 1 ms / 20833 counts = 0.000048 ms. Even at 99999999 Beats, at a samplerate of 44.1 kHz -- 44.1 samples per millisecond / 20833 counts per millisecond = 0.0021168 -- there would be float64 Tick timing resolution of about 0.002117 sample. That seems "pretty good".
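The decimal-digit estimate is actually conservative: the real spacing between adjacent doubles near 99999999.0 can be read off with math.ulp() (Python 3.9+), and it comes out somewhat finer than the 15-digit reasoning suggests. A sketch, reusing the 125 BPM and 44.1 kHz figures from above:

```python
import math

beats_per_ms = 125 / 60000          # 0.0020833... beats per millisecond

ulp_beats = math.ulp(99999999.0)    # gap between adjacent doubles here
ulp_ms = ulp_beats / beats_per_ms   # that gap expressed in milliseconds
ulp_samples = ulp_ms * 44.1         # ...and in samples at 44.1 kHz

print(ulp_beats)     # ~1.49e-08 beats (exactly 2**-26)
print(ulp_ms)        # ~7.15e-06 ms
print(ulp_samples)   # ~3.15e-04 sample
```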
So 99999999 quarter notes / 125 Beats Per Minute = about 799999 minutes, or 13333 hours, or 555.56 days of "at least 1 ms resolution".
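Checking that runtime arithmetic:

```python
# How long 99999999 quarter notes lasts at a steady 125 BPM.
beats = 99_999_999
minutes = beats / 125    # 799999.992 minutes
hours = minutes / 60     # ~13333.3 hours
days = hours / 24        # ~555.56 days

print(minutes, hours, days)
```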
Sure, after a month or two of continuous playback maybe there could be a little "fuzz" between individual millisecond boundaries. But it ought to be generally much better time resolution than integer 480 PPQN for ordinary-length songs. Or is there some serious flaw in the reasoning?