Parameter input range

DSP, Plugin and Host development discussion.

Post

earlevel wrote: Fri Apr 05, 2024 2:08 am
tl;dr: The important point being that the worst-case step size is 24-bit accurate, not 23.

I hope I explained that clearly. Does my explanation make sense to you?
It makes sense to me.

Another thing to focus on, when this 24-bit range could present issues, is how well-defined the top of the parameter range limit is. If we have linear steps (in float) of (1 / 2^24), then lower exponent bitfield values will generate "gaps". And until we reach the "bottom", is it 23 or 24 exponent values away from the top of the range? Once that bottom is reached, the linear step resolution must change for the worse, unless that step size was already larger to accommodate a better linear range of values.

Post

S0lo wrote: Fri Apr 05, 2024 2:47 am
earlevel wrote: Fri Apr 05, 2024 2:08 am tl;dr: The important point being that the worst-case step size is 24-bit accurate, not 23.

I hope I explained that clearly. Does my explanation make sense to you?
Yes, nice. But your first objection wasn't about step size. It was that I seemed to claim that the mantissa is 23 bits. I claim I didn't.
Well, we weren't talking about step size, therefore it wasn't a point to object to—I brought up step size in my explanation. I'm not trying to take you to task, just explaining my comments. :wink:

I understand now that you thought one bit of the 24 was not useful in the 0..1 case, and therefore said 23 bits of mantissa. But that wasn't clear until you said it explicitly; you hadn't previously acknowledged 24 bits, and your original explanation of float32 only mentioned 23 bits of mantissa and a sign.

Anyway, to recap: it's 24-bit accurate in the worst case, gaining a bit at every halving, compared to integer.
My audio DSP blog: earlevel.com

Post

camsr wrote: Fri Apr 05, 2024 4:36 pm
earlevel wrote: Fri Apr 05, 2024 2:08 am
tl;dr: The important point being that the worst-case step size is 24-bit accurate, not 23.

I hope I explained that clearly. Does my explanation make sense to you?
It makes sense to me.

Another thing to focus on, when this 24-bit range could present issues, is how well-defined the top of the parameter range limit is. If we have linear steps (in float) of (1 / 2^24), then lower exponent bitfield values will generate "gaps". And until we reach the "bottom", is it 23 or 24 exponent values away from the top of the range? Once that bottom is reached, the linear step resolution must change for the worse, unless that step size was already larger to accommodate a better linear range of values.
I don't doubt you're making a reasoned proposal here, I'm just embarrassed to say I'm not sure what you're saying. Could you give an example?

The topic can be a little confusing. I admit to a little head scratching till I realized that the only thing that mattered was the resolution (the step size), and not how many bits may or may not be in play. With float, the worst case is 0.5 to 1.0, 24-bit, and while it's better as we go lower, in the general case that's all we can rely on. With controls, we're typically dealing with things that mimic limits of the physical world, where 24-bit resolution is comfortably at the point where we can't perceive a difference by taking smaller steps anyway. So the fact that we get improved resolution for smaller values isn't important; it's just a consequence of how floats work.

I think it's easiest to think of the ticks on a ruler, then imagine a stretchy ruler. Put a "0" point near the bottom of a piece of paper, and a "1" point near the top. Pin the start of the ruler to 0, and the end to 1. From the halfway point to 1, the ticks define the resolution, the step size. But we need to re-evaluate the first half by stretching the ruler from 0 to halfway—the step size over that range is half the size. And we repeat for as long as we care to. The limit is the point where float32 becomes denormalized and we're just using the mantissa.
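Here's a minimal C++ sketch of that picture (just probing IEEE-754 single precision; the probe points are arbitrary):

[code]
#include <cmath>
#include <cstdio>

// Print the float32 spacing (the step size) just below a few probe values.
// Just below 1.0 the step is 2^-24 -- the worst case anywhere in (0, 1].
// Each halving of the value halves the step, per the stretchy-ruler picture.
int main() {
    const float probes[] = { 1.0f, 0.5f, 0.25f, 0.125f };
    for (float x : probes) {
        float step = x - std::nextafterf(x, 0.0f); // distance to the next float below x
        std::printf("step just below %g = %g (2^%.0f)\n", x, step, std::log2(step));
    }
    // Expected: 2^-24 below 1.0, 2^-25 below 0.5, 2^-26 below 0.25, and so on.
}
[/code]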

With that in mind, where are the gaps you are considering?
My audio DSP blog: earlevel.com

Post

earlevel wrote: Fri Apr 05, 2024 5:40 pm

With that in mind, where are the gaps you are considering?
Assume a knob control for a parameter has a domain of [0,1], and linear steps of 0.125f (9 total values). For the range (.5,1], there are only 4 values of the 8388607 possible that will be set. In the mantissa bitfield range, almost all of the bits will have a 0 value. For the range (.25,.5], only 2 values and still 8388607 possible. (.125,.25] only 1 and still more possible. The next range is interesting, (0,.125] is only 1 value like the previous, is non-zero, this seems similar to a denormal number in a way... and then zero is zero. This linear step has provided 9 discrete values in a range of 2^30 possible values. If we call our top domain limit 8.f now, and scale the linear step the same by 8, step size is 1.f, but now the range has the amount of possible values being 2^30 + 8388608 * 3. The (1 / 2^24) linear step value is still a far cry from that amount.

OK done editing ;)

Post

camsr wrote: Fri Apr 05, 2024 6:21 pm
earlevel wrote: Fri Apr 05, 2024 5:40 pm

With that in mind, where are the gaps you are considering?
Assume a knob control for a parameter has a domain of [0,1], and linear steps of 0.125f (9 total values). For the range (.5,1], there are only 4 values of the 8388607 possible that will be set. In the mantissa bitfield range, almost all of the bits will have a 0 value. For the range (.25,.5], only 2 values and still 8388607 possible. (.125,.25] only 1 and still more possible. The next range is interesting, (0,.125] is only 1 value like the previous, is non-zero, this seems similar to a denormal number in a way... and then zero is zero. This linear step has provided 9 discrete values in a range of 2^30 possible values. If we call our top domain limit 8.f now, and scale the linear step the same by 8, step size is 1.f, but now the range has the amount of possible values being 2^30 + 8388608 * 3. The (1 / 2^24) linear step value is still a far cry from that amount.

OK done editing ;)
OK...good explanation, but let me struggle through parsing it...

First, it's 16777216, not 8388608 (24 bits of unsigned mantissa).

Linear steps 0.125, OK...

"For the range (.5,1], there are only 4 values...almost all of the bits will have a 0 value"—sure...

"For the range (.25,.5], only 2 values...possible"—sure, the range is cut in half...

"The next range is interesting, (0,.125] is only 1 value like the previous, is non-zero, this seems similar to a denormal number in a way..."—lost me. Denormalization doesn't happen until extremely close to zero (insanely close, for control purposes). For us might as well be zero.

"If we call our top domain limit 8.f now, and scale the linear step the same by 8, step size is 1.f, but now the range has the amount of possible values being 2^30 + 8388608 * 3. The (1 / 2^24) linear step value is still a far cry from that amount."—here's where you lost me. I'll eat lunch and come back to it, but maybe you can elaborate.

The bottom line is that:

1. A step size of 2^-24 in the range 0..1 is sufficient for any control situation. ("Change my mind".)

2. Float32 gives a worst-case step size of 2^-24.

Where do you see either going wrong?
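For what it's worth, point 2 pairs with a related fact that's easy to sanity-check by brute force: every multiple of 2^-24 in 0..1 is held exactly by a float32. A quick sketch:

[code]
#include <cstdint>
#include <cstdio>

// k has at most 24 significant bits and dividing by 2^24 only changes the exponent,
// so k / 2^24 is stored exactly and scales back to exactly k.
int main() {
    const uint32_t steps = 1u << 24;  // 16777216
    for (uint32_t k = 0; k <= steps; ++k) {
        float v = static_cast<float>(k) / 16777216.0f;
        if (static_cast<uint32_t>(v * 16777216.0f) != k) {
            std::printf("mismatch at k = %u\n", k);
            return 1;
        }
    }
    std::printf("all %u steps exact\n", steps + 1);
}
[/code]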
My audio DSP blog: earlevel.com

Post

I will be honest... it is not a good explanation. A singular float value is not all that useful if we consider that it will be used for further math operations, which will most likely reduce precision if the operands go out of the capable range. There is a limit to how far apart two float numbers can be before adding them quantizes the smaller one and reduces the result's precision. The exponent quality of floats allows a shift of range that can be useful for many purposes, but I now consider the exponent part of the floating point to be more of a convenience factor than a boost to its actual precision. But truncation is unavoidable even with integers. There is one strategy that will keep the maximal amount of precision with floating points, and that's to add numbers that are close together, and to multiply by factors on power-of-2 boundaries. I have written about these things before, but I am attempting to generate responses to the idea that more possible values can somehow give greater precision results. I have no examples, and I think any useful example would be so niche as to not really get around the fact that the mantissa bit range is literally the limiting factor of floats. The name "float" is apt because of how the exponent works: as long as values in the dataset are within some close distance of each other, it works well, but trying to do some operations with extremely distant values will always fail. But it is possible to encode a 32-bit float as a 24-bit int if we are only interested in linear step values. Quantization will impart some form of error that may make the point moot.
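A small illustration of the add-numbers-closer-together point (the values are arbitrary, just chosen so the effect shows up in float32):

[code]
#include <cstdio>

// Adding a value far below the other operand's step size is simply lost in float32,
// while accumulating the small values first (closer together) keeps them.
int main() {
    float big = 1.0e8f;  // the float32 spacing at 1e8 is 8, so adding 1.0f can't register
    float lost = big;
    for (int i = 0; i < 1000; ++i) lost += 1.0f;  // each add rounds back down to 1e8

    float small = 0.0f;
    for (int i = 0; i < 1000; ++i) small += 1.0f; // exact: 1000
    float kept = big + small;                     // a single add, within half a step

    std::printf("lost: %.1f  kept: %.1f\n", lost, kept);
    // Prints: lost: 100000000.0  kept: 100001000.0
}
[/code]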

Post

earlevel wrote: Fri Apr 05, 2024 5:12 pm Well, we weren't talking about step size, therefore it wasn't a point to object to
Never said or implied that you should have ;)
earlevel wrote: Fri Apr 05, 2024 5:12 pm —I brought up step size in my explanation. I'm not trying to take you to task, just explaining my comments. :wink:
Thank you.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

camsr wrote: Fri Apr 05, 2024 9:12 pm I will be honest... it is not a good explanation. A singular float value is not all that useful if we consider that it will be used for further math operations, which will most likely reduce precision if the operands go out of the capable range. There is a limit to how far apart two float numbers can be before adding them quantizes the smaller one and reduces the result's precision. The exponent quality of floats allows a shift of range that can be useful for many purposes, but I now consider the exponent part of the floating point to be more of a convenience factor than a boost to its actual precision. But truncation is unavoidable even with integers. There is one strategy that will keep the maximal amount of precision with floating points, and that's to add numbers that are close together, and to multiply by factors on power-of-2 boundaries. I have written about these things before, but I am attempting to generate responses to the idea that more possible values can somehow give greater precision results. I have no examples, and I think any useful example would be so niche as to not really get around the fact that the mantissa bit range is literally the limiting factor of floats. The name "float" is apt because of how the exponent works: as long as values in the dataset are within some close distance of each other, it works well, but trying to do some operations with extremely distant values will always fail. But it is possible to encode a 32-bit float as a 24-bit int if we are only interested in linear step values. Quantization will impart some form of error that may make the point moot.
Do you mean my explanation wasn't good?? Puzzled...

You're making the mistake, first, of pointing out the limitations of float32 in calculations. But we aren't talking about calculations, we're talking about holding a control value. You can use it in a float64 (or other) calculation just fine. It's like arguing that float32 (or int24) isn't good enough for sample files or buffers because it limits further calculations. It doesn't.
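A trivial sketch of what I mean (the parameter name and the 20 Hz to 20 kHz mapping are made up for the example):

[code]
#include <cmath>
#include <cstdio>

// A float32 control value widens to double exactly, so storing the parameter as
// float32 doesn't limit the precision of math done downstream in double.
double onePoleCoeff(float normCutoff, double sampleRate) {
    const double pi = 3.14159265358979323846;
    double hz = 20.0 * std::pow(1000.0, static_cast<double>(normCutoff)); // 20 Hz at 0, 20 kHz at 1
    return 1.0 - std::exp(-2.0 * pi * hz / sampleRate);                   // coefficient in double
}

int main() {
    std::printf("%f\n", onePoleCoeff(0.5f, 48000.0));
}
[/code]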

And much of what you say is no less true of integer-based fixed point; the only difference is that it's more obvious with integers, so people screw up less often. If you have to shift an int-based value right 32 bits to align it for an addition, it's apparent that all your bits went away. :cry:

Realize that for a standard potentiometer, 270 degrees rotation, 24-bit is 62137.8 steps per degree. In contrast, unsigned32 gives 15907286 steps per degree. What do we need it for? Not rhetorical, I'm asking you to show me the black swan. There is no knob on any gear you own, analog or digital, that has close to 24-bit resolution. Johnson-Nyquist noise alone ensures that's not possible. Sure, you can type in arbitrary precision to a control, but it will be indiscernible from the next closest 24-bit step.

Anyway, if you really think so, you need to lobby Steinberg and others to change, not me (I'm just explaining why they're not wrong). Be prepared with a good argument.
My audio DSP blog: earlevel.com

Post

earlevel wrote: Fri Apr 05, 2024 10:24 pm Realize that for a standard potentiometer, 270 degrees rotation, 24-bit is 62137.8 steps per degree. In contrast, unsigned32 gives 15907286 steps per degree. What do we need it for? Not rhetorical, I'm asking you to show me the black swan. There is no knob on any gear you own, analog or digital, that has close to 24-bit resolution. Johnson-Nyquist noise alone ensures that's not possible. Sure, you can type in arbitrary precision to a control, but it will be indiscernible from the next closest 24-bit step.
This is interesting. What's the best resolution that can be done on a hardware knob/pot, in your opinion?
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

S0lo wrote: Fri Apr 05, 2024 10:45 pm
earlevel wrote: Fri Apr 05, 2024 10:24 pm Realize that for a standard potentiometer, 270 degrees rotation, 24-bit is 62137.8 steps per degree. In contrast, unsigned32 gives 15907286 steps per degree. What do we need it for? Not rhetorical, I'm asking you to show me the black swan. There is no knob on any gear you own, analog or digital, that has close to 24-bit resolution. Johnson-Nyquist noise alone ensures that's not possible. Sure, you can type in arbitrary precision to a control, but it will be indiscernible from the next closest 24-bit step.
This is interesting. What's the best resolution that can be done on a hardware knob/pot, in your opinion?
The short answer: the practical limit is about 20 bits with significant design care, to pick a round number. We could argue plus or minus a bit or two. In general, for audio, 22-bit is basically the asymptote of what's achievable.

There's no call for it for controls, though. 18 bits gets you about a thousand steps per degree of knob rotation—a ludicrous amount that you can't possibly differentiate in a control change—and 20 gets you almost 4000, so no one is going to design that precision for absolutely no useful gain. Even if you could turn that knob with 0.0025 degree precision, the wiper and the granularity of the resistive material aren't going to comply. Even if it were driving a parameter that you could tell the difference in (which is unlikely anyway). Doubtful anyone is going to tell the difference between 24-bit conversion and 10-bit, for that matter. Controls have historically been digitized by converters with far less resolution than audio.

But to stick strictly to the electronics aspect: the most basic limit is Johnson-Nyquist noise (aka thermal noise). Anything with resistance and heat will have it, and by heat I mean anything above absolute zero. A potentiometer is a resistor, and there are plenty more resistors in the circuit, including the converters. To be completely transparent, for controls you can cheat—they don't change often, so you can average the noise away, depending on how long you want to take for the measurement and how much money you want to throw at something that can't be perceived.
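To put a rough number on it, a back-of-the-envelope sketch (the 10 kOhm, 300 K, 20 kHz bandwidth, and 5 V span figures are just illustrative assumptions):

[code]
#include <cmath>
#include <cstdio>

// Johnson-Nyquist noise: v_rms = sqrt(4 * kB * T * R * bandwidth).
int main() {
    const double kB = 1.380649e-23; // Boltzmann constant, J/K
    const double T  = 300.0;        // roughly room temperature, K
    const double R  = 10e3;         // 10 kOhm pot
    const double bw = 20e3;         // 20 kHz bandwidth
    const double fullScale = 5.0;   // 5 V control span

    double vrms = std::sqrt(4.0 * kB * T * R * bw);
    double bits = std::log2(fullScale / vrms);
    std::printf("thermal noise ~ %.2f uV rms, ~ %.1f bits below full scale\n", vrms * 1e6, bits);
    // Roughly 1.8 uV rms, on the order of 21 bits -- consistent with the low-20s ceiling above.
}
[/code]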

Trying to keep this short, I just deleted three paragraphs. The bottom line is that even 24-bit DACs only deliver 20-21 bits, and 22 is about the limit possible. But no one should cry; that's due to the limits of the analog circuitry and physics, so nothing in your system, at any price, is any better, and your amp is way worse.
My audio DSP blog: earlevel.com

Post

Well, 20 bits is really good! Even 18 bits is way more than what I thought possible. People have been waiting for ages for true 14-bit MIDI pots, to no avail. Even the ones who claim to support it seem to use some sort of lame trickery, like using rotary encoders that require multiple spins, or just skipping values bluntly. But that's the tip of the iceberg. Here is a discussion I had years back about it:

https://modwiggler.com/forum/viewtopic. ... 6694f8a5ee

People in that forum live and breathe hardware every day. If you like, take your time to read it, as some responses are interesting.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.
