# Kilohearts Plug-ins (My Misunderstanding)

VST, AU, AAX, CLAP, etc. Plugin Virtual Instruments Discussion
limitlesssss wrote: Thu Jul 18, 2024 8:23 pm All I'm asking is, why should there be a noise floor to begin with in a purely digital signal that is being operated on in a 64-bit FP DAW environment?
Because floating-point representations of most numerical values are inexact. A fun example: in floating-point math under the current standards, 0.1 + 0.2 does not exactly equal 0.3. A good illustration of this is here:

https://0.30000000000000004.com

No matter how many bits of precision you add, you're just moving that 4 a few more zeroes to the right.

Oh, by the way, that example (in C, C++, and many other languages) is using 64-bit precision.
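The claim is easy to check for yourself. Python's float is a 64-bit IEEE 754 double, and still:

```python
# Python floats are 64-bit IEEE 754 doubles, yet the sum is inexact:
a = 0.1 + 0.2
print(a)          # 0.30000000000000004
print(a == 0.3)   # False

# Neither operand is stored exactly; Decimal reveals the actual value:
from decimal import Decimal
print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625
```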

Some information from Reaktor's manual:
More Signal Types

Float Signals

The most common signal type used for DSP (digital signal processing) on modern personal computers is floating point (float for short). Floats can represent a wide range of values, as large as 10^38 (in 32 bit mode) or even 10^308 (in 64 bit mode). As useful as they are, floats have a drawback – limited precision. The precision is higher in 64 bit mode, but it is still limited.

The precision of float values is limited for technical reasons. If it weren’t limited, float values would require an infinite amount of memory to store and processing them would require an infinitely fast CPU. It’s similar to the impossibility of writing the full decimal representation of a transcendental number, such as π, on a finite piece of paper. Even if you can somehow compute all the digits (which is not always possible for transcendental numbers), you will eventually run out of paper (and time).

The signals and memory storage that we have been discussing so far use 32 bit floating point numbers for their representation. Reaktor Core also offers the possibility of using 64 bit floats, should you need higher precision (or a larger value range, although it’s difficult to imagine that 10^-38 to 10^38 is not a large enough range).

By default all processing in Reaktor Core is done in 32 bit floats. This doesn’t exactly mean that the signals are really processed as 32 bit floats, but rather that at minimum, 32 bit floats will be used for processing (although 64 bit floats may occasionally be used for intermediate results).
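The 32-bit vs. 64-bit precision gap is easy to see in Python, which can squeeze a double through a 32-bit float representation with the struct module (a sketch for illustration, not Reaktor code):

```python
import struct

def to_float32(x: float) -> float:
    """Round a Python double (64-bit) to its nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

x64 = 0.1               # nearest 64-bit double to one tenth
x32 = to_float32(x64)   # nearest 32-bit float to one tenth

print(x64)   # 0.1
print(x32)   # 0.10000000149011612 - float32 misses by far more
```

Neither value is exactly one tenth; the 32-bit version just misses by roughly a billion times more.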

Integer Signals

There is another data type commonly supported by modern CPUs, and actually this one is more fundamental to the digital world than floats. It is the integer type. Integer numbers are represented and processed with infinite precision. Although the precision of integers is infinite, the range of representable integer values is limited. For 32 bit integers the values can go up to more than 10^9.

Infinite precision for storage and processing of integer values is possible because they don’t have any decimal digits after the period, so you can write them using a finite number of digits. Let’s write down the number of seconds in an hour: 3, 6, 0, 0, done. It’s that easy. If you try to write down the value of π you cannot do it completely: 3, 1, 4, 1, stop. Not complete, OK let’s write a couple more digits: 5, 9, stop. Still not complete, and so on. With an integer number you can do it completely and precisely: 3600, that’s it.
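Both halves of that claim, exactness and a limited range, can be sketched in Python (whose built-in integers are arbitrary-precision, so the 32-bit limit has to be simulated by masking):

```python
# Integer arithmetic is exact: 60 * 60 is 3600, always.
assert 60 * 60 == 3600

# But a fixed 32-bit width limits the range. Python ints never
# overflow, so we simulate 32-bit signed wraparound ourselves:
def wrap_int32(n: int) -> int:
    n &= 0xFFFFFFFF
    return n - 0x1_0000_0000 if n >= 0x8000_0000 else n

INT32_MAX = 2**31 - 1            # 2147483647, a bit over 2 * 10**9
print(wrap_int32(INT32_MAX))     # 2147483647: still fits
print(wrap_int32(INT32_MAX + 1)) # -2147483648: past the range limit
```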

While floating point is a natural choice for values that are changing continuously, as are audio signals, for discretely changing values (for example, counters) integers may be a more appropriate choice.

Many Reaktor Core modules can be switched to integer mode, in which case they expect integer signals at their inputs; they process them as integers (that means with infinite precision); and they produce the integer outputs. Examples of such modules include arithmetic modules like adder, multiplier, or subtractor. There are even some modules that can be used only on integers.

Minimum 32 bit length is guaranteed for Reaktor Core integer values.

Integer signals can be freely interconnected with floats, but the wires created between different type signals will perform signal conversion, which can use a certain amount of CPU. At the time of this writing, the extra CPU usage is somewhat noticeable on PCs and quite significant on Macs. The OBC connections of float and integer types are not compatible with each other, of course.

There can also be information loss during such conversions. In particular, large integers cannot be precisely represented by floats, and obviously, most floats cannot be precisely represented by integers. Large floats (larger than the largest representable integer) cannot be represented as integers at all, in which case the result of the conversion is undefined. During float-to-integer conversion, the values will be rounded approximately to the nearest integer. We say approximately because the result of rounding 0.5 can be either 0 or 1, although you can rely on the fact that 0.49 will be rounded to 0, and 0.51 to 1.
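Both loss directions can be shown in a few lines of Python (whose round() resolves the ambiguous .5 case by rounding ties to even, which is one legitimate choice):

```python
# Large integers cannot survive a trip through a 64-bit float:
n = 2**53 + 1
print(float(n) == n)          # False: 2**53 + 1 has no double representation
print(float(2**53) == 2**53)  # True: 2**53 itself still does

# Float-to-integer conversion rounds to the nearest integer...
print(round(0.49), round(0.51))   # 0 1  - these directions are reliable
# ...but exactly .5 is the unreliable case; Python ties to even:
print(round(0.5), round(1.5))     # 0 2
```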

It is important to understand that switching an operation's processing mode to integer is not the same as converting that operation's floating-point result to an integer.
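A sketch of that difference, using 32-bit floats simulated via Python's struct module (illustrative only, not Reaktor code):

```python
import struct

def f32(x: float) -> float:
    """Round a value to the nearest 32-bit float."""
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = 16_777_216, 3   # 2**24 and a small addend

# Integer-mode addition: infinite precision within range.
print(a + b)  # 16777219

# Float-mode addition, with the result converted to integer afterwards:
print(int(f32(f32(a) + f32(b))))  # 16777220 - the sum was rounded first
```

Converting the inputs exactly did not help: the 32-bit float addition itself rounded the result before it ever became an integer.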

--------

Rounding

There is only one type of rounding available in Core by default: rounding to the nearest integer. The other rounding modes (rounding down, rounding up and rounding towards zero) are not provided. There are a number of technical reasons behind this, related to the precision and efficiency of the generated code.

While in principle the other rounding modes could be implemented as Core Macros, often the same Structures can be implemented just using the round-to-nearest mode, thereby not losing the efficiency. For example, an interpolated read from a buffer in a classical implementation rounds the non-integer position x down to an integer n and then performs a linear interpolation between the values at n and n+1, where the interpolation weights are 1-(x-n) and x-n. However, instead one could round (x-0.5) to the nearest integer. Since for values with a fractional part of exactly 0.5 the round-to-nearest direction is not specified, sometimes, for an exact integer x, the value x-0.5 will be rounded to n=x-1 rather than n=x. However, for the interpolation it does not matter, since the interpolation weights in this case will be 0 and 1 instead of 1 and 0, producing the same result.

One might need to ensure that the buffer has at least one 'extra sample' on each side of the region, which is read by the interpolator (this is usually a good idea regardless of the rounding mode used, since there can be precision losses in the computations). These extra samples can be set to an arbitrary value (within the same value range as the typical values in the buffer), since the interpolation weights for the extra samples will be very close to zero.
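The equivalence claimed above is easy to verify with a small sketch (Python's round() ties to even, which conveniently exercises the n = x - 1 case at whole-number positions):

```python
def read_floor(buf, x):
    """Classic linear interpolated read: floor the position."""
    n = int(x)                # floor, for non-negative x
    f = x - n
    return buf[n] * (1.0 - f) + buf[n + 1] * f

def read_round(buf, x):
    """Same read using only round-to-nearest, as described above."""
    n = round(x - 0.5)        # may land at x - 1 for whole-number x
    f = x - n                 # then the weights become 0 and 1
    return buf[n] * (1.0 - f) + buf[n + 1] * f

buf = [0.0, 1.0, 4.0, 9.0, 16.0, 25.0]   # includes spare edge samples
for x in (1.25, 2.0, 2.75, 3.0):
    assert read_floor(buf, x) == read_round(buf, x)
```

At x = 3.0, round(2.5) lands on n = 2 rather than 3, but the weights become 0 and 1 and the result is identical, just as the text argues.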

In situations where a division result needs to be rounded down, usually an integer division is meant semantically and this is what normally should be used as well. Note that the integer division rounding mode is 'towards zero'.

The reason that the integer division rounding mode is 'towards zero' rather than 'down' is that this is the mode supported by modern processors. Implementing a rounding down mode instead would produce a higher CPU load. However, there is no difference between the two modes for non-negative integers.
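The two modes can be compared directly in Python, whose // operator floors, while C-family integer division (and, per the above, Core) truncates towards zero:

```python
import math

a, b = -7, 2

# Truncation towards zero - what C, C++, and most CPUs do natively:
print(int(a / b))         # -3 : int() drops the fractional part

# Flooring ("rounding down") - what Python's // operator does:
print(a // b)             # -4
print(math.floor(a / b))  # -4, the same thing

# For non-negative operands the two modes agree:
assert int(7 / 2) == 7 // 2 == 3
```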

The rounding down mode of the integer division seems to be generally more useful in signal processing applications. Therefore, if future processors support this mode natively in an efficient way, it is possible that Core integer division will also employ this mode. While in this case Core will probably implement a legacy mode switch, it is recommended to stay away from relying on specific rounding direction when there are negative integers involved in the division.

limitlesssss wrote: Sat Jul 20, 2024 5:22 am Some information from Reaktor's manual:
[snip somewhat dumbed down explanation of floating point precision vs binary encoded integers]
...and? Not sure what the point is?

stoopicus wrote: Sat Jul 20, 2024 5:29 am
limitlesssss wrote: Sat Jul 20, 2024 5:22 am Some information from Reaktor's manual:
[snip somewhat dumbed down explanation of floating point precision vs binary encoded integers]
...and? Not sure what the point is?
No point for you. Just some general info about it from Reaktor's manual.

limitlesssss wrote: Sat Jul 20, 2024 5:30 am No point for you. Just some general info about it from Reaktor's manual.
Ahh gotcha. Yeah, their explanation there isn't inaccurate per se but is more about how Reaktor chose to work with the issues internally and also some of their specific clamping choices.

I changed the title of this thread, because I now understand the bit-transparent issue was caused by rounding error between positive and negative values and not an inherent problem with Kilohearts plug-ins themselves, and therefore it would be unfair to Kilohearts (arguably the greatest audio developers) to leave such a title on.

I ♥ Kilohearts.

Yeah. In the end this is just an artifact of how floating point numerical representations and arithmetic work and adds up to no big deal in this case.

Don't feel bad about not knowing this, this also surprises many software engineers when they learn it. In fact programming errors related to this are a common thing to catch in code review.

stoopicus wrote: Sat Jul 20, 2024 8:59 pm Yeah. In the end this is just an artifact of how floating point numerical representations and arithmetic work and adds up to no big deal in this case.

Don't feel bad about not knowing this, this also surprises many software engineers when they learn it. In fact programming errors related to this are a common thing to catch in code review.
That's such a good point. Thank you for your words of encouragement.

limitlesssss wrote: Sat Jul 20, 2024 8:50 pm I changed the title of this thread, because I now understand the bit-transparent issue was caused by rounding error between positive and negative values and not an inherent problem with Kilohearts plug-ins themselves, and therefore it would be unfair to Kilohearts (arguably the greatest audio developers) to leave such a title on.

I ♥ Kilohearts.
Much respect for saying this. Many people dig in their heels, particularly in online debates; you've admitted a misunderstanding and that's admirable. Fwiw, this thread got me interested in the broader topic, plus prompted me to restart my Kilohearts sub. Good work!

swilow11 wrote: Sat Jul 20, 2024 10:28 pm
limitlesssss wrote: Sat Jul 20, 2024 8:50 pm I changed the title of this thread, because I now understand the bit-transparent issue was caused by rounding error between positive and negative values and not an inherent problem with Kilohearts plug-ins themselves, and therefore it would be unfair to Kilohearts (arguably the greatest audio developers) to leave such a title on.

I ♥ Kilohearts.
Much respect for saying this. Many people dig in their heels, particularly in online debates; you've admitted a misunderstanding and that's admirable. Fwiw, this thread got me interested in the broader topic, plus prompted me to restart my Kilohearts sub. Good work!
I appreciate it. Oh and enjoy your Kilohearts sub.

limitlesssss wrote: Fri Jul 19, 2024 3:02 am
SamDi wrote: Fri Jul 19, 2024 12:26 am You could do the following tests to prove or disprove my statements:
...
2.) Make the same tests with 64 and 32 bit, but with +/-20 dB. Since this is a factor of exactly 10, no rounding errors should occur at either bit depth.
...
...
• Unfortunately, when cutting and boosting by the same amount (yes, even -/+20 dB) it doesn't null at either precision, single or double.
...
and
Tj Shredder wrote: Fri Jul 19, 2024 12:03 am The dB value will be converted into a level. That already will have a rounding error, and those errors will be different for positive and negative numbers.
There you go…
and
FrogsInPants wrote: Fri Jul 19, 2024 3:35 am
SamDi wrote: Fri Jul 19, 2024 12:26 am 2.) Make the same tests with 64 and 32 bit, but with +/-20 dB. Since this is a factor of exactly 10, no rounding errors should occur at either bit depth.
You can get rounding errors in floating point operations when multiplying or dividing by 10. This should be especially easy to see with dividing, as 1/10 does not itself have an exact floating point representation.
Yes, this was actually a silly idea of mine: I had in mind that +20 dB, hence a factor of 10, can be represented without rounding errors, but I forgot that this isn't the case when multiplying another number by that factor of ten. Also, the statement that each dB change from the DAW's slider needs to be converted to a number, which is then multiplied with the actual sound signal, is true.
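The conversion chain just described can be sketched in Python doubles (apply_db here is a hypothetical stand-in for a DAW's gain stage, not any particular product's code):

```python
def apply_db(x: float, db: float) -> float:
    """Apply a dB gain change as a multiplicative factor."""
    return x * 10.0 ** (db / 20.0)

# +20 dB is a factor of 10.0 (exact), but -20 dB is a factor of 0.1,
# which has no exact binary representation - so boost-then-cut need
# not restore every sample bit-for-bit:
mismatches = sum(
    1 for i in range(1, 1000)
    if apply_db(apply_db(i / 999.0, +20.0), -20.0) != i / 999.0
)
print(mismatches > 0)  # True: the round trip is not a perfect null
```

Some individual samples do survive the round trip by luck of rounding, which is why a null test has to be run over real material rather than a single value.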
limitlesssss wrote: Sat Jul 20, 2024 5:22 am Some information from Reaktor's manual:
I am not sure if I got your point, but just to make it clear: rounding errors are not inherent to FLOAT representation only; you have this with all kinds of discrete numbers, hence also with INT. In the end it's inherent to digital computation.
limitlesssss wrote: Sat Jul 20, 2024 8:50 pm I changed the title of this thread, because I now understand the bit-transparent issue was caused by rounding error between positive and negative values and not an inherent problem with Kilohearts plug-ins themselves, and therefore it would be unfair to Kilohearts (arguably the greatest audio developers) to leave such a title on.

I ♥ Kilohearts.
Strictly speaking, it's not the difference between positive and negative values, but that each calculation introduces rounding errors by itself.
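One consequence worth seeing: because every operation rounds its own result, even the order of additions matters. A quick Python sketch:

```python
# Each float operation rounds its result, so algebraically equal
# expressions can disagree:
a, b, c = 1e16, -1e16, 1.0

print((a + b) + c)   # 1.0 : a and b cancel first, then c survives
print(a + (b + c))   # 0.0 : c is absorbed into the huge value of b
```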
limitlesssss wrote: Fri Jul 19, 2024 3:49 am A lot and it probably sounds good to them. And good for them. But then again, this discussion was/is about plug-ins that are supposed to be as transparent as possible.
Do we want plugins as transparent as possible? (e.g. 32 vs. 64 vs. .... 1024 vs. ... bit)

Engineering doesn't work like that. The core of engineering is defining requirements to fulfill a certain set of use cases. Of course, stricter requirements come with higher costs (not just money, but also computation time and memory), and this 32 vs. 64 bit discussion is all about whether the market is willing to pay developers for the higher effort of introducing 64-bit precision all over the place. My impression is that currently it is not...

limitlesssss wrote: Fri Jul 19, 2024 2:02 pm • All plug-in developers should be transparent about the precision at which their plug-ins work internally and the input/output precision of said plug-ins.
I am not sure this information would give us added value. The rounding errors accumulate depending on how many and which operations you perform in your algorithms. Furthermore, you also have computational errors from interpolation algorithms, DSP optimization tricks, and approximation algorithms.

So to describe the "error" of a plugin, you would need to compare the final computed result with an ideal theoretically computed signal to get an overall error measurement of your plugin, which then describes its quality.
Last edited by SamDi on Wed Jul 24, 2024 4:24 am, edited 1 time in total.

SamDi wrote: Mon Jul 22, 2024 1:40 am
limitlesssss wrote: Fri Jul 19, 2024 2:02 pm • All plug-in developers should be transparent about the precision at which their plug-ins work internally and the input/output precision of said plug-ins.
I am not sure this information would give us added value. The rounding errors accumulate depending on how many and which operations you perform in your algorithms. Furthermore, you also have computational errors from interpolation algorithms, DSP optimization tricks, and approximation algorithms.

So to describe the "error" of a plugin, you would need to compare the final computed result with an ideal theoretically computed signal to get an overall error measurement of your plugin, which then describes its quality.
My point was: if I knew for a fact that all 12 plug-ins I'm using in a session were end-to-end 32-bit FP (meaning their input, internal processing, and output were all 32-bit FP rather than 64-bit FP), then I would not set my DAW's precision to 64-bit FP. I'd leave it at 32-bit FP, so there would be no need for repeated 32-bit FP to 64-bit FP conversions and vice versa.

My whole point is, to keep the signal as unaltered as possible. Unless it's on purpose obviously (which is a completely different discussion).

What I was looking for from the get-go, wasn't necessarily a higher precision, but avoiding avoidable conversions to/from different precisions.

And hence, I thought if plug-in manufacturers shared that info, it would be useful for at least some users. And this is just info that they already possess. All it takes for them is to share it.
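For what it's worth, only one direction of those conversions can lose information: widening 32-bit to 64-bit is always exact, while narrowing 64-bit to 32-bit rounds. A sketch using Python's struct module:

```python
import struct

def f32(x: float) -> float:
    """Round to the nearest 32-bit float (held in a Python double)."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Any 32-bit float is exactly representable in 64 bits, so a
# 32 -> 64 -> 32 round trip returns the original samples untouched:
samples = [f32(v / 7.0) for v in range(16)]
assert [f32(s) for s in samples] == samples

# The lossy direction is 64 -> 32: a genuine double gets rounded.
print(f32(0.1) == 0.1)   # False
```

So in a truly 32-bit chain, the conversions themselves are harmless; the concern only bites when 64-bit intermediate results get narrowed back down.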

limitlesssss wrote: Mon Jul 22, 2024 2:32 am And hence, I thought if plug-in manufacturers shared that info, it would be useful for at least some users. And this is just info that they already possess. All it takes for them is to share it.
General rules of thumb :

When not marketed as 64-bit, you can be sure it's 32-bit.

When using an instrument, you can be 99.9% sure that it's 32-bit.

I posted a long list of devs that aren't 64-bit, previously in this thread.

The only devs that are currently 64-bit compliant in my (huge) list of owned plugins :
Acustica Audio, Boz Digital, DMG, Flux, Hornet, RjStudios, Slate, Sonnox, Soundtheory, Steinberg (stock), TDR, Venn Audio, Voxengo.
Not all, but most of the plugins from these devs are 64-bit.

Btw: Not much has changed in the last 5 years. I've only seen a handful of products switch to 64-bit since I first checked.

*edit : added some more 64-bit devs...
Last edited by dionenoid on Mon Jul 22, 2024 8:14 am, edited 2 times in total.