Does using Double instead of Float increase noise?

DSP, Plugin and Host development discussion.

Post

You could clamp values below some minimum size to zero (or to some other value) for purposes other than denormal avoidance.

Often such clamping would use a much bigger threshold than denormal avoidance needs. If you tried to use denormal flushing as a noise gate it wouldn't work very well, because noise in typical music which originated in the real world will most likely be at least -80 dB even for an excellently recorded track, and probably the noise would be quite a bit bigger than -80 dB. That is just the noise added by every transistor in the path from microphone thru the ADC before the signal finally becomes digital. Denormal flushing happens hundreds of dB down, so it wouldn't touch real-world noise levels.

You could do a crude noise gate simply by zeroing small samples but it would probably sound awful, depending on how you do it.

For instance, suppose you have a float signal that came from 16-bit audio, with about 96 dB of dynamic range. If you simply zero out any sample value less than -84 dB, it would be a demented bass-ackwards way to turn the audio into the equivalent of a 14-bit sample depth. A slightly slicker way would be to measure a long-term envelope of the audio level, and if the envelope is less than -84 dB (or whatever the threshold is), set the STRING OF SAMPLES to zero until the envelope rises above the threshold, as in the sketch below. In some cases that might not sound completely awful. But a decent noise gate needs to smoothly fade down to 0 below the threshold, and smoothly fade back up above the threshold.
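For what it's worth, here is a minimal sketch of that envelope idea, with nothing smarter than a one-pole follower and a hard on/off decision; the function name, the smoothing coefficient and the -84 dB threshold are all just illustrative choices, not anything from a real gate:

Code: Select all

#include <cmath>

// Crude sketch only: a one-pole envelope follower that zeroes whole
// stretches of samples while the long-term level sits below a threshold.
void crudeGate(float* buf, int n, float& env)
{
    const float envCoeff  = 0.999f;    // per-sample envelope smoothing
    const float threshold = 6.3e-5f;   // roughly -84 dBFS

    for (int i = 0; i < n; ++i)
    {
        // track the long-term level of the audio
        env = envCoeff * env + (1.0f - envCoeff) * std::fabs(buf[i]);

        // zero the string of samples while the envelope is below threshold
        if (env < threshold)
            buf[i] = 0.0f;
    }
}

A decent gate would replace the hard zeroing with a gain value that fades smoothly toward 0 and back up, exactly as described above.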

Audio will usually get dithered somewhere along the way. Dithering can encode audio below the level of the least significant bit of the integer sample. In other words, it would be desirable to gate on a threshold smaller than -96 dB for 16-bit audio, or smaller than -144 dB for 24-bit audio. If you just chop to zero at -96 or -144 dB you might be chopping off useful information. The ear would have a hard time hearing signals that soft, but big loud signals pass thru zero on the way between positive and negative peaks. So if you zero out tiny values in an audio stream and your zero threshold is too big, you could be introducing zero-crossing distortion to the signal. Zero-crossing distortion can sound rather nasty even in tiny amounts. https://en.wikipedia.org/wiki/Crossover_distortion

Post

You may sometimes need to clamp values which are part of your processing, rather than the audio samples themselves.

For instance, without hardware denormal flushing, you could get in trouble with rather simple things. A very simple example--

You have a first-order IIR filter that is doing an amplitude fade-in from silence (0) to full volume (1). The output of the filter is multiplied by the samples to make the amplitude fade-in.

If done like this per sample, FadeVal initially set to 0.0--
FadeVal = FadeVal + (1.0 - FadeVal) * FadeSpeedCoff;
OutSample = InSample * FadeVal;

Unless the fade speed is VERY VERY slow, the first iteration will probably transition FadeVal from 0 up to some small value bigger than a denormal, so there's little risk at the beginning of the fade.

But when FadeVal gets VERY close to 1.0 near the end of the fade, the temporary term (1.0 - FadeVal) can and will go denormal if you just keep doing it forever without either checking for denormals or using a new-fangled way of telling the CPU to flush denormals.

So this denormal doesn't happen in the audio samples, and FadeVal itself is not a tiny number; the trouble starts as FadeVal gets "too close" to its target of 1.0. The denormal happens in part of the control-signal temporary calculation. Adding low-level bias noise to the samples wouldn't prevent this kind of plugin (or application) denormal, because the denormal doesn't directly involve the sample values.

In cases like this, it is a question of what is fastest or most efficient. Nowadays branches can take more time than doing useless math, so maybe it would run faster (with hardware denormal flushing) just to keep calculating that silly fade "forever". Alternatively, you could use if/then cases to turn the silly thang off when FadeVal gets acceptably close to 1.0:

Code: Select all

if (FadeVal < 1.0)
{
  FadeDiff = 1.0 - FadeVal;
  if (FadeDiff < 0.001)    // close enough: snap to the target and stop updating
    FadeVal = 1.0;
  else                     // otherwise keep easing toward 1.0
    FadeVal += (FadeDiff * FadeSpeedCoff);
}

Post

Nowhk wrote:
Miles1981 wrote:You mean decrease in SNR? If the noise floor increases, then SNR diminishes.
Oh yeah, my fault! Sorry!
Miles1981 wrote:I don't think that they add that much noise. We are talking about values that are small for floats, ridiculously small for doubles compared to the range of values audio has.
[...]
Or maybe you are just saying that the signal between the last normals and the last denormals (1.18 × 10^-38 to 1.4 × 10^-45) is nothing compared to the signal between a fixed (usual) threshold/bit depth (-120 dBFS) and the last normal (1.0 × 10^-6 to 1.18 × 10^-38).
Yes, exactly! Adding a small signal at 1e-6 will most definitely degrade your SNR, but that's because it's way too large. You could add a much smaller number (say 1e-20 or something) and it would still work and it would have significantly less effect on SNR, but even this would still be worse than just letting the CPU flush the denormals to zero.

In other news, it'll cost you a non-trivial amount of CPU to add and remove those small numbers, whereas the CPU flag is essentially free.

Anyway, the point is: it's not the denormals (or lack of denormals) that is causing your SNR to degrade, it's the heavy-handed method for getting rid of them. Use a better method, let the CPU do the work for you!

Post

mystran wrote: Yes, exactly! Adding a small signal at 1e-6 will most definitely degrade your SNR, but that's because it's way too large. You could add a much smaller number (say 1e-20 or something) and it would still work and it would have significantly less effect on SNR, but even this would still be worse than just letting the CPU flush the denormals to zero.

In other news, it'll cost you a non-trivial amount of CPU to add and remove those small numbers, whereas the CPU flag is essentially free.

Anyway, the point is: it's not the denormals (or lack of denormals) that is causing your SNR to degrade, it's the heavy-handed method for getting rid of them. Use a better method, let the CPU do the work for you!
SNR to degrade? I'm not talking about degrading SNR :O

Well... maybe I'm expressing this badly. Let me restart...

If I limit the bit depth at the "bottom" (from 0 to -120 dBFS for example, instead of letting it go down to -385), I already limit the possibility of signals that I won't hear (noise) to "exist". So theoretically this already will improve the "quality" of my signal. Isn't this true? Am I saying bullshit?

Post

Nowhk wrote:
If I limit the bit depth at the "bottom" (from 0 to -120 dBFS for example, instead of letting it go down to -385), I already limit the possibility of signals that I won't hear (noise) to "exist". So theoretically this already will improve the "quality" of my signal. Isn't this true? Am I saying bullshit?
The problem, if you heavy-handedly chop to zero at too large a value-- Big signals you want to hear pass thru zero every time the waveform switches from plus to minus values or vice-versa. Setting the chop-to-zero threshold too large would introduce zero-crossing distortion on all signals, and it wouldn't get rid of noise.

Low-level noise doesn't just happen when sample numbers are tiny. For instance, if you happen to have steady broadband noise at -60 dB peak, then in silent parts of the track you would have random numbers in the samples ranging from about -0.001 up to about +0.001.

If a loud -6 dB peak sine wave kicks in, that random noise peaking between -0.001 and +0.001 gets added to the peak of the sine wave and it gets added to the mid-values of the sine wave. The noise is super-imposed over the entire waveform.

So if you try to just chop to zero all samples below -60 dB, all samples below abs(0.001)-- It wouldn't eliminate the noise except near the sine wave zero crossings, and it would also introduce zero-crossing distortion. The sine would be just as noisy as before the chopping, plus you would have added some zero-crossing distortion and probably some audible aliasing caused by the sudden chopping near the zero crossings.

I think any "chop to zero" threshold should also be rather low to accommodate further processing on the audio after your process has finished. Nowadays many people "superstitiously" record at rather low levels, sometimes intentionally peaking no higher than -10 or even -20 dBFS. If the peaks of a guitar or vocal track are only hitting -10 or -20, then the quietest notes in the uncompressed freshly recorded track may be lots quieter than that.

So if your process chops everything below -120 dB to zero, and then some later process raises gain by 30 dB or more to squash the snot out of the signal, flatline it for a record release or "hot vocal" in a mix or whatever-- After the next plugin has added all that gain, your chopping-to-zero will be showing up at -90 or -80 dB or whatever.

If it is a quietly recorded guitar part and the user runs it thru your program before hitting a saturation plugin-- The saturation plugin might raise gain lots more than a compressor-limiter combination would.

Post

FWIW: If you are using IPlug, then you could also #include "WDL/denormal.h" and do something like denormal_fix(&x) or y = denormal_filter(x). Note that this does the same as what you originally had, but implemented for both float and double.
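Taking just those two calls from the post above, a hedged usage sketch might look like this (the loop and the function wrapping it are mine, only for illustration):

Code: Select all

#include "WDL/denormal.h"

// Illustrative only: apply the WDL helpers named above across a buffer.
void scrubDenormals(double* buf, int n)
{
    for (int i = 0; i < n; ++i)
    {
        denormal_fix(&buf[i]);                  // flush in place if buf[i] is denormal
        // ...or, equivalently: buf[i] = denormal_filter(buf[i]);
    }
}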

Post

Maybe CPU flushing is the best. To manually kill denormals we could do something like

Code: Select all

value += 1e-18;
value -= 1e-18;

Post

JCJR wrote:The problem, if you heavy-handedly chop to zero at too large a value-- Big signals you want to hear pass thru zero every time the waveform switches from plus to minus values or vice-versa. Setting the chop-to-zero threshold too large would introduce zero-crossing distortion on all signals, and it wouldn't get rid of noise.
You're right, I didn't think of this! Thanks! So yeah, it's better to stay within the bit depth and use all of it (the higher it is, the lower the zero-crossing distortion will be).
JCJR wrote:Low-level noise doesn't just happen when sample numbers are tiny. For instance, if you happen to have steady broadband noise at -60 dB peak, then in silent parts of the track you would have random numbers in the samples ranging from about -0.001 up to about +0.001.

If a loud -6 dB peak sine wave kicks in, that random noise peaking between -0.001 and +0.001 gets added to the peak of the sine wave and it gets added to the mid-values of the sine wave. The noise is super-imposed over the entire waveform.
Yes. But if I have broadband noise at -60 dB (which is quite imperceptible), once I add it to a sine, is the difference between the "real" sine wave and the one with noise noticeable? I don't think so. Maybe for a compressor that will "act" some 0.001 earlier... or what do you mean?
JCJR wrote:After the next plugin has added all that gain, your chopping-to-zero will be showing up at -90 or -80 dB or whatever.
Yes, true again! This would increase the zero-crossing distortion. OK, got it: chopping to zero (except for denormals) is bullshit and must be avoided, of course!
Tale wrote:FWIW: If you are using IPlug, then you could also #include "WDL/denormal.h" and do something like denormal_fix(&x) or y = denormal_filter(x). Note that this does the same as what you originally had, but implemented for both float and double.
Hi Tale :) Didn't see those functions were already there! Nice... I can use them directly from the framework :wink:
matt42 wrote:Maybe CPU flushing is the best
The only problem, I think, is that you need to do it (tell the CPU) for every kind of CPU/architecture you target. Doing it in code is at the software level, not hardware. And remember to disable it after you've finished your processing (tinkering with this processor register can be destabilizing to math code in a library that was tested with normal settings).

Post

In that case I'd recommend the approach I gave above: you can flush denormals to zero without conditionals, in just two operations.

Post

Nowhk wrote:And remember to disable it after you've finished your processing (tinkering with this processor register can be destabilizing to math code in a library that was tested with normal settings).
While the concern about math libraries is legit, in my experience they usually don't get angry about flushing denormals (and indeed this is a standard compiler flag in ICC). Some of them don't really like changing rounding modes though.

As for remembering to disable it afterwards... that's like the easiest thing ever: you make a standard RAII object that stores the old control word in its constructor and sets whatever flags you want, then restores the old control word in its destructor. Then it's just a matter of constructing such an object on the stack and it'll clean up automatically when it goes out of scope.
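For example, a minimal sketch of such a RAII object, assuming an x86/x64 target with SSE; the class name is made up here, and the bit masks are the usual MXCSR ones (FTZ = bit 15 = 0x8000, DAZ = bit 6 = 0x0040):

Code: Select all

#include <xmmintrin.h>  // _mm_getcsr / _mm_setcsr

class ScopedFlushDenormals
{
public:
    ScopedFlushDenormals() : savedCsr(_mm_getcsr())   // store the old control word
    {
        _mm_setcsr(savedCsr | 0x8040);                // set FTZ + DAZ for this scope
    }
    ~ScopedFlushDenormals()
    {
        _mm_setcsr(savedCsr);                         // restore whatever existed before
    }
private:
    unsigned int savedCsr;
};

// Usage: construct one on the stack at the top of the audio callback;
// the destructor cleans up automatically when it goes out of scope.
// void processBlock(float* buf, int n)
// {
//     ScopedFlushDenormals guard;
//     /* ... DSP ... */
// }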

Post

mystran wrote:you make a standard RAII object that stores the old control word in its constructor and sets whatever flags you want, then restores the old control word in its destructor. Then it's just a matter of constructing such an object on the stack and it'll clean up automatically when it goes out of scope.
Basically (to see if I understand correctly), when you create the object that needs to avoid denormals, you call this (which tells the floating-point environment to ignore denormals, i.e. for both float and double, since it actually sets the DAZ and FTZ flags so the CPU will ignore denormal numbers):

Code: Select all

_controlfp_s( NULL, _DN_FLUSH, _MCW_DN );
And then, when you destroy that object, you tell it not to ignore denormals anymore:

Code: Select all

_controlfp_s( NULL, _DN_SAVE, _MCW_DN );
Is it correct?

Post

Nowhk wrote:

Code: Select all

_controlfp_s( NULL, _DN_SAVE, _MCW_DN );
Is it correct?
NO!!

You want to take the existing state in the constructor, store it somewhere, and then restore whatever state existed before the object was constructed. Then if the host already disabled them, or if you nest those objects, or whatever, things will always stay correct.

Also I would forget about the whole _controlfp_s and just use _mm_getcsr/_mm_setcsr intrinsics to just nudge the right bits in SSE control word directly... since you'll probably need to do that on non-Windows platforms anyway.

Post

matt42 wrote:Maybe CPU flushing is the best. To manually kill denormals we could do something like

Code: Select all

value += 1e-18;
value -= 1e-18;
Code like this might be optimized away. ;)
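One common variation that cannot legally be dropped by the optimizer (under default, non-fast-math floating-point settings) is to inject a tiny alternating offset into the feedback path itself, so the extra arithmetic actually changes the result. This is just a hedged sketch of that idea, not code from this thread; the filter and all names are illustrative:

Code: Select all

// One-pole lowpass whose state is kept out of the denormal range by a
// tiny alternating offset (about -360 dBFS, far below anything audible).
void onePoleLowpass(float* buf, int n, float& z1, float coeff)
{
    float antiDenormal = 1e-18f;

    for (int i = 0; i < n; ++i)
    {
        antiDenormal = -antiDenormal;                        // alternate sign so no DC builds up
        z1 = buf[i] + coeff * (z1 - buf[i]) + antiDenormal;  // feedback state never decays into denormals
        buf[i] = z1;
    }
}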

Post

mystran wrote: You want to take the existing state in constructor and store it somewhere and then you restore whatever state existed before the object was constructed. Then if the host already disabled them, or if you nest those objects, or whatever, things will always stay correct.
Oh yeah, of course, keeping the previous flags ;)
P.S. Just out of curiosity: what if the CPU is running two processes together and one requires denormals and the other doesn't?
mystran wrote: Also I would forget about the whole _controlfp_s and just use _mm_getcsr/_mm_setcsr intrinsics to just nudge the right bits in SSE control word directly... since you'll probably need to do that on non-Windows platforms anyway.
They are still "Microsoft-specific" :o

Anyway, I think I'll go with Tale's suggestion for the moment, keeping the flush at the "software level" (implicitly multi-platform). If I encounter performance problems I'll switch to the "hardware" flush in the future. Thanks!

Post

Nowhk wrote:
mystran wrote: Also I would forget about the whole _controlfp_s and just use _mm_getcsr/_mm_setcsr intrinsics to just nudge the right bits in SSE control word directly... since you'll probably need to do that on non-Windows platforms anyway.
They are still "Microsoft-specific" :o
No, the intrinsics are "Intel CPU specific" and they work on any platform (with any major compiler) as long as you are targeting either x86 or x64.
