## Does using Double instead of Float increase noise?

DSP, Plug-in and Host development discussion.
Nowhk
KVRian
775 posts since 2 Oct, 2013
Hi there,

I'm developing a plugin (using C++ and the IPlug framework) that reads a sample and (later) processes it with some effects.
Here's my ProcessDoubleReplacing, which calls the wavefile's reading function for each sample:

Code: Select all

``````void MyPlugin::ProcessDoubleReplacing(double **inputs, double **outputs, int nFrames) {
    double *outputLeft = outputs[0];
    double *outputRight = outputs[1];

    ...

    // process
    for (int i = 0; i < nFrames; i++) {
        // clear previous sample
        outputLeft[i] = 0.0;
        outputRight[i] = 0.0;

        ...

        pWavePlayer->Process(outputLeft[i], outputRight[i]);

        ...
    }
}
``````
I'm using linear interpolation for this purpose, so from what I understand it's better to use "double" instead of float:

Code: Select all

``````void WavePlayer::Process(double &outputLeft, double &outputRight) {
    double outLeft = 0.0;
    double outRight = 0.0;

    ...

    outLeft += (pWaveSamples[mPlaybackBase0] * (1 - mPlaybackFrac)
        + (mPlaybackIndex < mWaveNumSamples - 1 ? pWaveSamples[mPlaybackBase1] * mPlaybackFrac : 0))
        * pVoice->mNoteVelocity * pVoice->mEnvelope.mLevel * gainSmoothed;
    if (mWaveNumChannels == 2) {
        outRight += (pWaveSamples[mPlaybackBase0 + 1] * (1 - mPlaybackFrac)
            + (mPlaybackIndex < mWaveNumSamples - 1 ? pWaveSamples[mPlaybackBase1 + 1] * mPlaybackFrac : 0))
            * pVoice->mNoteVelocity * pVoice->mEnvelope.mLevel * gainSmoothed;
    }
    else {
        outRight = outLeft;
    }

    ...

    outputLeft += outLeft;
    outputRight += outRight;
}
``````
Later, I chain the free Freeverb implementation after the wave processing:

Code: Select all

``````pReverb->Process(outputLeft[i], outputRight[i]);
``````
It plays just fine, without any problem.

The problem is when I convert the Freeverb implementation from Float to Double: it becomes really (really) noisy.

It looks like increasing the precision increases the noise (i.e. lowers the SNR).
Why is this? Shouldn't it be the opposite? With more precision, I should lose less accuracy when I quantize later.

Does float actually sound better instead? If so, why does ProcessDoubleReplacing use "double"? Should I also interpolate/read the wavefile samples in float?

Maybe I'm missing a point...

Thanks to everybody who helps me figure this out.

keithwood
KVRist
58 posts since 24 Dec, 2015 from Bristol, UK
It doesn't make a lot of sense! If it's a crackling kind of noisy, it could be that the process call for some reason is taking too long when you switch to double. Reverb is quite CPU intensive. I could imagine compiling a 32-bit debug build that uses double precision would hammer the CPU and cause crackling as the process call might take too long.

Miles1981
KVRian
1355 posts since 26 Apr, 2004 from UK
We don't have your complete code, but I wouldn't be surprised if you are corrupting something somewhere. Doubles are always less noisy than floats.

Nowhk
KVRian
775 posts since 2 Oct, 2013
Miles1981 wrote:We don't have your complete code, but I wouldn't be surprised if you are corrupting something somewhere. Doubles are always less noisy than floats.
I took the reverb implementation from that link, added it to my plugin and tested it as-is: it sounds great.

Then I just replaced float with double, and it became noisy. That's all.

This is the code, still using float:

Code: Select all

``````void IReverb::Process(double &outputLeft, double &outputRight) {
    float outL, outR, input;

    outL = outR = 0.0f;
    input = (float)(outputLeft + outputRight) * gain;

    // Accumulate comb filters in parallel
    for (int i = 0; i < numcombs; i++) {
        outL += combL[i].Process(input);
        outR += combR[i].Process(input);
    }

    // Feed through allpasses in series
    for (int i = 0; i < numallpasses; i++) {
        outL = allpassL[i].Process(outL);
        outR = allpassR[i].Process(outR);
    }

    outputLeft = outL * wet1 + outR * wet2 + outputLeft * dry;
    outputRight = outR * wet1 + outL * wet2 + outputRight * dry;
}
``````
If I switch to double (changing the values in tuning.h and the comb/allpass filters too), it crackles...

jinxtigr
KVRian
783 posts since 7 Apr, 2007 from Bellows Falls, VT
When I look into the Freeverb code I find it very 'functiony' (everything's tiny chunks of code calling everything else) and I've always had trouble with that stuff.

But I did see a reference to denormals code.

If you change the floats to doubles and the denormalization code is left the same, you're going to get problems. Denormal handling is about looking at those floats (particularly in comb filters, IIRs, or anything else that feeds back on itself until it fades away to silence) and doing some trick with the float format so you can switch cleanly to a value of zero, typically by comparing your float against some clever magic number using bitwise masking (I could have that wrong, don't go by me, but it's not usually a math-based 'less than' comparison; it's a bitwise comparison that's presumably quicker and hackier). That matters because the comparison must be done very often.

The reason this is done is to stop the CPU from dropping into low gear and spending tens or hundreds of times the cycles to accurately represent 'almost zero'.

I'm guessing this is why it's cracking. You're doing this hacky denormals calculation, but on doubles instead of floats, and as a result while the denormal test is still happening at a low-ish level, it no longer has any relation to sensible numbers because the bitwise thing doesn't work on doubles. It would have to be changed so it was relevant to doubles.

Most likely if you remove the denormals part, it'll be clean again but much much slower…

Miles1981
KVRian
1355 posts since 26 Apr, 2004 from UK
I don't think there are any issues with denormal numbers anymore.

Nowhk
KVRian
775 posts since 2 Oct, 2013
jinxtigr wrote:I'm guessing this is why it's cracking. You're doing this hacky denormals calculation, but on doubles instead of floats, and as a result while the denormal test is still happening at a low-ish level, it no longer has any relation to sensible numbers because the bitwise thing doesn't work on doubles. It would have to be changed so it was relevant to doubles.

Most likely if you remove the denormals part, it'll be clean again but much much slower…
This is the denormal function I'm using (with both float and double):

Code: Select all

``````#define undenormalise(sample) if(((*(unsigned int*)&sample)&0x7f800000)==0) sample=0.0
``````
How could I test whether this is the cause or not? (Just as a test, even if the CPU goes crazy.)

Miles1981
KVRian
1355 posts since 26 Apr, 2004 from UK
That's definitely wrong for doubles (and may be wrong with some compilers as well), and it's definitely not proper C++.
Just remove the function altogether; it won't be that much slower (you may actually want to let the compiler make that decision for you, which is what scientific applications usually do).

BertKoor
KVRAF
10634 posts since 8 Mar, 2005 from Utrecht, Holland
Miles1981 wrote:I don't think there are any issues with denormal numbers anymore.
Don't be too sure about that. The last Pentium IV was produced in 2008, and might still be alive & kicking.
And even on Core2 & SandyBridge CPU's the issue has not vanished according to this blog:
https://randomascii.wordpress.com/2012/ ... dd-floats/
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is back online!!

Miles1981
KVRian
1355 posts since 26 Apr, 2004 from UK
BertKoor wrote:
Miles1981 wrote:I don't think there are any issues with denormal numbers anymore.
Don't be too sure about that. The last Pentium IV was produced in 2008, and might still be alive & kicking.
And even on Core2 & SandyBridge CPU's the issue has not vanished according to this blog:
https://randomascii.wordpress.com/2012/ ... dd-floats/
Are you fast mode or even using /Qftz on something like the Intel compiler?

mystran
KVRAF
4981 posts since 12 Feb, 2006 from Helsinki, Finland
BertKoor wrote:
Miles1981 wrote:I don't think there are any issues with denormal numbers anymore.
Don't be too sure about that. The last Pentium IV was produced in 2008, and might still be alive & kicking.
And even on Core2 & SandyBridge CPU's the issue has not vanished according to this blog:
https://randomascii.wordpress.com/2012/ ... dd-floats/
If you have a Pentium 4, an AMD64, or practically anything later, then you can use SSE2 for FPU math. If you do use SSE2 for FPU math (this is always the case for any 64-bit build, and it will almost always make 32-bit builds faster as well), then you can set the FTZ and DAZ bits in the SSE control word (as described in the blog post you linked) and all denormal problems magically go away.

With ICC the compiler will even try to do this for you (/Qftz is automatically part of the higher optimisation levels), and you might even find the odd plugin host doing it for you as well.. but it's not hard to do manually (eg. with a RAII object in process() that sets and restores the flags automatically), and once you do, you'll probably never have to think about denormals again.. and it works on a Pentium 4.
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

Nowhk
KVRian
775 posts since 2 Oct, 2013
Miles1981 wrote:Just remove the function altogether, it won't be that much slower
And I swap that function out with... what? How do I tell the algorithm to denormalise below, say, -60 dBFS? (That's why the function is there, right? To cut below a fixed SNR...)

Miles1981
KVRian
1355 posts since 26 Apr, 2004 from UK
Start without that call; then, if you find there is an issue, use mystran's trick.

JCJR
KVRAF
2341 posts since 17 Apr, 2005 from S.E. TN
Nowhk wrote:
Miles1981 wrote:Just remove the function altogether, it won't be that much slower
And I swap that function out with... what? How do I tell the algorithm to denormalise below, say, -60 dBFS? (That's why the function is there, right? To cut below a fixed SNR...)
Yeah, try without the function, plus mystran's advice on telling the compiler to tell the computer to ignore denormals. The code isn't there to cut below a fixed signal-to-noise ratio. When a float gets really tiny and the most significant bit of the mantissa is no longer an implicit 1, the number becomes denormal, and in the old days the computer would slow to a crawl trying to do accurate math on denormalled numbers.

Back then if you happened to be running a sequencer and load a plugin with denormal number problems, at some point, usually in silent parts of a track, suddenly the computer slows to a crawl. Audio stops. If the ASIO driver didn't just crash and blue-screen, you would hear an occasional blip of audio, and the program would be spending so much time calculating those denormal numbers in that bad plugin that it could take frustrating minutes to just make the sequencer stop so you can remove the plugin or reboot. Just sit there clicking the sequencer stop button but the computer is so busy calculating denormal numbers that it takes minutes to finally notice you are clicking on the stop button.

If you find you need denormal protection and mystran's advice doesn't work, you can google up a double equivalent of that reverb's float denormal function. It would be about the same code, except with a different masking constant, treating the double as a 64-bit int rather than treating a float as a 32-bit int.
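A hypothetical double equivalent along those lines might look like this (the function name is made up; memcpy is used instead of the original pointer cast, which violates strict aliasing):

```cpp
#include <cstdint>
#include <cstring>

// Double-precision counterpart of the float macro: examine all 64
// bits, and mask the 11-bit double exponent field (0x7ff0000000000000)
// instead of the 8-bit float one (0x7f800000). An all-zero exponent
// means the value is either zero or denormal, so flush it to 0.0.
inline void undenormalise_double(double &sample) {
    uint64_t bits;
    std::memcpy(&bits, &sample, sizeof bits);
    if ((bits & 0x7ff0000000000000ull) == 0)
        sample = 0.0;
}
```

With this, a subnormal like 1e-310 is flushed to exactly 0.0 while ordinary sample values pass through untouched.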

Miles1981
KVRian
1355 posts since 26 Apr, 2004 from UK
Ended up implementing mystran's trick in ATK (as a class to instantiate at the beginning of a process() call). I'll see one day whether this is better.