How does a dynamic EQ work?

DSP, Plugin and Host development discussion.

Post

Max's equation seems to work, though it would still be fun to see the derivation, either from the transfer function or from a difference equation.

What's strange, though, is that it doesn't seem to sound the same as the band-shelf I was using earlier. Or perhaps I'm imagining the difference in how it behaves; I haven't A/B tested it.
Both are implementations of this: http://www.musicdsp.org/files/Audio-EQ-Cookbook.txt

Post

Of course it works:
- (x + A(x)) / 2 is the part outside the bandwidth you want; it stays constant.
- (x - A(x)) / 2 is the part inside the bandwidth, so K is the gain factor.

If you want the derivation, just enter (1 + H(z)) / 2 + K/2 * (1 - H(z)) into any formal solver (Mathematica, for instance) and you will get the result. The result may differ from the RBJ version: with this formulation the denominator won't have any K factor, unlike the RBJ filters (unless I missed something again).
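
For the record, expanding that by hand gives (1 + H(z))/2 + K/2 * (1 - H(z)) = (1 + K)/2 + ((1 - K)/2) * H(z), so K only scales the allpass branch, which is why it never shows up in the denominator. Below is a minimal sketch of that structure as a Regalia-Mitra style peak section built on a second-order allpass; the coefficient formulas are the standard textbook ones, not the RBJ cookbook band-shelf, so the exact response may differ.

Code: Select all

#include <cmath>

// Sketch only: y = (1 + K)/2 * x + (1 - K)/2 * A(x), with A a second-order
// allpass tuned to the band centre (Regalia-Mitra style peak section).
// At the centre frequency A(x) ~ -x, so y ~ K * x; at DC and Nyquist
// A(x) ~ +x, so y ~ x.
struct PeakSection
{
    double c1 = 0.0, c2 = 0.0;  // allpass coefficients k1*(1 + k2) and k2
    double z1 = 0.0, z2 = 0.0;  // transposed direct form II state
    double K  = 1.0;            // linear gain at the band centre

    void set(double fcHz, double bwHz, double gainDb, double fsHz)
    {
        const double pi = 3.14159265358979323846;
        const double k1 = -std::cos(2.0 * pi * fcHz / fsHz);
        const double t  =  std::tan(pi * bwHz / fsHz);
        const double k2 = (1.0 - t) / (1.0 + t);
        c1 = k1 * (1.0 + k2);
        c2 = k2;
        K  = std::pow(10.0, gainDb / 20.0);
    }

    double process(double x)
    {
        // Allpass numerator (k2, k1(1+k2), 1), denominator (1, k1(1+k2), k2)
        const double ap = c2 * x + z1;
        z1 = c1 * x - c1 * ap + z2;
        z2 = x - c2 * ap;
        return 0.5 * (1.0 + K) * x + 0.5 * (1.0 - K) * ap;
    }
};
For the dynamic case, K would be driven by the envelope per sample instead of being fixed.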

Post

OTT, but I decided to make a slew-rate envelope compressor. If you have Reaktor you can test it out.
http://www.kvraudio.com/forum/viewtopic ... 6&t=445156


Post

JCJR wrote:Convert the samples to something like dB. You wouldn't have to multiply the log10 by 20, and a natural log or log2 would do just as well, just with different scaling.

Do linear smoothing on the log space samples. One could convert the finished envelope back to non-log scaling if the algorithm requires it. For feedforward designs, maybe it would be most convenient to leave the envelope in log space for doing the gain calculations.
Captain Obvious yet again reporting for duty-- I was thinking more on it. For instance, a method that does the following:

Square the samples.
Convert the squared samples to log.
Calculate running average on the log squared instantaneous values.
Antilog the smoothed envelope.
Square root the smoothed envelope.

Doh, taking the mean of log values and then taking the antilog gives the geometric mean rather than the arithmetic mean! Adding logs is the same as multiplying the original non-log values.

So the envelope would not be Root Mean Square. The envelope would be Root Geometric Mean Square?

https://en.wikipedia.org/wiki/Geometric_mean
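
A minimal block-based sketch of the two envelope flavours, just to make the difference concrete (function names are made up, and a real detector would use a running or one-pole average instead of a whole block):

Code: Select all

#include <cmath>
#include <vector>

// RMS: square root of the arithmetic mean of the squared samples.
double rmsEnvelope(const std::vector<double>& x)
{
    double acc = 0.0;
    for (double v : x)
        acc += v * v;
    return std::sqrt(acc / x.size());
}

// "RGMS": square, take logs, average the logs, antilog, square root.
// Averaging logs gives the geometric mean of the squared samples,
// so this is no larger than the RMS value (ignoring the tiny floor).
double rgmsEnvelope(const std::vector<double>& x, double tiny = 1e-12)
{
    double acc = 0.0;
    for (double v : x)
        acc += std::log(v * v + tiny);  // tiny avoids log(0) on silence
    return std::sqrt(std::exp(acc / x.size()));
}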

RGMS ought to be a lower value than RMS. Though it ought to work about the same in a compressor, just at a different threshold?

Supposedly a geometric mean is less sensitive than an arithmetic mean to extremely large or small values in a sample set. Dunno if that would offer an advantage for dynamics processing. Maybe sometimes.

Perhaps an RGMS would have a lower level of ripple than an RMS, if both use the same amount of smoothing?

Other types of smoothing such as IIR, if applied to log values, ought to also be more similar to geometric mean rather than arithmetic mean?

Does anyone know whether this geometric mean control envelope might be fairly common among extant compressors?

Post

Agreed, RGMS is lower than RMS. Not sure it's actually better, but yes, it can limit the effect of outliers, since RMS can be skewed by them. If you don't have outliers, though, I feel the geometric mean is itself skewed toward lower values.

Post

I'm having some problems with my dynamic EQ implementation when chaining (cascading) two such processors.

The way I do it is: take a copy of the signal, allpass filter it, create an envelope from it, and then mix it back with the original with some gain factor.

Doing two of these in a chain (cascaded) produces some weird effects (there's essentially no effect with some parameter settings and some weird filtering with others), not the "two times dynamic EQ" I expect. I'm wondering: can two such processes in sequence cancel each other?

Is this supposed to work?

Post

Fluky wrote:I'm having some problems with my dynamic EQ implementation when chaining (cascading) two such processors. [...] Is this supposed to work?
Any help?

Post

If you are using allpass mixed with the original signal to get the boost/cut, do you "freeze" the effect of each dynamic band, by mixing with the original, before processing the next band?

IOW--
Band 1: Allpass original signal, mix with original signal to "freeze" the boost/cut
Band 2: Allpass the previous band output, mix with previous band output to "freeze" the second boost/cut
etc.

IMO with multiband digital EQ, it works most predictably to run each band in series.

If you allpass several bands before mixing with the original signal, or run digital filters in parallel, the phase shifts of the bands interact with each other, giving unpredictable results. Running in series, there is still a phase change from each stage, but the amplitude response of each stage is "frozen", so only the amplitude responses of the stages can interact.

In analog it is common to run bandpass filters in parallel, but it is also common for analog multi-band EQs to have "interactive" bands because of phase-shift oddities between bands.

There are probably smarter ways to do it that I don't know, but running each stage in series is a good way to isolate amplitude changes from inter-band phase interaction.

But if you have two bands fairly close together, they will interact due to amplitude changes alone, even if you get rid of the inter-band phase interactions.
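
A minimal sketch of that series structure, reusing the allpass/mix decomposition from earlier in the thread; the Band type and its allpass member are placeholders, not anyone's actual code:

Code: Select all

#include <functional>
#include <vector>

struct Band
{
    double K;                              // linear gain for this band (could be envelope-driven)
    std::function<double(double)> allpass; // second-order allpass centred on the band
};

// Each band's boost/cut is "frozen" into the buffer before the next band
// runs, so only amplitude responses interact between bands, not phase.
void processSeries(std::vector<double>& buf, std::vector<Band>& bands)
{
    for (auto& band : bands)
        for (double& x : buf)
        {
            const double ap = band.allpass(x);
            x = 0.5 * (1.0 + band.K) * x + 0.5 * (1.0 - band.K) * ap;
        }
}
The key point versus the parallel version is that the mix for band N uses band N-1's output on both branches.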


Post

What I'm doing in pseudocode is:

Code: Select all

Process(inputs, outputs, nFrames) {
     in1 = inputs[0]; in2 = inputs[1];
     scin1 = inputs[2]; scin2 = inputs[3];

     for every s of nFrames: // Use outputs as temporary storage
         out1[s] = in1[s];
         out2[s] = in2[s];

     for every processor i:

         processors.at(i)->filt->setFreq(GetFreqByBand(i));
         processors.at(i)->filt->process(nFrames, outputs); 
        // Now outputs[0] and outputs[1] contain allpass filtered input

         // Create an envelope from the allpassed signal
         processors.at(i)->setInput(outputs); // out1 and out2
         processors.at(i)->setOutput(inputs[2]); // inputs[2] and inputs[3], use scin as temporary storage
         processors.at(i)->process(nFrames);
        // Now inputs[2] and inputs[3] contain the allpass filtered input, which has also been enveloped

       K = 10^(GetBandGainInDB(i)/20);
       for every f of nFrames:
           // Max's equation for combining these (with amplitude modulation)
           out1[f] = in1[f] * (1 + inputs[2][f] * K) / 2 + out1[f] * (1 - inputs[2][f] * K) / 2;
           out2[f] = in2[f] * (1 + inputs[3][f] * K) / 2 + out2[f] * (1 - inputs[3][f] * K) / 2;
}
This produces the desired effect with one processor, but odd behaviour with two or more processors.

Post

You must have a mistake in the code; it looks good from here (K should be outside the inner loop, of course).

Post

I can replicate the problem without the enveloping code, which makes it even clearer that it should work:

Code: Select all

Process(inputs, outputs, nFrames) {
     in1 = inputs[0]; in2 = inputs[1];
     scin1 = inputs[2]; scin2 = inputs[3];

     for every s of nFrames: // Use outputs as temporary storage
         out1[s] = in1[s];
         out2[s] = in2[s];

     for every processor i:

         processors.at(i)->filt->setFreq(GetFreqByBand(i));
         processors.at(i)->filt->process(nFrames, outputs); 
        // Now outputs[0] and outputs[1] contain allpass filtered input

       K = 10^(GetBandGainInDB(i)/20);
       for every f of nFrames:
           // Max's equation for combining these
           out1[f] = in1[f] * (1 + K) / 2 + out1[f] * (1 - K) / 2;
           out2[f] = in2[f] * (1 + K) / 2 + out2[f] * (1 - K) / 2;
}
When another processor is added, it sounds like the first one is "shut off": its effect can no longer be heard, while the second one now works like the first one did. When adjusting the controls, both processors seem to have an effect, but the first one produces very odd results (not what it should).

My code is exactly like the above, but I'm wondering whether the filter library (DSPFilters) is causing some problem when instantiated multiple times.

Post

OK, here is what specifically happens when a second processor is added:

With one processor, everything works as desired.

After adding another processor,

The gain parameter of the first one flips "backwards": its range is inverted, so 0 corresponds to maximum cut and 1 corresponds to no cut, whereas the second processor has 0 = no cut and 1 = maximum cut.

Can this be some (phase?) issue with the loop above?

Post

What you're using there is linear interpolation, and you could wrap it in a function to make it look neater.

Code: Select all

template <typename T, typename F>
T lerp(T x, T y, F z) { return x * (F(1) - z) + y * z; }
Now you can use this function on any compatible type, like a vector (complex, 3D, etc.), filter coefficients, and so on.

All you need to do is call:

Code: Select all

out1[f] = lerp(in1[f], out1[f], (1 - K) / 2); // same mix as the (1 + K)/2 and (1 - K)/2 weights above
