Feedback compression misunderstanding

DSP, Plugin and Host development discussion.

Post

Hello,
first I need to say that, of course, I've already read plenty of articles about feedback compression.
And I know all those explanations that in feed-forward compression the detector (envelope follower) is placed before the compression.
By contrast, the feedback compressor has the detector after the compression.
I've also seen many circuits that show it graphically.

Basically, it looks like in such a circuit the signal goes like this:
INPUT -> COMPRESSION -> DETECTOR -> COMPRESSION -> OUTPUT

But that makes no sense to me.

As I understand it, the feedback compression idea is that we use the ALREADY COMPRESSED signal as the side chain of the compressor.

Then the main question is: HOW DO WE GET THAT "ALREADY COMPRESSED" SIGNAL IF WE DON'T HAVE ANY DETECTOR BEFORE IT?
How can the compressor calculate the gain reduction if it has no envelope follower in front of it?


So in my opinion, for this to make sense, there should be two detectors: one before the compressor and one after it.

So the signal should go like this:
INPUT -> DETECTOR_1 -> COMPRESSION -> DETECTOR_2 -> COMPRESSION -> OUTPUT


Could anyone help me understand how it should work?

Many thanks in advance for any help.

Best Regards

Post

Hi, there should only be one compressor and detector in the circuit.

[Image: feedback compressor flow diagram]

Initially (assuming an input occurs after a long period of silence) there is no gain reduction applied to the input until the detector reacts to the output and feeds back the gain reduction signal. In an analogue circuit this reaction is nearly instantaneous. In a naive digital implementation there will be one sample that is uncompressed before the detector can feed back the gain reduction signal.
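To make the order of operations concrete, here is a minimal sketch of such a naive digital feedback compressor. Everything in it (the one-pole envelope follower, the hard-knee gain computer, the threshold/ratio/smoothing numbers) is a placeholder choice of mine, not something prescribed by the diagram; the only point is that the detector runs on the output, so the gain it computes acts one sample late:

```cpp
#include <cmath>

// Naive digital feedback compressor: detect on the output, apply one sample late.
struct FeedbackCompressor
{
    float gr  = 1.0f; // gain reduction from the previous sample (linear, 1 = none)
    float env = 0.0f; // envelope follower state

    float processSample(float in)
    {
        // 1) Apply the gain that was computed from the *previous* output sample.
        float out = in * gr;

        // 2) Detect on the compressed output; this is what makes it "feedback".
        const float attack = 0.99f, release = 0.999f; // per-sample smoothing (placeholders)
        float rect = std::fabs(out);
        env = (rect > env) ? attack  * env + (1.0f - attack)  * rect
                           : release * env + (1.0f - release) * rect;

        // 3) Compute a new gain reduction; it will only act on the *next* sample.
        const float threshold = 0.5f, ratio = 4.0f; // placeholders
        gr = (env > threshold) ? std::pow(env / threshold, 1.0f / ratio - 1.0f)
                               : 1.0f;

        return out; // hence one uncompressed sample at the start of a burst
    }
};
```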

Post

Hey matt42,
many thanks for your answer and explanation. It looks like I almost understand it.
I am not sure what you mean by "In a naive digital implementation there will be one sample that is uncompressed before the detector can feed back the gain reduction signal".

Do you mean that exactly the first sample will be uncompressed?
Or do you mean that there will be a latency of one sample?
Because your explanation can still be understood in two ways.

Let me write it step by step for the first two samples.
Let's say the gain reduction (GR) is a multiplier between 0 and 1, and a compressed sample is just the multiplication GR * sample. So GR = 1 means there is no compression.
So the steps are:
1) the first untouched sample1 goes into the compressor;
2) sample1 is multiplied by GR. But no gain reduction has been calculated yet, so GR = 1, and sample1 leaves the compressor still untouched;
3) sample1 goes to the detector, which calculates GR. Now GR = 0.8 (for example).
4) sample1 goes to the compressor again. Now GR is smaller than 1, so sample1 goes to the output compressed. (So why did you say the first sample is uncompressed???)
5) Now the second untouched sample2 goes into the compressor. A GR has already been calculated for the previous sample1, so sample2 comes out compressed.
6) The already compressed sample2 goes to the detector, which calculates a new GR.
7) sample2 goes to the compressor, where it is multiplied by the new GR.
8) Now sample2 is doubly compressed and goes to the output.

So where is the mistake in my understanding?

Many thanks in advance for any more help :)

Post

pajczur wrote: Thu Nov 26, 2020 4:31 pm 1) the first untouched sample1 goes into the compressor;
2) sample1 is multiplied by GR. But no gain reduction has been calculated yet, so GR = 1, and sample1 leaves the compressor still untouched;
3) sample1 goes to the detector, which calculates GR. Now GR = 0.8 (for example).
4) sample1 goes to the compressor again. Now GR is smaller than 1, so sample1 goes to the output compressed. (So why did you say the first sample is uncompressed???)
By re-inputting sample1 (step 4) you have implemented a strategy to get around the 1-sample latency of a direct digital implementation of the flow diagram. This still won't be the "correct" value, and it may take a number of iterations to converge on an accurate value.

Let's say we iterated a million times, feeding the input back in each time (completely unnecessarily, of course). It wouldn't mean there are a million compressors in the schematic, just that we did a tonne of iterations to arrive at the correct result of a single compressor.
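As a rough illustration of that iteration idea, here is a hypothetical per-sample fixed-point loop. The computeGain() helper is a stand-in I made up (a stateless hard-knee curve), not anything from this thread; a real detector has state that would need to be snapshotted and restored between trial iterations:

```cpp
#include <cmath>

// Placeholder detector + gain computer (hypothetical, stateless for brevity):
// hard knee at 0.5, ratio 4:1, linear domain.
static float computeGain(float out)
{
    float level = std::fabs(out);
    return (level > 0.5f) ? std::pow(level / 0.5f, 1.0f / 4.0f - 1.0f) : 1.0f;
}

// Refine the feedback gain for the *current* sample by re-running the
// compress -> detect loop until the gain stops changing (fixed-point iteration).
float processSampleIterative(float in, float& gr)
{
    for (int i = 0; i < 8; ++i)               // a few iterations are usually plenty
    {
        float out   = in * gr;                // compress with the current gain estimate
        float newGr = computeGain(out);       // detect on that compressed output
        bool  done  = std::fabs(newGr - gr) < 1.0e-6f;
        gr = newGr;
        if (done)                             // converged on a self-consistent gain
            break;
    }
    return in * gr;
}
```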

Post

It looks like it's still too hard for me to understand.
What can I do? But many thanks for your reply.

Post

matt42 wrote: Thu Nov 26, 2020 1:55 pm In a naive digital implementation there will be one sample that is uncompressed before the detector can feed back the gain reduction signal.
The "one uncompressed sample" as such is largely irrelevant, since compressors (or even limiters) practical never react instantly anyway (and often attack is intentionally increased in order to make it even less so). Even with a slow attack such as 1ms the single samples doesn't really make that much of a difference.

The bigger issue with the "one sample delay" in a feedback loop is that it changes the loop dynamics. In general, adding additional delay to any control loop makes the loop less stable, at least unless the time constants are made longer to compensate. Now, "less stable" doesn't necessarily mean that it's going to blow up, but the behaviour is likely to become slightly more erratic for fast attack/release time constants (whereas with sufficiently long time constants it's essentially irrelevant), and as a result it will likely sound somewhat different from an "instantly" acting analog compressor.
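A quick toy experiment (code and numbers entirely mine, not from this thread) makes the effect visible: drive the loop with a constant full-scale input and remove all smoothing, and the one-sample-delayed gain alternates between over- and under-correcting on its way to the correct value. A smoothed detector with longer time constants damps this ringing:

```cpp
#include <cmath>
#include <cstdio>

// Static hard-knee gain computer (placeholder curve, linear domain).
static float instantGain(float level, float threshold, float ratio)
{
    return (level > threshold) ? std::pow(level / threshold, 1.0f / ratio - 1.0f)
                               : 1.0f;
}

int main()
{
    float gr = 1.0f;
    for (int n = 0; n < 8; ++n)
    {
        float out = 1.0f * gr;               // constant full-scale input, feedback gain
        gr = instantGain(out, 0.25f, 10.0f); // no attack/release smoothing at all
        std::printf("n=%d out=%.3f gr=%.3f\n", n, out, gr);
    }
    // The gain ping-pongs (0.29, 0.88, 0.32, 0.80, ...) on its way to the
    // fixed point near 0.52; a smoothed detector would walk there gradually.
}
```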

Post

Hello,
many thanks for all your answers.
Unfortunately I still don't understand the logic of feedback compression. The problem is somewhere in my head.

But recently I came up with one more explanation of feedback compression. Please just tell me if the logic is correct (I am asking about the logic, not the procedure).

- I imagine that in feedback compression I need to copy the original signal.
- Then for that copied signal I calculate a gain reduction and apply it to that copied signal.
- Then for that compressed copy of the signal I again calculate a new gain reduction and apply it to my original (input) signal.
- That's all.

Is that the logic of feedback compression or not?

Post

pajczur wrote: Fri Nov 27, 2020 2:52 pm - I imagine that in feedback compression I need to copy the original signal.
- Then for that copied signal I calculate a gain reduction and apply it to that copied signal.
- Then for that compressed copy of the signal I again calculate a new gain reduction and apply it to my original (input) signal.
- That's all.
That's not how it works with feedback.

Rather the way it works is that you take the input and you apply compression as per the current target gain. Then this compressed signal is sent to the envelope detector that computes a new target gain, which then goes back to the compressor. The compressor and the envelope detector form a loop where each one depends on the output of the other, which is what "feedback" really means.

Now, the comment made earlier about "naive one-sample delays" is because when the compression and the gain detection both depend on each other, you obviously can't compute one and then the other without introducing a delay into the loop. So if you, for example, compute the compression part first and then use the output to compute the gain reduction, the resulting gain reduction can be used to compress the next sample, but in a sense it's one sample late. What you can do, though, is describe the whole thing as a system of differential equations and then solve for the feedback (e.g. iteratively using Newton's method).
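As a heavily simplified sketch of that last idea: ignore the detector smoothing entirely and solve the implicit equation g = gainCurve(|x| * g) for the current sample with a few Newton steps. gainCurve() and all the constants below are hypothetical stand-ins of mine; a real implementation would include the follower state in the equation being solved:

```cpp
#include <cmath>

// Placeholder static curve: hard knee, threshold T, ratio R, linear domain.
static float gainCurve(float level, float T, float R)
{
    return (level > T) ? std::pow(level / T, 1.0f / R - 1.0f) : 1.0f;
}

// Solve h(g) = g - gainCurve(|x| * g) = 0 by Newton's method, using a
// finite-difference derivative to keep the sketch short.
float solveFeedbackGain(float x, float T, float R)
{
    float ax = std::fabs(x);
    float g  = 1.0f;                       // start from "no gain reduction"
    for (int i = 0; i < 6; ++i)
    {
        const float eps = 1.0e-4f;
        float h  = g - gainCurve(ax * g, T, R);
        float hp = ((g + eps - gainCurve(ax * (g + eps), T, R)) - h) / eps;
        g -= h / hp;                       // Newton step
    }
    return g;                              // self-consistent gain for this sample
}
```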

Post

(Think I had something wrong in my logic, sorry, will re-think how to explain)
Confucamus.

Post

Many thanks for all your support and patience.
I still have to think about it, but now it makes much more sense to me. Many thanks.

Post

Also, one very kind man (David Kulka) on another forum helped me a lot by comparing feedback compression to a hot water heater.

His explanation is:
"Compare it to a hot water heater, which is also a feed back type of device. Cold water goes into the heater and is heated by the heating element. Hot water comes out. The hot water at the output is split off (multed) to to the output pipe and also to the temperature sensor, which controls the heating element."

Post

pajczur wrote: Fri Nov 27, 2020 6:09 pm His explanation is:
"Compare it to a hot water heater, which is also a feedback type of device. Cold water goes into the heater and is heated by the heating element. Hot water comes out. The hot water at the output is split off (multed) to the output pipe and also to the temperature sensor, which controls the heating element."
Right.

One thing I want to note here (and it applies to both the water-heating case and an analog feedback compressor) is that measuring the output (rather than the input, as in a feed-forward design) has the advantage of being less sensitive to component tolerances: we might not know exactly how much power the heater needs, or we might have a highly non-linear gain element whose exact transfer curve we don't know, but by looking at the output we can still figure out in which direction we need to adjust the system (too cold: apply more power; too hot: apply less power).

So while in digital it's generally easier to just measure the envelope and apply gain in a feed-forward fashion, in analog that's actually somewhat more difficult, in the sense that you need more accurate components (not a huge deal these days, but the situation was a bit different back in the old days when many of these things were designed). This is likely why older analog compressors all tended to use the feedback type, and the fact that the resulting dynamics (which differ between the two if you use a simple envelope follower for each) are often preferred is probably mostly a happy accident.

Post

So, another go at explaining, hopefully this time I have the logic correct.

As far as I understand it, there are two ways to do feedback compression. The accurate way would require introducing a 1-sample delay between input and output, which means the host needs to latency-compensate, but the easier, instant way is to just fake it.

You can fake it by keeping the gain reduction factor of the previous compression calculation buffered in a variable. When the first input sample arrives, there obviously was no previous sample, so there is no previous gain reduction factor stored in the buffer. You need to magically create such a value, and the easiest way is to just initialize the buffer variable with a value of 1.0, which is equal to zero gain reduction when multiplied with a sample.

So, the first sample arrives. You copy this sample, multiply the copy with the buffered gain reduction factor (currently 1.0), and send this gain-reduced copy through the envelope detector. This gives you a new gain reduction factor, which you write into the buffer variable. Then you apply that new gain reduction factor to the original input sample and pass it on to the output. When the second sample arrives, you create a copy of that second input sample, multiply the copy with the buffer variable (which now holds the previously calculated gain reduction factor), and send that through the detector. The detector calculates a new gain reduction factor again, which you first write into the buffer variable and then apply to the second input sample to create your second output sample.

If you scale the buffered gain reduction values before applying them to the input samples, you can control the intensity of the feedback.
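For what it's worth, here is that buffering procedure translated more or less literally into code. The detector (a one-pole follower plus hard-knee gain computer) and all of its numbers are placeholders of mine; only the buffering scheme follows the description above:

```cpp
#include <cmath>

// "Faked" feedback compressor: detect on a copy scaled by the previous
// gain reduction factor, then apply the new factor to the current input.
struct FakedFeedbackCompressor
{
    float grBuffer = 1.0f; // previous gain reduction factor, 1.0 = no reduction
    float env      = 0.0f; // detector state

    float detectGain(float s) // placeholder follower + gain computer
    {
        const float attack = 0.99f, release = 0.999f;
        const float threshold = 0.5f, ratio = 4.0f;
        float rect = std::fabs(s);
        env = (rect > env) ? attack  * env + (1.0f - attack)  * rect
                           : release * env + (1.0f - release) * rect;
        return (env > threshold) ? std::pow(env / threshold, 1.0f / ratio - 1.0f)
                                 : 1.0f;
    }

    float processSample(float in)
    {
        float copy = in * grBuffer;    // copy scaled by the *previous* gain factor
        grBuffer   = detectGain(copy); // detect on it, store the new factor
        return in * grBuffer;          // apply the new factor to the original input
    }
};
```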

Now, is this really a bad thing? I'll do the math for you, but you can decide for yourself. At 44100 Hz project sample rate, 1 sample is exactly 1/44100 seconds long, which is about 0.0227 milliseconds or 22.7 microseconds. So if the compression is applied 1 sample too late, that means it starts 22.7 microseconds after it should have actually started. A full oscillation cycle at 22050 Hz, the highest possible frequency at 44100 Hz sample rate, takes 1/22050 seconds to complete, which is about 45.4 microseconds and thereby a lot slower than the delay at which the compression sets in.

Since an envelope follower always needs two values to calculate its new state, the detector circuit will have to run over two samples before it can create an actual envelope. At 44100 Hz sample rate, your fastest attack time would be back at 45.4 microseconds again, and 20.8 microseconds at 96000 Hz sample rate. For reference, the hardware 1176 compressor spec sheet says its fastest attack time is 22 microseconds. Now you be the judge: do you think this 1-sample inaccuracy really matters? :)
Confucamus.

Post

In some compressor projects I have used dynamic clipping along with the feedback compression to get better peak management. It's only of benefit when doing fast-attack compression with high gain reduction. If used correctly, the clipping reduces sidebands, at the expense of damping the highest frequencies, which makes it less snappy.

Post

Rockatansky wrote: Fri Nov 27, 2020 6:48 pm So, another go at explaining, hopefully this time I have the logic correct.
Many thanks, Rockatansky. Yes, your explanation is also very helpful.
And now that I understand the feedback compression idea, I wonder if your idea of faking the 1-sample delay could be shown as a circuit like this:
[attached image: _image.jpg, proposed circuit]
Where:
- both ∑ blocks are VCAs,
- β(s) is the sidechain circuit (the transfer function),
- "A" is the output amplifier.

What do you think about this idea?
Each sample would always be compressed as in feed-forward compression, but at the same time, by controlling the first VCA with the fed-back sample, we get feedback compression. Is it a good solution? Has anyone tested something like this? And how would such a solution shape the sound compared with classic feedback compression?
