Does volume change of digital audio need to be done on zero crossing - not to produce artifacts?

DSP, Plugin and Host development discussion.

Post

I'm thinking that if you do it on an arbitrary sample, not synced to a zero crossing, it would be like non-linear stuff in an amp, or something?

Thanks.

Post

Any sudden change of volume, even at a zero crossing, should be avoided. It clicks due to the sudden change...
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is served over https!!

Post

Volume change at a zero crossing still produces artifacts. It just may have a less noticeable effect compared to a change at an arbitrary point.

... it would be like non-linear stuff in an amp, or something?

Not quite. It's more about producing a "discontinuity".

Post

Thanks for speedy replies.

There is a test suggested on YT showing that DAWs differ a lot in the quality of their response to automation ramps - so I figured this might be the thing: some do it on zero crossings and some don't.

https://youtu.be/tXBRLdgzZ4s

Some DAWs spew artifacts - some are clean - when doing automation ramps.

None of them produce any clicks though, just artifacts something like -50 dB below the test signal. Many DAWs I have used have a toolbar option for whether to do splits on zero crossings, for the purpose of avoiding clicks.

Any other ideas, or am I on to something?

Post

The standard method to minimize parameter-change artifacts (incl. level) is to spread the change over time (this is usually referred to as "smoothing"/"interpolation"). I.e. instead of changing the level instantly, you (i.e. the DAW or plugin) do it in much smaller steps over a certain amount of time (usually > a few tens of milliseconds).
And this is what this video is about - some DAWs/plugins do this, some don't (or use "cheaper" algorithms).
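A minimal sketch of this kind of smoothing (the function and parameter names here are made up for illustration, not from any particular DAW or plugin API): instead of multiplying by the new gain immediately, interpolate the gain linearly over a fixed number of samples:

```python
def apply_gain_smoothed(samples, old_gain, new_gain, ramp_len):
    """Apply a gain change spread linearly over ramp_len samples
    instead of jumping to new_gain instantly."""
    out = []
    for n, x in enumerate(samples):
        if n < ramp_len:
            t = n / ramp_len                      # 0.0 .. 1.0 over the ramp
            g = old_gain + (new_gain - old_gain) * t
        else:
            g = new_gain
        out.append(x * g)
    return out

# e.g. a 20 ms ramp at 48 kHz sample rate is 960 samples:
ramp_samples = int(0.020 * 48000)
```

On a constant input you can see the gain stepping up gradually instead of jumping.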

Post

If you want to adjust clip gain then you would likely want to either slice at a zero crossing or envelope the clip; unless you are overlapping two clips with an equal-power crossfade, in which case it doesn't matter.

I would therefore suggest that it doesn't matter where you apply gain automation, as long as the ramp is smooth enough or the adjustment isn't dramatic. Outside of emulating amps, all a digital gain plugin should do is multiply the signal by the amplification factor, with a value of 1 as unity. If you want to create higher rates of fluctuation with an LFO then you're entering the world of amplitude modulation, where sidebands are generated - which might then require some degree of oversampling to reduce aliasing.
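To make the two points above concrete (a small sketch; the helper names are invented): a plain digital gain is just multiplication, with 0 dB mapping to a factor of 1.0, and sinusoidal amplitude modulation at rate f_m places sidebands at f_c ± f_m around a carrier at f_c:

```python
def gain_db_to_linear(db):
    # 0 dB -> 1.0 (unity); a plain digital gain just multiplies by this.
    return 10.0 ** (db / 20.0)

def am_sideband_freqs(carrier_hz, mod_hz):
    # Sinusoidal amplitude modulation puts energy at carrier +/- modulator.
    return (carrier_hz - mod_hz, carrier_hz + mod_hz)

unity = gain_db_to_linear(0.0)               # 1.0
minus6 = gain_db_to_linear(-6.0)             # ~0.501
sidebands = am_sideband_freqs(1000.0, 5.0)   # a 5 Hz LFO on a 1 kHz tone
```

So a 5 Hz gain LFO on a 1 kHz sine produces components at 995 Hz and 1005 Hz, which is where the oversampling/aliasing concern comes from at higher modulation rates.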

I don't have time to watch the above video, but if you're using FL Studio then there is modulation smoothing available in the "Link to..." dialogue box. This can be set by time, so you can sync it with the BPM. Smoothing also makes logic gates more useful. Just tweaking the time turns the feature on, so you don't need to turn it on first.

Post

Max M. wrote: Sat Mar 16, 2019 2:37 pm The standard method to minimize parameter-change artifacts (incl. level) is to spread the change over time (this is usually referred to as "smoothing"/"interpolation"). I.e. instead of changing the level instantly, you (i.e. the DAW or plugin) do it in much smaller steps over a certain amount of time (usually > a few tens of milliseconds).
And this is what this video is about - some DAWs/plugins do this, some don't (or use "cheaper" algorithms).
Thank you.
I remember that to avoid pops at the start of transport and similar in a DAW, setting about a 10 ms fade-in makes it rather smooth and inaudible.

But when it comes to automation, people do all kinds of things: jumps between parameter levels, and ramps so steep they basically mean a change on every sample.

My thought is that if any change happens close to zero, any pop would be down around the noise floor.

20 Hz means 25 ms per half period - that would be a rather acceptable interval for making a change, even in the worst case with a fundamental that low present.

I never did audio programming, but I thought this must be among the fundamental basics of how you do this, no?

Post

Unaspected wrote: Sat Mar 16, 2019 2:42 pm
I would therefore suggest that it doesn't matter where you apply gain automation, as long as the ramp is smooth enough or the adjustment isn't dramatic. Outside of emulating amps, all a digital gain plugin should do is multiply the signal by the amplification factor, with a value of 1 as unity. If you want to create higher rates of fluctuation with an LFO then you're entering the world of amplitude modulation, where sidebands are generated - which might then require some degree of oversampling to reduce aliasing.

I don't have time to watch the above video, but if you're using FL Studio then there is modulation smoothing available in the "Link to..." dialogue box. This can be set by time, so you can sync it with the BPM. Smoothing also makes logic gates more useful. Just tweaking the time turns the feature on, so you don't need to turn it on first.
Thanks.
But the user chooses the speed of ramps or the size of jumps when they apply automation.

It seems interesting that if you modulate and add some randomness to it, it is less of a disturbance, no?

My old Sonar Artist 2015 performed really well in this test - no sign of any artifacts. But about version 2018.11, I think it was, the video had some serious remarks.

So one wonders - how does it happen that they introduce this kind of thing that was OK before?

Post

lfm wrote: Sat Mar 16, 2019 2:58 pm So one wonders - how does it happen that they introduce this kind of thing that was OK before?
They probably optimized it in some manner and thought "nobody's going to notice it doesn't work like before".

Post

My thought is that if any change happens close to zero, any pop would be down around the noise floor.

Well, not quite. It all depends on how big the change is and what kind of signal it's applied to. A harmonically rich sound will happily mask the artifacts of (relatively) dramatic changes in level, while clean sounds (with a pure sine wave being the edge case) will suffer even from (relatively) moderate jumps.

... jumps between parameter levels, and ramps so steep they basically mean a change on every sample.

And this is why there's no "ideal/all-purpose" timing/formula for smoothing. A higher smoothing rate -> more responsive automation but stronger clicks/pops; a lower rate -> the opposite. So every developer has to make hard decisions about how to handle it (depending on what kind of parameter it is, how it's usually manipulated, etc. - at some point maybe even providing an option to control the smoothing time/curve explicitly).
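One common way to expose that trade-off in code is a one-pole (exponential) smoother on the parameter value, where a single time constant sets responsiveness vs. click suppression. This is a sketch under that assumption; the class name and `tau_ms` parameter are illustrative:

```python
import math

class OnePoleSmoother:
    """Exponential parameter smoother: each sample moves a fraction of
    the way toward the target. tau_ms sets the responsiveness vs.
    click-suppression trade-off described above."""
    def __init__(self, sample_rate, tau_ms, start=0.0):
        self.coeff = math.exp(-1.0 / (sample_rate * tau_ms / 1000.0))
        self.value = start

    def process(self, target):
        self.value = target + self.coeff * (self.value - target)
        return self.value

# Jump the target from 0.0 to 1.0 with a 10 ms time constant at 48 kHz:
sm = OnePoleSmoother(48000, 10.0, start=0.0)
vals = [sm.process(1.0) for _ in range(480)]   # 10 ms worth of samples
```

After one time constant (480 samples here) the value has covered about 63% of the jump, and it keeps approaching the target smoothly instead of stepping.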

20 Hz means 25 ms per half period - that would be a rather acceptable interval for making a change

Still much worse than interpolating it over the same 25 ms ;)

Post

Max M. wrote: Sat Mar 16, 2019 3:34 pm 20 Hz means 25 ms per half period - that would be a rather acceptable interval for making a change

Still much worse than interpolating it over the same 25 ms ;)
I haven't done audio development, just applied common sense to this.
I have to trust you - but I still can't quite let it go... if it's OK to argue a bit...

If you are deforming a half period somewhere in the middle, that would generate more artifacts - a bit like what I meant by the non-linear amp through that section.

But doing it while crossing zero - nothing in the period differs in frequency content other than the peaks of the waveform from then on - if it continues at that level.

Changing the rise time between samples also affects frequency content - so disturbing that is what creates artifacts - that's the idea.

If the relative size/value of each sample remains the same, the frequency content is not changed - just the volume.

With a certain ramp tilt - changing over, say, 25 ms - and with processing blocks started close to a zero crossing, it would not damage the waveform itself.

If you've got a square wave this could mean a rapid change, but its frequency content is the same as before the volume change.

Since this speaks to my common sense, it is hard to let go. ;)

Post

Xenakios wrote: Sat Mar 16, 2019 3:29 pm
lfm wrote: Sat Mar 16, 2019 2:58 pm So one wonders - how does it happen that they introduce this kind of thing that was OK before?
They probably optimized it in some manner and thought "nobody's going to notice it doesn't work like before".
Or they just forgot to test after some other improvement they made.

One would think it should show - like in my test where one DAW was at -50 dB on the third harmonic (the test tone is 1 kHz, so 3 kHz at -50 dB below the signal), which is 0.3% added THD.

Post

lfm wrote:
I haven't done audio development, just applied common sense to this.

A good start would be to generate some sinusoids, apply these different methods of volume change, and then listen to the results with your own ears. (Should be pretty easy in most audio editors.) ;)
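The same experiment can also be done numerically. This is a sketch with assumed parameters (1 kHz sine at 48 kHz, a gain drop to 0.5, a 20 ms linear ramp; the out-of-band energy measure is just one way to quantify the artifacts):

```python
import numpy as np

sr = 48000
f0 = 1000.0
n = sr // 2                        # 0.5 s of signal
t = np.arange(n) / sr
sine = np.sin(2 * np.pi * f0 * t)

mid = n // 2                       # sample 12000: 1 kHz crosses zero every
                                   # 24 samples, so this IS a zero crossing
instant = np.ones(n)
instant[mid:] = 0.5                # instant gain jump at a zero crossing

ramp_len = int(0.020 * sr)         # 20 ms linear ramp instead
ramp = np.ones(n)
ramp[mid:mid + ramp_len] = np.linspace(1.0, 0.5, ramp_len)
ramp[mid + ramp_len:] = 0.5

def out_of_band_db(x):
    """Energy outside +/-100 Hz of the 1 kHz carrier, in dB re. total."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    freqs = np.fft.rfftfreq(len(x), 1 / sr)
    band = np.abs(freqs - f0) < 100.0
    return 10 * np.log10(spec[~band].sum() / spec.sum())

jump_artifacts = out_of_band_db(sine * instant)
ramp_artifacts = out_of_band_db(sine * ramp)
```

Even though the jump lands exactly on a zero crossing, its out-of-band artifact energy comes out well above that of the ramped version.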

But doing it while crossing zero - nothing in the period differs in frequency

Well, we don't hear "sounds" by their single periods and the shape of each period (nothing even close). And a single period does not have any "frequency" in the casual "non-mathematical" sense. (I won't get into details, since this ("describing sounds w/o math") is where my English ends. Probably some kind people here will suggest good introductory reading on the matter for you.)

Changing the rise time between samples also affects frequency content

It does. But so does the change at a zero crossing - that's the key. Smoothing the change over time moves (sort of) most of the artifact energy from its HF components to its LF components (down to the point - depending on how long it is - where we don't perceive it as a distinct sound anymore). There's no way a zero-crossing jump can produce less audible artifacts (unless you do it periodically at some harmonic/multiple-of-the-carrier rate, so that it becomes a sort of waveshaping of its own, or unless the processed sound itself is close to some kind of noise).
Or (maybe in other words): roughly, we can think of smoothing as "the same artifact going through a kind of lowpass filter".
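The lowpass picture can be made literal (an illustrative sketch; the lengths are arbitrary): a linear gain ramp is exactly the instant step passed through a moving-average (boxcar) lowpass filter:

```python
import numpy as np

n = 1000
step = np.zeros(n)
step[n // 2:] = 1.0                    # the instant parameter jump

# Linear smoothing == the same step run through a moving-average
# (boxcar) lowpass filter:
L = 100                                # smoothing length in samples
kernel = np.ones(L) / L
ramp = np.convolve(step, kernel)[:n]   # rises linearly over L samples

rise = ramp[n // 2 : n // 2 + L]       # the smoothed transition region
```

The filtered step rises in equal increments of 1/L per sample - the same "artifact", but with its high-frequency content filtered away.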

Post

Max M. wrote: Sat Mar 16, 2019 6:00 pm
Changing the rise time between samples also affects frequency content

It does. But so does the change at a zero crossing - that's the key. Smoothing the change over time moves (sort of) most of the artifact energy from its HF components to its LF components (down to the point - depending on how long it is - where we don't perceive it as a distinct sound anymore). There's no way a zero-crossing jump can produce less audible artifacts (unless you do it periodically at some harmonic/multiple-of-the-carrier rate, so that it becomes a sort of waveshaping of its own, or unless the processed sound itself is close to some kind of noise).
Or (maybe in other words): roughly, we can think of smoothing as "the same artifact going through a kind of lowpass filter".
I will try to explain why I stick to my theory.

What do we have in this simple test made in the video: a 1 kHz sine wave lasting 8 s at 48 kHz sample rate.

In my case I made a ramp over the second half, meaning 4 s going from -6 dBFS to -inf.

If we process each sample, diminishing it during these 4 s, what happens is that in the entire 4 s there is not one period that is a sine anymore - meaning there is distortion. And this is what we see in Sonic Visualiser printing all these multiple bands showing frequency components other than the original 1 kHz.

Question: How many full sine periods are left during this decline in volume?
Answer: None.
Question: How many half periods matching a sine are we seeing?
Answer: None.
Question: So how much altered frequency content do we have?
Answer: Plenty.
Question: How good does that sound?
Answer: As good as 0.3% distortion sounds - and maybe over a 100-track mix it adds up in the end.

The basic idea I had was to create volume steps small enough to be perceived as continuous by the ear.

If we take my example all the way down to 20 Hz - that is 50 ms per full wavelength, meaning a zero crossing every 25 ms.

So if we have 4 s - how many 25 ms intervals is that?
Answer: 160.

So say we go from -6 dBFS to, let's say, -86 dBFS and call that silent enough.
The drop is then 80 dB - meaning that going in 0.5 dB steps makes 160 steps, and we could do one every 25 ms, and we win plenty. 0.5 dB at a rate of 20 times a second should sound pretty smooth, I think.

And all the way down, every half period will retain its frequency content and not produce this awful distortion seen in Sonic Visualiser.

As a retired programmer I see plenty of ways of doing this nicely - set a flag when seeing the automation change, wait for a zero crossing, calculate how much to diminish all samples from then on - until seeing the next change in the automation level.

This is what makes sense to me - and why I stick to this idea - getting rid of hideous distortion produced just because you alter the level of audio over time.

Certain changes in automation could produce a falling-down-a-staircase effect - audible - so it all has to be done in a smart, dynamic way.

And for a nicer-sounding total fade-out over the full 8 s, you could go 0.25 dB per step, making it even smoother.

Anyway, that's the idea in full.

Now please tell me where this reasoning fails?
You brought up some things in much too short an explanation.

Do you favor distortion in what is then summed into the full mix?
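For what it's worth, the stepped scheme described above is easy to sketch (purely illustrative - all names and parameters are made up, and this is just the proposal as stated, coded up for inspection):

```python
import math

def staircase_fade(samples, step_db=0.5, hold=1200):
    """Sketch of the proposed zero-crossing staircase fade: schedule a
    step_db gain drop every `hold` samples (25 ms at 48 kHz), but only
    apply it at the next zero crossing."""
    gain_db = 0.0
    pending_db = 0.0
    out = []
    prev = 0.0
    for n, x in enumerate(samples):
        if n > 0 and n % hold == 0:
            pending_db -= step_db                   # schedule the next step
        if pending_db != 0.0 and prev * x <= 0.0:   # zero crossing: apply it
            gain_db += pending_db
            pending_db = 0.0
        out.append(x * 10.0 ** (gain_db / 20.0))
        prev = x
    return out

# 1 kHz sine at 48 kHz (48 samples per period), 0.1 s long:
sine = [math.sin(2 * math.pi * k / 48) for k in range(4800)]
faded = staircase_fade(sine)
```

Over 0.1 s this schedules three 0.5 dB drops, each taking effect within half a carrier period of being scheduled, so the tail ends up 1.5 dB down.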

Post

lfm
And this is what we see in Sonic Visualiser printing all these multiple bands showing frequency components other than the original 1 kHz.

In a properly smoothed 4 s fade-out of a 1 kHz sine, the distortion is hundreds of dB below the usual noise level. So there's something wrong with your measurement method (or with Sonic Visualiser).

---
For the rest, your main mistake is that you assume a single sine-shaped period (half or full cycle) is itself a "sinewave" having only one particular frequency (without any other bands).
It is not. The cycle becomes a sinewave without any extra-band harmonics only when repeated infinitely (ideally infinitely; in reality it does not have to be infinite, just long enough for the parasitic harmonics to drop below the SNR).
Thus your theory is wrong right at its first step, by expecting either that you don't get any distortion by chaining two sine-shaped cycles of different levels, or that this instant X distortion is somehow lower than the "N times X/N" distortion of smoothing, just because you preserve the shape of a single wave cycle.
(Plus a few more mistakes at the next steps.)
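The "a single cycle is not a sinewave" point can be checked numerically (a sketch; the frame size and the ±2-bin tolerance are arbitrary choices made for illustration):

```python
import numpy as np

def off_tone_energy(num_cycles, samples_per_cycle=64, frame=4096):
    """Fraction of spectral energy landing away from the tone's own
    FFT bin when `num_cycles` sine periods sit in an otherwise silent
    analysis frame."""
    n = num_cycles * samples_per_cycle
    x = np.zeros(frame)
    x[:n] = np.sin(2 * np.pi * np.arange(n) / samples_per_cycle)
    spec = np.abs(np.fft.rfft(x)) ** 2
    k = frame // samples_per_cycle          # the tone's bin (here: 64)
    in_band = spec[k - 2:k + 3].sum()       # +/- 2 bins around the tone
    return 1.0 - in_band / spec.sum()

one_cycle = off_tone_energy(1)     # a single period: energy smeared wide
many_cycles = off_tone_energy(32)  # 32 periods: energy collapses to the bin
```

A single sine-shaped cycle spreads most of its energy across the spectrum; only sustained repetition concentrates it at "its" frequency, which is why preserving the shape of individual cycles does not by itself prevent distortion.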
