Does volume change of digital audio need to be done on zero crossing - not to produce artifacts?

DSP, Plug-in and Host development discussion.
lfm
KVRAF
4946 posts since 22 Jan, 2005 from Sweden

Post Sat Mar 16, 2019 2:28 pm

Max M. wrote:
Sat Mar 16, 2019 1:23 pm
lfm
And this is what we see in Sonic Analyzer printing all these multiple bands showing other frequency components than original 1 KHz.

In a properly smoothed 4 s fade-out of a 1 kHz sine, the distortion is hundreds of dB below the usual noise level. So there's something wrong with your measurement method (or with that Sonic Analyzer).
On my Sonar it is something like 80 dB below the signal, so that is fine.

But StudioOne is just 50 dB below doing the same thing.
And as I recall, Logic is about as bad.

So I think the tool Sonic Visualiser is correct somehow. It has a really nice feature: at each cursor position you can see exactly how these extra harmonics occur, and at what levels and frequencies.

I spent almost 6 hours doing the tests in various DAWs with various settings I thought would interfere. The difference is massive.

With no change in automation all is fine; doing the ramp, it all goes wild.
Even recording one track to another while moving a fader manually - same thing.

---
For the rest, your main mistake is that you assume that a single sine-shaped period (half or full cycle) is itself a "sinewave" having only a particular frequency (w/o any other bands).
It is not. The cycle becomes a sinewave w/o any extra-band harmonics only when repeated infinitely (ideally infinitely, in reality it does not have to be infinite but just long enough for the parasite harmonics to go below SNR).
Thus your theory is wrong right at its first step: you expect either that chaining two sine-shaped cycles of different levels produces no distortion at all, or that this instant distortion X is somehow lower than the "N times X/N" distortion of smoothing, just because you preserve the shape of a single wave cycle.
(+ a few more mistakes at next steps).
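This point can be checked numerically with a quick sketch (plain Python, naive DFT; the 1 kHz sine at 48 kHz matches the thread, while the signal length, the two-cycle ramp time and the ±3-bin exclusion around the fundamental are arbitrary assumptions): even when the level step lands exactly on a zero crossing between two perfectly sine-shaped cycles, it spreads more off-band energy than a per-sample ramp does.

```python
import math, cmath

FS, F0 = 48000, 1000              # sample rate and sine frequency (from the thread)
P = FS // F0                      # 48 samples per cycle
N = 20 * P                        # analyse 20 cycles (960 samples)

def offband_energy(x, fund, skirt=3):
    """Energy in all DFT bins except the fundamental and its close skirt."""
    n = len(x)
    e = 0.0
    for k in range(n // 2):
        if abs(k - fund) <= skirt:
            continue
        X = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        e += abs(X) ** 2
    return e

# Gain drops from 1.0 to 0.5 instantly, exactly on a zero crossing (sample 480):
step = [(1.0 if t < N // 2 else 0.5) * math.sin(2 * math.pi * t / P)
        for t in range(N)]

# Same drop, but ramped sample-by-sample over two cycles around sample 480:
def ramp_gain(t, a=N // 2 - P, b=N // 2 + P):
    if t < a:
        return 1.0
    if t >= b:
        return 0.5
    return 1.0 - 0.5 * (t - a) / (b - a)

ramp = [ramp_gain(t) * math.sin(2 * math.pi * t / P) for t in range(N)]

e_step = offband_energy(step, fund=N // P)
e_ramp = offband_energy(ramp, fund=N // P)
print(e_ramp < e_step)            # the per-sample ramp spreads LESS energy
```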
Got it thanks.

So the assumption is wrong that changing the level once per period (or half period), instead of altering the frequency content on every sample, would give less distortion?

The thinking was that having all 48 samples differ from what a proper sine should be would be worse than differing only once every 24 samples?

As far as I can tell, HF content is blurred more if you move each sample's value, since what lies between the samples of the fundamentals is the HF content: the harmonics on top of them, or the fundamentals of other higher-pitched instruments, if we are talking full music content and not just one sine.

So I'm not getting the reasoning for why you would move that into LF and below?

There has got to be some technique you know of that DAW makers don't.

Or does it cost too much CPU to do it right, so they cheat?

Thanks for the input anyway, I'm learning with every post...

Max M.
KVRist
321 posts since 20 Apr, 2005 from Moscow, Russian Federation

Re: Does volume change of digital audio need to be done on zero crossing - not to produce artifacts?

Post Sat Mar 16, 2019 3:50 pm

lfm
On my Sonar it is something like 80 dB below signal, so that is fine.
But StudioOne is just 50 dB below doing the same thing.


This probably means they smooth at a "control rate" much lower than FS: instead of gradually changing the level on every sample, they change it once per N samples, and then the distortion is much higher. This is quite a common method to increase/"optimize" performance, but not always a good thing. In fact it is quite close to what you propose, except that it is not aligned to zero crossings - i.e. "cheap" smoothing. (And for comparably low control rates your method will generally be better.)
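What such control-rate smoothing looks like can be sketched as follows (plain Python; the 4800-sample fade and 64-sample block size are assumptions, not values any particular DAW is known to use). Holding the gain constant per block turns a smooth ramp into a staircase whose jumps are BLOCK times larger than the per-sample steps - and each jump is a small instant click:

```python
FADE = 4800                      # fade length in samples (assumed)
BLOCK = 64                       # control-rate update interval (assumed)

# Audio-rate smoothing: gain recalculated on every single sample.
per_sample = [1.0 - t / FADE for t in range(FADE)]

# Control-rate smoothing: gain recalculated once per block, then held.
per_block = [1.0 - (t - t % BLOCK) / FADE for t in range(FADE)]

def biggest_jump(gains):
    """Largest sample-to-sample gain discontinuity in the envelope."""
    return max(abs(b - a) for a, b in zip(gains, gains[1:]))

print(biggest_jump(per_sample))  # 1/4800: a tiny step on every sample
print(biggest_jump(per_block))   # 64/4800: a jump 64 times larger, once per block
```

The size of the largest instantaneous jump is what sets the level of the "click" distortion, which is why the block-held version measures so much worse.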

So not getting the reasoning that you would move that into LF and below?

It's not that I would, it's just what the smoothing does mathematically. Roughly, an instant level change is basically a "click" (sometimes even a "spike"), having all the harmonics spread all over the frequency range. Smoothing (even if it's just a two-sample-long one) is mathematically equivalent to applying a lowpass filter to that click, so that it goes softer and softer as you increase the smoothing time (basically smoothing time -> "LP" cutoff).
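The "smoothing is a lowpass applied to the click" idea maps directly onto the one-pole smoother commonly used for plug-in parameters. A sketch (plain Python; the 5 ms time constant is an arbitrary assumption): feeding the filter an instant gain jump yields an exponential glide instead of a click.

```python
import math

FS = 48000
SMOOTH_MS = 5.0                                # smoothing time constant (assumed)
coeff = math.exp(-1.0 / (FS * SMOOTH_MS / 1000.0))

def smooth_gain(target, n, start):
    """One-pole lowpass driven by a constant target gain."""
    g, out = start, []
    for _ in range(n):
        g = target + coeff * (g - target)      # classic one-pole update
        out.append(g)
    return out

# An instant fader jump from 1.0 to 0.0 becomes a smooth exponential glide:
curve = smooth_gain(0.0, 2000, start=1.0)
print(curve[0])    # still very close to 1.0 after one sample
print(curve[-1])   # nearly 0.0 after ~42 ms
```

A longer time constant is exactly a lower cutoff for this lowpass, which is the "smoothing time -> LP cutoff" correspondence described above.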

lfm
KVRAF
4946 posts since 22 Jan, 2005 from Sweden

Re: Does volume change of digital audio need to be done on zero crossing - not to produce artifacts?

Post Sat Mar 16, 2019 11:21 pm

Thanks for taking the time to educate me a bit.

One new idea: I will check with more frequency analyzers, and with EQs in the DAW that have good analyzer options, and see at what levels the generated harmonics sit. In other words, whether Sonic Visualiser gets it right and -50 dB really is the level.

At every multiple of the fundamental frequency there is a band of crap, and it seems to be at about the same level too; I always expected it to fall with each multiple. This is at a single time spot, any chosen one.

Like with a square wave: the harmonic levels fall off with each multiple of the fundamental when you take the Fourier transform of it.
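That square-wave falloff is easy to verify with a naive DFT (plain Python; the 64-sample period and 10-period window are arbitrary choices): the odd harmonics fall roughly as 1/n and the even ones vanish, unlike the roughly flat bands reported above.

```python
import math, cmath

P = 64                            # square-wave period in samples (arbitrary)
N = 10 * P                        # analyse 10 full periods
square = [1.0 if t % P < P // 2 else -1.0 for t in range(N)]

def bin_mag(x, k):
    """Magnitude of a single DFT bin."""
    n = len(x)
    return abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                   for t in range(n)))

fund = N // P                     # fundamental lands on bin 10
h1 = bin_mag(square, fund)        # 1st harmonic
h3 = bin_mag(square, 3 * fund)    # 3rd harmonic
h5 = bin_mag(square, 5 * fund)    # 5th harmonic
h2 = bin_mag(square, 2 * fund)    # an even harmonic

print(h3 / h1, h5 / h1)           # close to 1/3 and 1/5
print(h2 / h1)                    # even harmonics: essentially zero
```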

Whatever a DAW is doing, I expected it to fall off at some naturally declining levels or something.
This would also show visually as little spikes at different frequencies in that kind of EQ display.

If I'm to approach developers with this at all, it is good to have more than an educated guess that there is a problem.

EDIT: Now, looking at various EQs and frequency analyzers, my conclusion is that what Sonic Visualiser is showing is not anything close to reality - at least not what it suggests with the various bands of partials at multiples of the 1 kHz fundamental.

Bluecat Audio's free frequency analyzer goes all the way down to -120 dB, and there is some stuff down there, but nothing up front like Sonic Visualiser suggests.

Much too late, I did what I should have started with - checking the frequency content in other tools. So whatever anybody tries: compare and verify against what a normal EQ and similar tools show as content.
