Bit Depth vs Frequency Response - Reflection

kamalmanzukie
KVRist

121 posts since 12 May, 2012
Nowhk wrote:Let me make two extreme (limit) examples.
Ideal scenario (no additional noise, precise numbers, and so on).

I'm generating a "digital" sine wave, using Sytrus, 1 kHz.
Values go from 1.0 to -1.0 within the DAW.

Suppose my audio card is 16 bit. So it will range from -32,768 to +32,767, thus 1.0/-1.0 is going to be mapped to this range.

1/32,768 (0.000030517578125) would be the minimum peak I could reach. Below that, it will truncate to 0.

0.000030517578125 = -90.30899869919435 dB (so, just to note, my previous example above will fail; -95 dB will just truncate all samples to 0, and I'll hear nothing but noise).

So let's say I move my normalized knob within Sytrus to 0.000030517578125.

Here are the two extreme examples:

1 - it could be that all the peaks of the signal won't fall exactly on the sample positions of the sample-rate "grid", so I could bump into a situation where my sine wave is totally lost (since all samples go to 0); it becomes just noise. I lose my original signal/sine wave. Is there any guaranteed threshold above which the original signal is always preserved? Or is it "just a guess" for low-amplitude signals, also depending on their frequency?

2 - what if I choose a signal X and a signal Y with the same 0.000030517578125 amplitude that, by chance, both get only the 100th sample valued, while the others go to 0? How can interpolation discriminate between X and Y if they have the same valued sample? Or can this situation not happen?

basically, when you only have one bit your sine wave turns into a square wave. no interpolation, no nothing. and not a good bandlimited square wave, an actual square with harmonics going all the way to valhalla
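kamalmanzukie's point can be checked numerically. A minimal sketch in Python (the sample count and the test frequency are arbitrary choices, not anything from the thread):

```python
import math

def quantize_1bit(x):
    # with a single bit, only the sign of each sample survives
    return 1.0 if x >= 0.0 else -1.0

N = 16  # samples per period (arbitrary)
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]
square = [quantize_1bit(s) for s in sine]
# every sample is now +/-1: the sine has become a square wave, and a
# naively-generated (non-bandlimited) one at that
```

Only the zero crossings of the original sine survive; everything else about its shape is gone.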
mystran
KVRAF

4837 posts since 11 Feb, 2006, from Helsinki, Finland
Aleksey Vaneev wrote:It's all pretty complicated to understand. SACD is a 1-bit signal, so even with 1-bit resolution you can have enormous SNR at chosen frequencies. Noise-shaping does that. IMO the sampling theorem is incomplete without taking noise-shaping into account. A non-noise-shaped signal is a special case of sampling.

The SACD-style 1-bit situation is actually even more complicated than conventional noise-shaping, which is relatively simple. Unfortunately, as FabienTDR pointed out above, you can't properly dither a 1-bit signal, because you need at least 2 quantisation steps of headroom (for TPDF) to prevent saturation, and the basic LTI treatment of noise-shaping runs into the same problem: now we are no longer shaping just the quantisation noise, but we're also trying to shape the additional distortion that results from saturation of the shaping process itself.

This can certainly produce useful results in practice (otherwise 1-bit converters wouldn't work), but the theoretical treatment of the whole thing gets way more complicated than the relatively simple treatment of noise-shaping with adequate headroom. To be honest, I'm not even sure what the current academic consensus on the subject is, since there are plenty of papers arguing back and forth about whether this whole thing is actually well-founded.

So my two cents for someone learning about sampling and quantisation is to forget that 1-bit signals are even a possibility and just assume there is enough headroom to fit TPDF dither and whatever noise-shaping (if any) you want to do.

ps. in theory you should also add dither even when you're noise-shaping, in order to decorrelate the error signal, but .. yeah.. point is.. the 1-bit case is nasty.
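The TPDF dither + quantisation that mystran recommends can be sketched like this (the 16-bit step size is an assumed example, not anything specific from the thread):

```python
import random

STEP = 1.0 / 32768.0  # one quantisation step at 16 bits (assumed)

def dither_and_quantize(x):
    # TPDF dither: the difference of two independent uniform variables
    # spans 2 quantisation steps peak-to-peak -- exactly the headroom
    # a 1-bit system cannot offer
    d = (random.random() - random.random()) * STEP
    return round((x + d) / STEP) * STEP

y = dither_and_quantize(0.1)
# y lands on the 16-bit grid, within a couple of steps of the input
```

The dither decorrelates the rounding error from the signal, turning correlated distortion into benign noise.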
kamalmanzukie
KVRist

121 posts since 12 May, 2012
that's a whole other thing than what the op is trying to grapple with. sacd is that whole other type of overclocked encoding that ends up looking like a model of sound as compression and rarefaction, no?

i think maybe op is confused from that digital show and tell where he says the only difference in bit depth is added noise. this is only true if you dither, which means adding noise. without this, the sine becomes more like a square, which is a difference in the frequency response i suppose, but the fundamental stays the same
Nowhk
KVRian

743 posts since 2 Oct, 2013
kamalmanzukie wrote:i think maybe op is confused from that digital show and tell where he says the only difference in bit depth is added noise. this is only true if you dither, which means adding noise. without this, the sine becomes more like a square, which is a difference in the frequency response i suppose, but the fundamental stays the same

I think I've chosen the wrong approach to try to resolve my doubt. Maybe it's better to get rid of bit depth and see it from a samplerate/interpolation point of view:

If I look at the spectrum of that yellow signal, I would see the partials that constitute the green signal + some noise floor (i.e. the partials that constitute the red signal).

Now my question: if I truncate 80% of those samples to 0 (or even keep only 1 point per period of that signal), would the spectrum still show the partials that constitute the green signal + more noise floor?

You said "yes it will". I said "conceptually I can't see how this can be accomplished". Having only 1 point per period seems to me not enough to interpolate a yellow signal that keeps the original partials of the green signal. Nyquist says we need a number of samples double the frequency to build a signal back. But it doesn't say anything about the "quality" of the samples. Is that just irrelevant? So I can keep 1 sample and the others can do whatever they want?

Is there any tool that can help me show this "visually"? Placing, for example, an array of samples and drawing the resulting Whittaker–Shannon interpolated signal? I'm trying FlowStone meanwhile...
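A minimal Whittaker–Shannon interpolator of the kind being asked about can be sketched in Python (the sample array is illustrative, and a finite sum is only an approximation of the ideal infinite one):

```python
import math

def sinc(x):
    # normalized sinc, the reconstruction kernel of the sampling theorem
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    # Whittaker-Shannon: a sinc centred on every sample instant
    # (finite sum, so behaviour near the edges is only approximate)
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

samples = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
mid = reconstruct(samples, 1.5)  # value between two sample instants
```

At integer times the interpolation passes exactly through the stored samples, because every other sinc is at one of its zeros there.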
mystran
KVRAF

4837 posts since 11 Feb, 2006, from Helsinki, Finland
Nowhk wrote:Now my question: if I truncate 80% of those samples to 0 (or even keep only 1 point per period of that signal), would the spectrum still show the partials that constitute the green signal + more noise floor?

There are really two things going on when you turn an analog signal into a digital signal. First, there is time-quantisation (ie. "sampling") where the (average, in case it's irregular) sampling rate determines the available bandwidth. Then there is amplitude-quantisation (often called just "quantisation") where we round the signal values to the closest representable finite-precision number. Mathematically at least, you can do either of these without doing the other.

As far as truncating a certain number of samples to zero, that means lowering the effective sampling rate, which reduces the amount of bandwidth you can represent. This actually doesn't even introduce any noise as long as the signal fits into the new bandwidth (although in practice it's tricky to reconstruct irregularly sampled signals perfectly) and if it doesn't then you get aliasing.

The point though: do not get confused by the distinction between (amplitude-)quantisation and sampling (ie. time-quantisation). Your digital signals are subject to both, but each produces different artifacts.
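mystran's two operations can be written as two independent functions; a sketch (the rate, frequency, and bit depth are arbitrary illustrative choices):

```python
import math

def sample(f, rate, n):
    # time quantisation: evaluate the signal at discrete instants
    return [f(i / rate) for i in range(n)]

def quantize(xs, bits):
    # amplitude quantisation: round each value to the nearest level
    levels = 2 ** (bits - 1)
    return [round(x * levels) / levels for x in xs]

sig = sample(lambda t: math.sin(2 * math.pi * 2 * t), rate=16, n=16)
q = quantize(sig, bits=3)
# either step can be applied without the other, as the post says
```

Sampling limits bandwidth; quantising limits amplitude resolution. Mixing the two up is exactly the confusion the post warns against.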
Nowhk
KVRian

743 posts since 2 Oct, 2013
mystran wrote:As far as truncating a certain number of samples to zero, that means lowering the effective sampling rate, which reduces the amount of bandwidth you can represent.

That's the weirdness I'm reflecting on. So if I lower the amplitude of a signal I'm sampling (given any bit depth), does it automatically reduce the bandwidth of the signal that will be reconstructed? (since more of the signal's samples will be truncated)

Sorry if this feels repetitive. I'm just trying to see the doubt from different (related) points of view. Some things aren't linear in understanding.
mystran
KVRAF

4837 posts since 11 Feb, 2006, from Helsinki, Finland
Nowhk wrote:
mystran wrote:As far as truncating a certain number of samples to zero, that means lowering the effective sampling rate, which reduces the amount of bandwidth you can represent.

That's the weirdness I'm reflecting on. So if I lower the amplitude of a signal I'm sampling (given any bit depth), does it automatically reduce the bandwidth of the signal that will be reconstructed?

Well, not really "automatically", because you will also need to modify the reconstruction algorithm to skip the samples that you removed. If you just feed them to a regular reconstruction algorithm that expects them to be valid sample values, then you have a process well described by "garbage in -> garbage out."
Nowhk
KVRian

743 posts since 2 Oct, 2013
mystran wrote:Well, not really "automatically", because you will also need to modify the reconstruction algorithm to skip the samples that you removed. If you just feed them to a regular reconstruction algorithm that expects them to be valid sample values, then you have a process well described by "garbage in -> garbage out."

Why should I change the reconstruction algorithm? I want to observe the result using the same one; otherwise, of course, the result would change.

I've built a tool using FlowStone; using sinc interpolation (thanks to martinvicanek for the module), sample rate 16 samples/s, 2 Hz signal, that's the result:

001-original.png

Here's the list of sampled values:

Code:
`0, 0.707107, 1, 0.707107, 1.22465e-016, -0.707107, -1, -0.707107, 0, 0.707107, 1, 0.707107, 1.22465e-016, -0.707107, -1, -0.707107`

Now, if I "emulate" feeding the same signal with lower amplitude, where some samples will be truncated due to their low level (so there aren't enough bits to represent them, truncating them to 0), that's the result:

002-truncation.png

Here's the list of sample values (note I've truncated, as an example, every sample other than 1/-1):

Code:
`0, 0, 1, 0, 0, 0, -1, 0, 0, 0, 1, 0, 0, 0, -1, 0`

The question: can I be sure that within this resulting signal there's ALSO the 2 Hz signal I fed in?
Yes, no or it depends?
mystran
KVRAF

4837 posts since 11 Feb, 2006, from Helsinki, Finland
Nowhk wrote:The question: can I be sure that within this resulting signal there's ALSO the 2 Hz signal I fed in?
Yes, no or it depends?

Assuming you set the samples to zero in a regular fashion, keeping every Nth sample, then this is equivalent to first downsampling by a factor of N (which will result in aliasing if you violate the Nyquist condition) and then zero-padding back to the original rate (which will create mirror-replicates of the base-band spectrum).

None of this has anything to do with quantisation, though.
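The downsample-then-zero-pad equivalence can be checked numerically with a small DFT; a sketch (16 samples, one cycle, keeping every 4th sample are all arbitrary illustrative choices):

```python
import cmath
import math

def dft_mag(xs):
    # magnitude of the naive O(N^2) DFT; fine for a 16-point demo
    N = len(xs)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * n / N)
                    for n, x in enumerate(xs))) for k in range(N)]

N = 16
sine = [math.sin(2 * math.pi * n / N) for n in range(N)]       # 1 cycle
kept = [x if n % 4 == 0 else 0.0 for n, x in enumerate(sine)]  # keep every 4th

mags = dft_mag(kept)
# bin 1 still carries the original tone (scaled down by 4); the equal
# energy at bins 3, 5, 7, ... is the mirror replicates of the base band
```

The original tone survives, but attenuated and surrounded by images, exactly as the downsample + zero-pad description predicts.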
Nowhk
KVRian

743 posts since 2 Oct, 2013
mystran wrote:Assuming you set the samples to zero in a regular fashion, keeping every Nth sample, then this is equivalent to first downsampling by a factor of N (which will result in aliasing if you violate the Nyquist condition) and then zero-padding back to the original rate (which will create mirror-replicates of the base-band spectrum).

It was just an example. The every-Nth-sample pattern was just coincidental. I'll provide another "irregular" example.

mystran wrote:None of this has absolutely nothing to do with quantisation though.

It should, if the truncation is due to the few bits available for quantisation: when the signal has a low amplitude, a huge part of its "body" falls below the last bit. I'll provide a more concise example.

Give me some time! Meanwhile, thanks for the support, as usual!
Aleksey Vaneev
KVRAF

3367 posts since 7 Sep, 2002
mystran wrote:
Aleksey Vaneev wrote:It's all pretty complicated to understand. SACD is a 1-bit signal, so even with 1-bit resolution you can have enormous SNR at chosen frequencies. Noise-shaping does that. IMO the sampling theorem is incomplete without taking noise-shaping into account. A non-noise-shaped signal is a special case of sampling.

The SACD-style 1-bit situation is actually even more complicated than conventional noise-shaping, which is relatively simple. Unfortunately as FabienTDR pointed out above, you can't properly dither a 1-bit signal, because you need at least 2 quantisation steps of headroom

Well, I have not tried to implement 1-bit noise-shaped dither myself, but I do not see how it's impossible - you just keep the input signal at e.g. -6 dBFS and thus the noise at a similar level. 16-bit noise-shaped dithering also creates saturation - it's just rare, as masters are usually kept at -0.3 dBFS so that the added noise never saturates. The 1-bit case is extreme, but it's in no way more special than e.g. the 16-bit case.
mystran
KVRAF

4837 posts since 11 Feb, 2006, from Helsinki, Finland
Aleksey Vaneev wrote:Well, I have not tried to implement 1-bit noise-shaped dither myself, but I do not see how it's impossible - you just keep the input signal at e.g. -6 dBFS and thus the noise at a similar level. 16-bit noise-shaped dithering also creates saturation - it's just rare, as masters are usually kept at -0.3 dBFS so that the added noise never saturates. The 1-bit case is extreme, but it's in no way more special than e.g. the 16-bit case.

First of all, for effective dither you need TPDF with a magnitude of 2 quantisation steps (e.g. [0, 2]), and that won't fit in 1 bit even without any signal. The other "special" thing about the 1-bit case is that you don't just reach saturation, you're constantly stuck in it, because you're always in saturation one way or another. In fact, your filter (assuming some sort of high-pass response at least) cannot even decay below a full-scale limit-cycle oscillation!

In practice delta-sigma converters and class-D amplifiers certainly work, there's no question about that... but the point is that the theoretical treatment is a mess, and it's not all that straightforward to come up with any hard guarantees about the resulting quality. In comparison, with adequate headroom in a multi-bit system you just add dither, then you have the original signal plus uncorrelated additive noise; you can then take the error and put an LTI filter into the loop, and you'll end up with the noise shaped to match the filter response.
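The multi-bit, adequate-headroom case mystran describes can be sketched as a first-order error-feedback shaper (the 16-bit step is an assumed example, and real shapers typically use higher-order filters):

```python
import random

STEP = 1.0 / 32768.0  # one quantisation step at 16 bits (assumed)

def noise_shape(xs):
    # first-order error feedback: the previous quantisation error is
    # subtracted from the input before quantising, which pushes the
    # noise toward high frequencies; TPDF dither inside the quantiser
    # decorrelates the error, as the post notes
    out, err = [], 0.0
    for x in xs:
        v = x - err
        d = (random.random() - random.random()) * STEP  # TPDF dither
        y = round((v + d) / STEP) * STEP
        err = y - v  # error fed back into the next sample
        out.append(y)
    return out

inp = [0.1, 0.2, 0.15, 0.05]
shaped = noise_shape(inp)
```

With headroom, the loop error stays bounded; in the 1-bit case it cannot, which is the saturation problem discussed above.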
earlevel
KVRist

449 posts since 4 Apr, 2010
Nowhk wrote:I've seen recently...The bit depth has no impact on the frequency response, which is constrained by the sample rate.

Let me make an example to show you my doubt.

...The available samples for sampling that signal within the SNR are limited now (I'm near the noise floor), so most of them will be "truncated" to 0.

...So, is it correct to state that "The bit depth has no impact on the frequency response"? Because if I reach a point where the few samples can't recreate the signal (even if I'm within the available dynamic range), that statement becomes weird.

Hi Nowhk,

Quantization introduces error in the rendering of a signal—it doesn't change the signal itself, so no change in frequency response.
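earlevel's "signal plus error" framing, in numbers (the 3-bit quantiser is an arbitrary illustrative choice):

```python
import math

BITS = 3
LEVELS = 2 ** (BITS - 1)  # 4 levels per polarity at 3 bits

def quantize(x):
    # round to the nearest representable level
    return round(x * LEVELS) / LEVELS

N = 16
signal = [math.sin(2 * math.pi * n / N) for n in range(N)]
quantized = [quantize(x) for x in signal]
error = [q - s for q, s in zip(quantized, signal)]
# the rendering is exactly the original signal plus a bounded error term
```

The decomposition is exact by construction: the underlying signal is unchanged, and all the damage lives in the error term.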

Say I have a front door with a peephole lens, facing my driveway. A curved lens distorts what I might see, but close enough. But let's say I live on a planet without glass, so we use a crystalline structure that can't be ground smooth, but if you cleave enough flat facets on it, it works out pretty much the same. Through it, I see my friend pull up in a red Corvette (because this planet is still pretty cool).

My peephole breaks one day, slammed the door too hard, I'm between paychecks and replace it with a cheapie for now, with few facets and a lot of optical error. I'm expecting my friend, hear some rumble, look out the peephole and see a red blob. It could be a red Yaris, who knows.

Assuming it's indeed my friend, is it still a Corvette? Yes, my distorted view didn't change that—I'm viewing a red Corvette plus error.

It's all point of view, sure. You could consider a horrifically bad DAC as a signal processor, and comment about how the signal is now different after running through it. But a DAC's purpose is accuracy, so it makes sense to separate its error from the signal. Similarly, you could imagine a DC signal and a dynamically changing bit depth as potentially changing that 0Hz to a different frequency. But that's not what we're talking about when we talk about bit depth, so you'd be gaming the explanation.
Nowhk
KVRian

743 posts since 2 Oct, 2013
earlevel wrote:Quantization introduces error in the rendering of a signal—it doesn't change the signal itself, so no change in frequency response.

I'm trying to understand whether we should instead say "it doesn't change the signal itself in most/real cases", not always. I'm reasoning theoretically.

Here's an attempt at doing a sinc interpolation after a bit reducer (i.e. mapping -1.0/1.0 to N bits): -10 dB, 2 Hz, sample rate 16 samples/s, bit depth 2 bits:

test_sync.png

Visually, it seems to me that the input signal (2 Hz) has vanished. But I could be wrong.
So quantization (probably) does mess with the original 2 Hz signal's frequency.

Of course it's an extreme example, a limit case, and it wouldn't happen in any recent/decent/real DAC/situation (as you said: a DAC's purpose is accuracy, so it makes sense to separate its error from the signal).

But that wasn't my question; it was more (as I said) theoretical, not practical.