- KVRian
- 764 posts since 2 Oct, 2013

Hi there,

I recently saw this claim on the wiki about bit depth, which states: "The bit depth limits the signal-to-noise ratio (SNR) of the reconstructed signal to a maximum level determined by quantization error. The bit depth has no impact on the frequency response, which is constrained by the sample rate."

Let me give an example to show you my doubt.

Say I have a 16-bit system sampling a 1 kHz sine wave, using a sample rate sufficient to preserve the signal (2000 Hz is enough in this case, but feel free to use 44.1 kHz if you prefer).

Now, if I play this sine at full scale and look at the resulting spectrum, I see the preserved sine wave at 1 kHz plus the noise floor around -96 dB. That's OK.

Now, let's say I play the same sine, but at -95 dB this time (still within the dynamic range of a 16-bit system, though).

What happens? The available sample values for representing that signal are limited now (I'm near the noise floor), so most of them will be "truncated" to 0.

Is it correct to say that the result (if I look at the spectrum again) is still 1 kHz plus the noise floor around -96 dB? I really doubt that interpolation will keep a 1 kHz sine (if, hypothetically, I could isolate it from the noise). Probably the frequency has changed as well.

Or maybe not! So, is it correct to state that "the bit depth has no impact on the frequency response"? Because if I reach a point where the few remaining samples can't recreate the signal (even though I'm within the available dynamic range), that statement becomes weird.

Hope the reasoning makes sense.

Thanks!
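The scenario is easy to try numerically. Here is a minimal Python/NumPy sketch (my own construction, not anything from the wiki article: a 48 kHz rate chosen so 1 kHz divides it exactly, and a plain mid-tread rounding quantizer with no dither) that quantizes the 1 kHz sine to 16 bits at full scale and at -95 dB, then checks where the spectrum peaks:

```python
import numpy as np

fs, f0 = 48000, 1000            # sample rate and sine frequency (Hz)
n = np.arange(fs)               # one second of samples -> 1 Hz FFT bins

def quantize16(x):
    # Round to the nearest 16-bit step (mid-tread quantizer, no dither).
    return np.round(x * 32767) / 32767

peaks = {}
for db in (0, -95):
    amp = 10 ** (db / 20)
    xq = quantize16(amp * np.sin(2 * np.pi * f0 * n / fs))
    # Index of the largest magnitude bin in the spectrum:
    peaks[db] = int(np.argmax(np.abs(np.fft.rfft(xq))))

print(peaks)
```

In both cases the largest bin is still the 1000 Hz one: at -95 dB the quantized waveform degenerates into a sparse pulse train, but that pulse train is still periodic at 1 kHz, so the fundamental stays put. What changes is the added distortion and noise, not the frequency.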

- KVRAF
- 10563 posts since 8 Mar, 2005, from Utrecht, Holland

Do this thought experiment: swap the least significant bit for the most significant bit. So you have a bit depth of one (extremely bitcrushed) and the DAC still has its own noise floor at -96 dB.

Now what do you think the frequency response of such a setup is? Is there a difference between the frequency and its amplitude? What are the limits for frequencies?

What means are there to produce sounds softer than the single full-scale bit?

We are the KVR collective. Resistance is futile. You will be assimilated.

My MusicCalc is back online!!

- KVRian
- 976 posts since 23 Feb, 2012

What exactly is the question?

IMHO most misunderstandings come from ignoring the bandlimiting aspect of the sampling theorem.

I.e. both the input and the output of a digital system are perfectly continuous.

1 bit at 10 Hz SR does exactly what it promises to offer according to the theorem. The only problem is that 1 bit in particular can't be dithered properly; 2 bits or more are needed for dithering to work. In that sense, 1 bit is a special case.

There are analogue Nyquist filters involved in the process. These are the "means that are there to produce sounds softer (or at a higher level) than the single full-scale bit".

Last edited by FabienTDR on Thu May 03, 2018 9:28 am, edited 5 times in total.

Fabien from Tokyo Dawn Records

Check out my audio processors over at the Tokyo Dawn Labs!

- KVRian
- 764 posts since 2 Oct, 2013

Hi BertKoor! Hope all is well with you! It's been a long time...

Not sure I follow your reasoning.

But are you saying that sampling the same sine wave, but with different amplitudes, will (or rather, "could") change its frequency?

I would expect (for 16 bit) that sampling a full-scale 1 kHz sine wave produces a 1 kHz sine wave + noise, just as a -50 dB 1 kHz sine wave produces a 1 kHz sine wave (with less amplitude) + noise, and the same for a -95 dB 1 kHz sine wave.

But if there are no available sample values below -96.33 dB, I doubt that will happen (since most of them will become 0, and later the interpolation will probably fail).

Or maybe I misunderstand this step, and in fact it samples the whole sine correctly, and -96.33 dB is just (after interpolation) the threshold where we start to discriminate "sound" content (below it, there is the noise created by quantization).

But so the question: what if I input a signal that is "barely" above the last available sample value?

For example, let's say I have this sine (bit depth 4 bits, 1 kHz; even if it's not specified in the image, 2000 Hz or more of sampling rate):

If I look at its spectrum, I think I will see that 1 kHz sine + noise.

But what if I change its amplitude to a value that peaks at 0.25 in that graph?

Most of its samples (in the "body" of the signal) will become 0, due to truncation/quantization.

Will interpolation (later) be able to keep a 1 kHz sine + noise, or will it "fail" and its frequency change as well? That's the question.

I ask, I don't know.

I would try it myself, but I don't know any software/tools that can help me learn this kind of stuff.
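For trying this, a few lines of NumPy are enough — no special audio tool needed. Here is a minimal sketch of the 4-bit case (the numbers are my assumptions for illustration: 7 positive steps for a 4-bit signed range, truncation toward zero, 48 kHz rate, and an amplitude chosen so the peak lands just past the first step):

```python
import numpy as np

fs, f0 = 48000, 1000
t = np.arange(fs) / fs
x = 0.17 * np.sin(2 * np.pi * f0 * t)   # low-level sine: peak ~1.2 quantization steps

steps = 7                               # 4-bit signed -> roughly 7 positive steps
xq = np.trunc(x * steps) / steps        # truncation: round toward zero

zero_fraction = float(np.mean(xq == 0))                 # most samples collapse to 0
peak_hz = int(np.argmax(np.abs(np.fft.rfft(xq))))       # largest bin of the spectrum
print(zero_fraction, peak_hz)
```

Over half of the samples are truncated to exactly 0, yet the spectrum still peaks at the 1000 Hz bin: the surviving samples form a pulse train with the same 1 kHz period, so the fundamental is preserved (buried in distortion products, but at the same frequency).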

- KVRian
- 976 posts since 23 Feb, 2012

Have you seen this already:

https://www.youtube.com/watch?v=cIQ9IXSUzuM

A digital sampling system looks like this:

[bandlimiting (analogue)]->[digital representation]->[bandlimiting (analogue)]

Input and output of such a system are always continuous (free of steps).

I am under the impression that you are ignoring the last element above (the DA converter), and wondering why it doesn't seem to make sense. You can't simply cut the theorem in half and then wonder why the world falls apart.

Fabien from Tokyo Dawn Records

Check out my audio processors over at the Tokyo Dawn Labs!

- KVRAF
- 10563 posts since 8 Mar, 2005, from Utrecht, Holland

Nowhk wrote: But are you saying that sampling the same sine wave, but with different amplitudes, will (or rather, "could") change its frequency?

I didn't mean to suggest anything, just an attempt to provoke you into thinking about the subject yourself.

Nowhk wrote: I would expect (for 16 bit) that sampling a full-scale 1 kHz sine wave produces a 1 kHz sine wave + noise, just as a -50 dB 1 kHz sine wave produces a 1 kHz sine wave (with less amplitude) + noise, and the same for a -95 dB 1 kHz sine wave.

Me too, indeed.

Nowhk wrote: But if there are no available sample values below -96.33 dB, I doubt that will happen (since most of them will become 0, and later the interpolation will probably fail).

[...]

But so the question: what if I input a signal that is "barely" above the last available sample value?

That is the same problem as: how can you print greyscale images on paper with only pitch-black ink? The answer is dithering: add a tiny bit of noise to the signal before the sampling.

Simple truncation would give you this:

and after the damage is done, you cannot undo it.

We are the KVR collective. Resistance is futile. You will be assimilated.

My MusicCalc is back online!!

- KVRAF
- 6857 posts since 16 Feb, 2005

Nowhk wrote: Will interpolation (later) be able to keep a 1 kHz sine + noise, or will it "fail" and its frequency change as well? That's the question.

Most likely the sine tone will be lost in harmonic distortion, as well as in the quantization noise floor. It won't necessarily change frequency or anything; the main problem will be amplitude distortion of the original sine wave. There could also be something like sum and difference frequencies appearing.
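The harmonic-distortion point shows up directly in a spectrum. A minimal sketch (Python/NumPy; the ~-94 dBFS level and the undithered rounding quantizer are my choices, picked so the sine's peak is about 0.66 of one 16-bit step): the tone survives at 1 kHz, but a large 3rd harmonic appears next to it.

```python
import numpy as np

fs, f0 = 48000, 1000
t = np.arange(fs) / fs
x = 2e-5 * np.sin(2 * np.pi * f0 * t)   # ~ -94 dBFS: peak is ~0.66 of one 16-bit step

xq = np.round(x * 32767) / 32767        # undithered 16-bit quantization
S = np.abs(np.fft.rfft(xq))             # 1 Hz bins (1 s of signal)

# The quantized waveform is a square-ish pulse train: fundamental intact,
# but with heavy odd-harmonic distortion relative to it.
ratio_3rd = S[3000] / S[1000]
print(int(np.argmax(S)), ratio_3rd)
```

The spectrum still peaks at 1000 Hz, and the 3rd harmonic at 3000 Hz comes out at a sizeable fraction of the fundamental — exactly the "lost in harmonic distortion" situation, with no frequency shift.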

- KVRian
- 764 posts since 2 Oct, 2013

Of course, dude. That Monty doggy makes me laugh every time, haha! But I'm asking something not covered by that video. I know I'm talking about a bandlimited signal, and in fact I've specified the sample rate. That's not the question.

Nowhk wrote: I would expect (for 16 bit) that sampling a full-scale 1 kHz sine wave produces a 1 kHz sine wave + noise, just as a -50 dB 1 kHz sine wave produces a 1 kHz sine wave (with less amplitude) + noise, and the same for a -95 dB 1 kHz sine wave.

BertKoor wrote: Me too, indeed.

But is that what really happens?

Using Monty's words: there's only one single signal that passes through every sample. Interpolation doesn't draw "whatever it wants".

BUT!

Are you sure this still happens if you truncate most of its samples? I don't mean "move them" a bit due to quantization, but literally truncate them, to 0.

Let's take 1 second of our full-scale 1 kHz sine wave, so 2000 samples.

These values will range from −32,768 to +32,767, at 16 bit.

Now, if I decrease the amplitude drastically (-95 dB, as said in the example), most of the 2000 values will truncate to 0. Let's say (for example) that 1200 samples go to 0 (and only some "peaks" are kept).

Is it still true that this digital signal (with only some usable samples left, due to truncation), once interpolated, keeps that 1 kHz sine (even if awash in the noise)?

How can interpolation keep the original sine, having smashed most of its samples to 0?

Mentally I see lots of noise and the 1 kHz vanished, or at a different frequency. Because even if the sample rate is kept, there is too much lost data that "would describe that signal" in some way.

It's just a puzzlement.

- KVRAF
- 3501 posts since 7 Sep, 2002

It's all pretty complicated to understand. SACD is a 1-bit signal, so even with 1-bit resolution you can have an enormous SNR at chosen frequencies. Noise shaping does that. IMO the sampling theorem is incomplete without taking noise shaping into account. A non-noise-shaped signal is a special case of sampling.

- KVRist
- 145 posts since 12 May, 2012

Nowhk wrote: Now, if I decrease the amplitude drastically (-95 dB, as said in the example), most of the 2000 values will truncate to 0. [...] Is it still true that this digital signal (with only some usable samples left, due to truncation), once interpolated, keeps that 1 kHz sine (even if awash in the noise)?

How can interpolation keep the original sine, having smashed most of its samples to 0?

Yes, it will be the same frequency. What are you smoking? The only way it would change is if it got aliased somehow.

It could truncate every single sample except one per period and still keep the same fundamental frequency. And if that one got truncated once in a while too, it wouldn't change the frequency; it would add noise.
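The extreme case above can be checked directly. A sketch (Python/NumPy, my own construction): keep one unit-height sample per 1 kHz period and zero everything else. The result is still exactly periodic at 1 kHz, so its spectrum has energy only at multiples of 1 kHz (DC plus the fundamental and its harmonics), and nothing at an unrelated frequency like 876 Hz:

```python
import numpy as np

fs, f0 = 48000, 1000
period = fs // f0                 # 48 samples per cycle
x = np.zeros(fs)                  # one second of silence...
x[::period] = 1.0                 # ...with one surviving sample per period

S = np.abs(np.fft.rfft(x))        # 1 Hz bins
# Energy sits only at 0, 1000, 2000, ... Hz; in-between bins are (numerically) zero.
print(S[1000], S[876], S[500])
```

This is the "black magic" resolved: a signal that repeats every 1 ms can only contain frequencies at multiples of 1 kHz, no matter how few nonzero samples each repetition has. An 876 Hz signal would have a different repetition pattern, so the two are never ambiguous over a full second of samples.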

- KVRian
- 764 posts since 2 Oct, 2013

kamalmanzukie wrote: Yes, it will be the same frequency. What are you smoking?

I'm just a DSP noob after all, not an expert.

kamalmanzukie wrote: The only way it would change is if it got aliased somehow.

It could truncate every single sample except one per period and still keep the same fundamental frequency. And if that one got truncated once in a while too, it wouldn't change the frequency; it would add noise.

How can it do that? I mean, it could be noise + any frequency, having only 1 sample. Why does it become 1 kHz and not (for example) 876 Hz? It's also from the sample values that it reckons which frequencies are in the signal.

If only 1 sample out of 2000 has a value, for example, how can it be discriminated from another signal at 876 Hz that would have other samples spread across the 2000?

It seems like black magic to me. Surely it isn't, but I can't see it in my mind...

- KVRian
- 764 posts since 2 Oct, 2013

Let me make two extreme (limit) examples.

Ideal scenario (no additional noise, precise numbers, and so on).

I'm generating a "digital" sine wave, using Sytrus, at 1 kHz.

Values go from 1.0 to -1.0 within the DAW.

Suppose my audio card is 16 bit. So it ranges from −32,768 to +32,767; thus 1.0/-1.0 is going to be mapped to this range.

1/32,768 (0.000030517578125) would be the minimum peak I could reach. Below that, it will truncate to 0.

0.000030517578125 = -90.30899869919435 dB (so, just to note, my previous example above will fail; -95 dB will just truncate all samples to 0, and I'll hear nothing but noise).

So let's say I move my normalized knob within Sytrus to 0.000030517578125.

Here are the two extreme examples:

1 - it could be that not all "peaks" of the signal fall exactly at the sample positions of the sample-rate "grid", so I could bump into a situation where my sine wave is totally lost (since all samples go to 0); it becomes just noise. I lost my original signal/sine wave. Is there any guaranteed threshold where the original signal is always preserved? Or is it "just a guess" for low-amplitude signals, also depending on their frequency?

2 - what if I choose an X signal and a Y signal with the same 0.000030517578125 amplitude that, "casually", get the 100th sample valued, while the others go to 0? How can interpolation discriminate between signal X and signal Y having the same valued sample? Or can this situation not happen?
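The step-size arithmetic above is easy to verify (Python; this just restates the numbers already given in the post):

```python
import math

lsb = 1 / 32768                  # smallest positive 16-bit value, as in the post
lsb_db = 20 * math.log10(lsb)    # convert the amplitude ratio to decibels
print(lsb_db)
```

This prints approximately -90.309 dB, matching the -90.30899869919435 dB figure quoted above, and confirming that a -95 dB peak sits below the smallest undithered 16-bit step.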

- KVRAF
- 10563 posts since 8 Mar, 2005, from Utrecht, Holland

What you describe is simple, and exactly why you should DITHER whenever you reduce bit depth. VSTs are 24-bit, CDs are 16-bit, so you dither in order to reduce truncation artefacts: the black/white image instead of greyscale.

We are the KVR collective. Resistance is futile. You will be assimilated.

My MusicCalc is back online!!

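The dithering advice can be demonstrated in a few lines. A minimal sketch (Python/NumPy; the -100 dBFS level and the TPDF dither built from two uniform noises are my choices): a tone whose peak is under half an LSB vanishes entirely under plain rounding, but survives once TPDF dither is added before rounding.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 48000, 1000
lsb = 1 / 32767                              # one 16-bit quantization step
t = np.arange(fs) / fs
x = 1e-5 * np.sin(2 * np.pi * f0 * t)        # -100 dBFS: peak is under half an LSB

plain = np.round(x / lsb) * lsb              # no dither: every sample rounds to 0

tpdf = (rng.random(fs) - rng.random(fs)) * lsb   # TPDF dither, +/-1 LSB peak
dithered = np.round((x + tpdf) / lsb) * lsb      # dithered rounding keeps the tone

S = np.abs(np.fft.rfft(dithered))            # 1 Hz bins
print(plain.any(), S[1000] > 10 * np.median(S))
```

The undithered version is literally all zeros (the tone is gone for good, "and after the damage is done, you cannot undo it"), while in the dithered version the 1000 Hz bin stands far above the noise floor — the greyscale-from-black-ink trick in one dimension.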
- KVRist
- 145 posts since 12 May, 2012

BertKoor wrote: What you describe is simple, and exactly why you should DITHER whenever you reduce bit depth. VSTs are 24-bit, CDs are 16-bit, so you dither in order to reduce truncation artefacts: the black/white image instead of greyscale.

Which brings me to my favorite quote of the internet:

“The thing is that there is actually no difference between digital and analogue signals – all have a dynamic range set by the ratio between the max level and noise. The difference is that analogue comes with its own noise (caused by the reality of signal in the physical world), whereas any digital representation in math requires us to re-insert the physical random component the math does not provide us.

It is a theoretical requirement of the system; it doesn’t mask the distortion – it removes it… ANY digital data representation of a signal in the real world has artificial certainty (which reality doesn’t) and it has to be removed for the signal to be harmonically accurate – i.e. like a signal in the real world… It’s a deep subject that shows our math is an artificial human approximation of reality – but the approximation has too much certainty. Fascinating implications to that concept…”

- Paul Frindle