I recently saw this claim on the Wikipedia article about bit depth, which states:

*The bit depth limits the signal-to-noise ratio (SNR) of the reconstructed signal to a maximum level determined by quantization error. The bit depth has no impact on the frequency response, which is constrained by the sample rate.*

Let me give an example to illustrate my doubt.

Say I have a system with a bit depth of 16 bits, sampling a 1 kHz sine wave at a sample rate sufficient to preserve the signal (2000 Hz is enough in this case, but feel free to use 44.1 kHz if you prefer).

Now, if I play this sine at full scale and look at the resulting spectrum, I see the preserved sine wave at 1 kHz plus the noise floor around -96 dB. That's fine.

Now, let's say I play the same sine, but at -95 dB this time (still within the dynamic range of a 16-bit system, though).

What happens? The quantization levels available for representing that signal are limited now (I'm near the noise floor), so most of the samples will be "truncated" to 0.
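A quick back-of-the-envelope check of this intuition in Python (my assumptions: -95 dBFS is measured relative to a full-scale peak of 32767, and quantization is round-to-nearest with no dither):

```python
import numpy as np

# Peak amplitude of a -95 dBFS sine, expressed in 16-bit LSB steps
# (full scale = 32767).
peak_lsb = 32767 * 10 ** (-95 / 20)
print(peak_lsb)  # ~0.58 LSB: the peak is below one quantization step

# Fraction of samples of one cycle that round to 0 (|x| < 0.5 LSB)
n = np.arange(44100)
x = peak_lsb * np.sin(2 * np.pi * 1000 * n / 44100)
print(np.mean(np.round(x) == 0))  # roughly two thirds of the samples
```

So without dither, only the samples near the sine's peaks survive as ±1; everything else rounds to 0, which is exactly the "truncation" described above.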

Is it correct to say that the result (if I look at the spectrum again) is still 1 kHz plus the noise floor around -96 dB? I really doubt that interpolation would keep a 1 kHz sine (if, hypothetically, I isolated it from the noise). Probably the frequency has changed as well.
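The spectrum question can be tested directly. Here is a minimal sketch of the experiment I have in mind (assuming a 44.1 kHz sample rate, round-to-nearest quantization with no dither, and -95 dBFS relative to a full-scale peak of 32767):

```python
import numpy as np

fs = 44100            # sample rate (Hz)
f0 = 1000             # test tone frequency (Hz)
n = np.arange(fs)     # one second of samples

# -95 dBFS sine, in 16-bit LSB units (full scale = 32767)
amplitude = 32767 * 10 ** (-95 / 20)   # ~0.58 LSB at the peak
x = amplitude * np.sin(2 * np.pi * f0 * n / fs)

# Quantize to integer sample values (no dither)
xq = np.round(x)

# Locate the strongest spectral component of the quantized signal
spectrum = np.abs(np.fft.rfft(xq))
freqs = np.fft.rfftfreq(len(xq), 1 / fs)
peak_hz = freqs[np.argmax(spectrum[1:]) + 1]   # skip the DC bin
print(peak_hz)  # the strongest component is still at 1000 Hz
```

The quantized waveform is badly distorted (it's basically narrow pulses at the sine's peaks, so odd harmonics of 1 kHz appear), but its fundamental stays at 1 kHz; the frequency itself doesn't move.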

Or maybe not! So, is it correct to state that **the bit depth has no impact on the frequency response**? Because if I reach a point where the few available quantization levels can't recreate the signal (even though I'm within the available dynamic range), that statement becomes questionable.

I hope the reasoning makes sense.

Thanks!