Bit Depth vs Frequency Response - Reflection

Nowhk
KVRian

749 posts since 2 Oct, 2013
mystran wrote:
Nowhk wrote:Then: if calculating the FFT of the signal in that diagram returns a "signal x + error", why doesn't moving the samples' magnitudes casually along the Y axis return the same "signal x + error"? Because signal x seems to change, really. Not only the noise...

The "error" is signal dependent unless the signal has sufficient broadband component (read: is noisy enough). Such "sufficient broadband component" is guaranteed when you add dither (although it's also possible for a signal to already be noisy enough to begin with), at which point the quantisation error will be broadband noise. From the engineering point of view, failure to dither properly is essentially a bug and that's why your result is (again from the engineering point of view) garbage. Throw in the dither and all the theory will suddenly make sense.

This doesn't entirely answer my question.

Let me make a simpler example.
Take a collection of N samples, at a sample rate that can represent the frequency of the signal I'm sampling.
Let's say this collection of samples expresses a SINEX (where X is the frequency).
Once I change the magnitude of each sample a bit, I'll get the same SINEX + NOISE.
Change each sample of the collection a little bit again, and I'm still getting SINEX + more noise (a different one).
And so on.

WHEN will it happen that the magnitude deviation of some samples is so huge that SINEX disappears and is replaced by another SINEY (i.e. another frequency) + noise?

It can't be that there is ALWAYS that SINEX + some noise, whatever sample magnitudes I place within my collection of samples at the same sample rate.
Because that would mean that whatever collection of samples I use will ALWAYS express SINEX + noise.
But I know that such a collection can simply express SINEZ, maybe without noise (which has no relationship with SINEX and its noise).

So, what's the barrier where SINEX + noise becomes another SINEY + noise, simply by moving samples? There should be a relationship between the samples, otherwise the difference between two random collections of samples would only be "noise" (keeping the same SINEX "under the hood").

mystran
KVRAF

4948 posts since 11 Feb, 2006, from Helsinki, Finland
Nowhk wrote:WHEN will it happen that the magnitude deviation of some samples is so huge that SINEX disappears and is replaced by another SINEY (i.e. another frequency) + noise?

Never... unless your "noise" happens to look like sin(y). Given the nature of "random" it is certainly possible for a random process to produce such "noise", but the actual probability is vanishingly small, to the point where we can assume it never actually happens. Note that this is the practical answer, because the pedantic answer is that you can't have time and frequency at the same time.

But... if you continue to randomly perturb the samples, then the signal-to-noise ratio will continue to fall until it gets low enough that it is no longer possible to reliably identify sin(x) and you effectively have just noise.
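This can be sketched numerically (an illustrative Python/numpy snippet, not code from the thread; the frequency, block size, and noise levels are arbitrary choices): keep adding ever-larger random perturbations to a sine and its FFT peak sinks further into the noise floor until it can no longer be identified.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1024
t = np.arange(n)
x = np.sin(2 * np.pi * 8 * t / n)   # "sin(x)": 8 cycles, exactly on FFT bin 8

def peak_over_floor(sig):
    """Ratio of the sine's FFT bin to the median of all other bins."""
    mag = np.abs(np.fft.rfft(sig))
    return mag[8] / np.median(np.delete(mag, 8))

# Increasingly large random perturbations: the peak-to-floor ratio keeps
# falling until the sine is effectively buried in the noise.
ratios = [peak_over_floor(x + amp * rng.standard_normal(n))
          for amp in (0.1, 1.0, 10.0, 100.0)]
print(ratios)
```

With small perturbations the sine's bin towers over the floor; at the largest perturbation level the "peak" is no longer distinguishable from an ordinary noise bin.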

If you instead specifically pick permutations towards sin(y) then what you really get is some sort of cross-fade between the two frequencies, where (in the short-time sense) you will have both frequencies with varying amplitudes.
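The cross-fade case can be sketched the same way (again an illustrative numpy snippet; the bin numbers 64 and 96 are made-up values for the demo): linearly fading from sin(x) to sin(y) within one analysis block produces a spectrum with both frequencies present at once.

```python
import numpy as np

n = 4096
t = np.arange(n)
w = t / (n - 1)                               # linear cross-fade ramp, 0 -> 1
x = (1 - w) * np.sin(2 * np.pi * 64 * t / n)  # "sin(x)": fades out, FFT bin 64
y = w * np.sin(2 * np.pi * 96 * t / n)        # "sin(y)": fades in, FFT bin 96
mag = np.abs(np.fft.rfft(x + y))

# Two clear peaks, each at roughly half the level a steady full-amplitude
# sine would give, far above the rest of the spectrum.
print(mag[64], mag[96])
```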
Aleksey Vaneev
KVRAF

3486 posts since 7 Sep, 2002
mystran wrote:But.. if you continue to randomly perturb the samples, then the signal-to-noise ratio will continue to fall until it gets low enough that it is no longer possible to reliably identify sin(x) and you effectively have just noise.

I'll side-track your discussion a bit. Just wanted to give some input. The signal-to-noise ratio is a very arbitrary, or I would say an "impulse", concept. If you perform statistical noise estimation like what happens in an FFT, the signal-to-noise ratio goes to infinity as the block size goes to infinity (given the duration of the signal also goes to infinity). So, in practice, when you assume a quiet sine signal vanishes at a 2048-point FFT, it can be detected again at a 4096-point FFT. It's tricky to understand, but when you add dither noise, bit reduction stops working as plain bit reduction (with plain bit reduction, if your signal falls below the least significant bit, the signal vanishes).
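The "detection improves with block size" point can be sketched like this (an illustrative numpy snippet; the sine amplitude, noise level, and FFT lengths are arbitrary demo values): for a stationary sine, the on-bin FFT peak grows like N while each noise bin grows like sqrt(N), so the per-bin signal-to-noise ratio gains roughly 3 dB per doubling of the FFT size.

```python
import numpy as np

rng = np.random.default_rng(2)

def peak_to_floor(n):
    """FFT peak-to-noise-floor ratio of a quiet sine in loud noise."""
    t = np.arange(n)
    x = 0.1 * np.sin(2 * np.pi * t / 16)   # quiet sine, period 16 samples
    noisy = x + rng.standard_normal(n)     # noise much louder than the sine
    mag = np.abs(np.fft.rfft(noisy))
    k = n // 16                            # the sine lands exactly on this bin
    return mag[k] / np.median(np.delete(mag, k))

# The same quiet sine: barely visible at a short FFT, obvious at a long one.
small, large = peak_to_floor(2048), peak_to_floor(65536)
print(small, large)
```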
Nowhk
KVRian

749 posts since 2 Oct, 2013
mystran wrote:If you instead specifically pick permutations towards sin(y) then what you really get is some sort of cross-fade between the two frequencies, where (in the short-time sense) you will have both frequencies with varying amplitudes.

Got it (I hope).
So the answer is "it can happen but the probability is pretty small" (and the whole case should be considered out of the equation)?

Thus, bit depth will in any case preserve sinX (+ variable noise), even considering the worst real case (i.e. 2 bits), since the introduced perturbations can't be so huge as to push the signal towards another sinY (+ noise), but will still keep it as sinX (+ noise)?
Nowhk
KVRian

749 posts since 2 Oct, 2013
"right" mystran?
mystran
KVRAF

4948 posts since 11 Feb, 2006, from Helsinki, Finland
Nowhk wrote:So the answer is "it can happen but the probability is pretty small" (and the whole case should be considered out of the equation)?

If the signal being quantised has sufficiently "random" broadband content (i.e. dither or something equivalent), then the probability of the distortion starting to look like some other deterministic signal is indistinguishable from zero and we can say with confidence that it will never happen.
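mystran's point about dither can be sketched as follows (an illustrative numpy snippet; the 4-bit step, sine amplitude, and TPDF dither are assumptions for the demo): without dither the quantisation error spectrum is a few spiky harmonics of the input sine (signal-dependent distortion); with triangular dither it flattens into broadband noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
x = 0.4 * np.sin(2 * np.pi * 5 * t / n)  # low-level test sine (arbitrary choice)
step = 2.0 / 2**4                        # 4-bit quantiser step, for illustration

def quantize(sig, dither=False):
    # TPDF dither: sum of two uniforms, 2 LSB peak-to-peak, added before rounding
    d = (rng.uniform(-.5, .5, n) + rng.uniform(-.5, .5, n)) * step if dither else 0.0
    return np.round((sig + d) / step) * step

def crest(err):
    """Peakiness of the error spectrum: max bin over mean bin."""
    mag = np.abs(np.fft.rfft(err))
    return mag.max() / mag.mean()

crest_plain = crest(quantize(x) - x)            # spiky: harmonics of the sine
crest_dither = crest(quantize(x, True) - x)     # flat: broadband noise
print(crest_plain, crest_dither)
```

The undithered error spectrum is dominated by a handful of harmonic spikes (high crest), while the dithered error is spread evenly across all bins (low crest), which is the "quantisation error becomes broadband noise" behaviour described above.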