Bit Depth vs Frequency Response - Reflection

DSP, Plugin and Host development discussion.

Post

Nowhk wrote:
test_sync.png
Visually, it seems to me that the input signal (2 Hz) has vanished. But I could be wrong.
So quantization (probably) does mess with the original 2 Hz signal's frequency.
You are NOT doing (what everyone else calls) bit-reduction here. Bit-reduction is when you reduce the number of bits used to represent a sample. Instead you are (logically) decimating the signal to 1/4th of the sampling rate and then zero-stuffing it back to the original rate. The decimation step will cause anything above 1/8th of the sampling rate to alias down, but assuming the signal bandwidth is low enough to fit, it can be reconstructed by sinc-interpolation... except you need the sinc-cutoff at 1/8th of the sampling rate (instead of half sampling-rate like your picture above) because you are logically upsampling.

But again.. none of this has anything to do with bit-depth and the sooner you admit that the sooner you'll get somewhere.
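For anyone who wants to check the decimate-and-zero-stuff description above numerically, here is a minimal Python sketch (an illustration of the idea, not code from the thread): keeping every 4th sample and zero-stuffing leaves the original line in place (scaled by 1/4) but creates spectral images around multiples of fs/4, which is why the reconstruction sinc must cut off at fs/8 rather than fs/2.

```python
import cmath
import math

def dft_mag(x):
    """Magnitudes of the DFT of a real sequence x."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

N = 64                                  # one analysis frame
f = 3                                   # 3 cycles per frame, safely below N/8
x = [math.sin(2 * math.pi * f * n / N) for n in range(N)]

# "decimate to 1/4th of the sampling rate and zero-stuff back":
# keep every 4th sample, set the rest to zero
y = [x[n] if n % 4 == 0 else 0.0 for n in range(N)]

mags = dft_mag(y)
# the original line survives at bin 3 (scaled by 1/4), but images appear
# around bins 16, 32 and 48 -- a lowpass at bin 8 (= fs/8) removes them
```

Because the input stays below N/8, nothing aliases and the images land cleanly at 16 ± 3, 32 ± 3 and 48 ± 3; a lowpass at bin 8 recovers the original sine (times 1/4).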

Post

mystran wrote:You are NOT doing (what everyone else calls) bit-reduction here. Bit-reduction is when you reduce the number of bits used to represent a sample. Instead you are (logically) decimating the signal to 1/4th of the sampling rate and then zero-stuffing it back to the original rate. The decimation step will cause anything above 1/8th of the sampling rate to alias down, but assuming the signal bandwidth is low enough to fit, it can be reconstructed by sinc-interpolation... except you need the sinc-cutoff at 1/8th of the sampling rate (instead of half sampling-rate like your picture above) because you are logically upsampling.

But again.. none of this has anything to do with bit-depth and the sooner you admit that the sooner you'll get somewhere.
:o Probably the schematic above isn't clear.
I'm not touching sampling rate at all.

To avoid misunderstanding, let me re-explain (even if I'll look like an idiot :dog: )
Bit reducer:

Code: Select all

// force float32 rounding: the offset is presumably meant to be 3 * 2^(23 - nBit)
lowBitNumber = pow(2, 23 - nBit) * 3;
lowBitSignal = signal + lowBitNumber - lowBitNumber;
Sample rate = 16 Hz (fixed)
Blue signal = 2 Hz, -10 dB (input "analog" signal)
Red signal = sinc interpolation after sampling the blue signal (output)

Bit Depth 8 bit:
001-8bit.png
Bit Depth 4 bit:
002-4bit.png
Bit Depth 3 bit:
003-3bit.png
Bit Depth 2 bit:

(the image above).

Is it the same as what you thought? Isn't it the bit depth that changes the (red) signal here, due to the range it can grant the input signal when sampling?

"I know that I know nothing"

Post

Nowhk wrote:To avoid misunderstanding, let me re-explain (even if I'll look like an idiot :dog: )
Ok, so these pictures look more like what you'd expect from bit-depth reduction. What you are observing is the harmonic distortion that results when you do not dither. If you now add TPDF dither (ie. add two uniform random numbers, each scaled to one quantization step, before doing the bit-reduction; note that you really want at least a reasonably high quality PRNG for this), you will instead see some noise, although the result will now be closer to the original signal on average.

If you then look at the spectrum (eg. using an FFT analyzer; use a long enough window) of the bit-reduced signal with and without dither, you'll notice that the version without dither will have harmonic distortion (the bulk of which will alias, creating an uneven "noise floor" which looks more or less like a bunch of random peaks) while the version with dither will have just the original signal frequencies, together with a flat, white noise floor. With fewer bits, the noise floor will be higher, but otherwise the signal will be preserved as-is.
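The recipe above can be sketched in a few lines of Python (an illustrative sketch, not code from the thread; the function names are mine). The dither is the sum of two uniform random numbers, each one quantization step wide, added before rounding (non-subtractive dither):

```python
import random

def quantize(x, bits):
    """Round x in [-1, 1] to the nearest level of a 2**bits-level midtread grid."""
    step = 2.0 / (2 ** bits)
    return round(x / step) * step

def quantize_tpdf(x, bits, rng=random.random):
    """Quantize with TPDF dither: two uniforms, each spanning one step,
    summed and added to the input before rounding."""
    step = 2.0 / (2 ** bits)
    d = (rng() - 0.5) * step + (rng() - 0.5) * step
    return quantize(x + d, bits)
```

Without dither the error is bounded by half a step but correlated with the signal; with TPDF dither the instantaneous error can reach 1.5 steps, but its mean is zero regardless of where the input sits between levels.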

Post

Nowhk wrote: Visually, it seems to me that the input signal (2 Hz) has vanished. But I could be wrong.
So quantization (probably) does mess with the original 2 Hz signal's frequency.
Again—to put it a different way, you're arguing semantics. When we talk about quantizing, we're talking about fitting a value in an available (or allotted) number of bits. Because we need to (or want to). The cost for less space is more error. In that sense, the signal that you show is still the original signal, plus error that makes it appear to no longer be that original signal.

In other words, if you're doing this as a processing block, OK, it does what it does. But when we're talking about the effects of sample rate and bit depth, that's implying a very specific thing—how well the original signal can be encoded and reproduced. And there we are talking about bandwidth and quantization error.

That's not to say your discussion doesn't have merits in pondering what the effects are in constraining quantization to the point that you're really discussing a certain kind of non-linear processing (a pretty unpleasant one). But that's an entirely different thing than questioning whether it's correct to say "The bit depth has no impact on the frequency response", while taking the claim out of context. ;-)
My audio DSP blog: earlevel.com

Post

mystran wrote:
Aleksey Vaneev wrote: Well, I have not tried to implement 1-bit noise-shaped dither myself, but I do not see how it's impossible - you just keep the input signal e.g. at -6 dBFS and thus the noise at a similar level. 16-bit noise-shaped dithering also creates saturation - it's just rare, as masters are usually kept at -0.3 dBFS so that the added noise never saturates. The 1-bit case is extreme, but it's in no way more special than e.g. the 16-bit case.
First of all, for effective dither you need TPDF with magnitude of 2 quantisation steps (eg. [0, 2]) and that won't fit in 1-bit even without any signal. The other "special" thing about 1-bit case is that you don't just reach saturation, you're constantly stuck with it, because you're always in saturation one way. In fact, your filter (assuming some sort of high-pass response at least) cannot even decay below a full-scale limit-cycle oscillation!
Well, you are not exactly correct in your assumptions. With noise-shaping the TPDF dither's loudness can be reduced without negative impact. For example, if your noise-shaping curve boosts the higher-frequency noise floor by 12 dB, it's safe to reduce the TPDF's loudness by 6 dB. I've checked it; it works that way without creating distortion harmonics.

Post

Aleksey Vaneev wrote: Well, you are not exactly correct in your assumptions. With noise-shaping the TPDF dither's loudness can be reduced without negative impact. For example, if your noise-shaping curve boosts the higher-frequency noise floor by 12 dB, it's safe to reduce the TPDF's loudness by 6 dB. I've checked it; it works that way without creating distortion harmonics.
If I'm not mistaken, unit-step magnitude RPDF is enough to prevent harmonic distortion anyway and the reason we want to use TPDF is simply to prevent the signal from modulating the noise amplitude... so it actually seems plausible that you might not even need TPDF with noise-shaping if that takes care of the modulation problem. Who knows.

Personally I'm not a huge fan of noise-shaping 44.1kHz/48kHz audio anyway (ie. I played around with it at some point and concluded that as far as my ears are concerned it's just waste of time), so I admit I haven't really tried to optimise the dither.

Post

mystran wrote:Personally I'm not a huge fan of noise-shaping 44.1kHz/48kHz audio anyway (ie. I played around with it at some point and concluded that as far as my ears are concerned it's just waste of time), so I admit I haven't really tried to optimise the dither.
Agreed...not trying to start a fracas, just adding observations...no exhaustive study, just some thoughts through the course of past experiments...

1. Noise-shaped dither sounds crappy on some sources. Don't assume it's always better than TPDF.

2. TPDF dither always sounds fine, as expected.

3. Often (likely, for most multi-instrument songs, single instrument polyphonic songs...er, and even just most instruments), no dither at all sounds just as good as TPDF.

Ground rules for these observations:

I'm talking primarily about 16-bit. (It's silly to talk about dithered 24-bit—if you think otherwise, let's discuss physics—and we rarely use other sizes. If you need 8- or 12-bit, do what you want, I don't care, I'm just talking about formats that most people deal with as an end product.)

I left "golden ears" out of the equation by nulling with the original in the digital domain and adding enough gain to easily savor the quality of the error. The sound of the error is all that matters (the amplitude is a given)—it either sounds bothersome or it doesn't.

Some might submit that #3 is only true when there is sufficient noise already in the recording ("self dither"). Yes, that's often the case, but not what I'm getting at. I found that normal music (even without drums and cymbals) seemed to be chaotic enough, using the cleanest of sources, that the error lacked enough correlation to be perceived as anything other than unmodulated white noise throughout the song—nothing that stood out, even at artificially high levels. (I wanted to test this more, using mixes from big-time pros, but found they get cold feet about having their stuff looked at that closely, even if you promise that no one else will ever know, and the meaning of the test has no implications about the quality of the sources.)

(A reminder for anyone wanting to experiment: Don't fall into the trap of, say, dithering to 8-bit so the error is loud enough to hear easily, and making the assumption that dithering to 16-bit is the same thing, but 48 dB quieter—this is not a linear process, you can only determine the effects for 16-bit by doing 16-bit. Then null and add gain to examine the error qualities.)
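The null-and-gain procedure described above can be sketched like this (an illustrative sketch, not code from the thread; the 60 dB gain and the function names are hypothetical choices): quantize to 16-bit, subtract the original to isolate the error, and boost the residue to an easily audible level.

```python
import math

def to_16bit(x):
    """Round a [-1, 1] float to the nearest 16-bit level (midtread grid)."""
    step = 2.0 / (1 << 16)
    return round(x / step) * step

def null_test_error(samples, gain_db=60.0):
    """Quantize, null against the original, and boost the residue for auditioning."""
    gain = 10.0 ** (gain_db / 20.0)
    return [gain * (to_16bit(s) - s) for s in samples]

# e.g. the 16-bit quantization error of a 997 Hz sine at 44.1 kHz, boosted 60 dB
sine = [0.5 * math.sin(2 * math.pi * 997 * n / 44100) for n in range(44100)]
residue = null_test_error(sine)
```

Auditioning `residue` (rather than the quantized signal) is what removes "golden ears" from the equation: only the character of the error is left to judge.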
My audio DSP blog: earlevel.com

Post

earlevel wrote:
Nowhk wrote: Visually, it seems to me that the input signal (2 Hz) has vanished. But I could be wrong.
So quantization (probably) does mess with the original 2 Hz signal's frequency.
Again—to put it a different way, you're arguing semantics. When we talk about quantizing, we're talking about fitting a value in an available (or allotted) number of bits. Because we need to (or want to). The cost for less space is more error. In that sense, the signal that you show is still the original signal, plus error that makes it appear to no longer be that original signal.

In other words, if you're doing this as a processing block, OK, it does what it does. But when we're talking about the effects of sample rate and bit depth, that's implying a very specific thing—how well the original signal can be encoded and reproduced. And there we are talking about bandwidth and quantization error.

That's not to say your discussion doesn't have merits in pondering what the effects are in constraining quantization to the point that you're really discussing a certain kind of non-linear processing (a pretty unpleasant one). But that's an entirely different thing than questioning whether it's correct to say "The bit depth has no impact on the frequency response", while taking the claim out of context. ;-)
Sorry for the big delay, busy with work :cry:

Yes, I was exaggerating the example deliberately. But isn't it the same for sample rate? I mean, both will affect the freq response of the signal, not only the sample rate.

The statement "The bit depth has no impact on the frequency response" is the same as "The sample rate has no impact on the frequency response", if you consider BOTH above a certain limit.
Otherwise, BOTH would affect the freq response :O

I don't understand why it is attributed only to sample rate when (as shown in the graph I posted, for example) it also depends on bit depth :ud:

Post

Nowhk wrote: Yes, I was exaggerating the example deliberately. But isn't it the same for sample rate? I mean, both will affect the freq response of the signal, not only the sample rate.
Sampling rate affects the available bandwidth. Bit-width affects the available dynamic range.

The point of view where neither affects "frequency response" is certainly valid, provided that your bandwidth is sufficient to fit all the frequencies of interest and your dynamic range is large enough that the noise floor can be considered negligible.
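To put numbers on "bit-width affects the available dynamic range": the classic result is that an ideal N-bit quantizer gives roughly 6.02·N + 1.76 dB of SNR for a full-scale sine, with quantization-noise power step²/12. A quick sketch (an illustration, not code from the thread):

```python
import math

def quantization_snr_db(bits):
    """SNR of an ideal midtread quantizer for a full-scale (amplitude 1) sine:
    signal power 1/2 over uniform quantization-noise power step**2 / 12."""
    step = 2.0 / (2 ** bits)
    return 10.0 * math.log10(0.5 / (step ** 2 / 12.0))
```

Each extra bit buys about 6 dB of dynamic range; note that the sample rate never enters this formula, just as the bit depth never enters the Nyquist bandwidth fs/2.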

Post

Nowhk wrote: I don't understand why it is attributed only to sample rate when (as shown in the graph I posted, for example) it also depends on bit depth :ud:
Gee, and I thought my Corvette-in-the-driveway analogy was so compelling :lol:

You're purposely reducing the accuracy till you can no longer represent your signal without overwhelming error, then you're looking at the result as if the error isn't material. (Don't make me talk about Laurel/Yanny :lol: )

OR, you’ve created a bit crushing processor, an audio effect, which can affect frequency. In that case, you’re not producing error, you’re producing an end result—I’m fine with describing that as frequency-altering, even if it’s heavily frequency dependent.

But say you don’t like those two choices, and insist on the point of view that, given horrifically bad converters with tiny bit-depth (accepted that you are talking more about theory than reality), frequency can be altered.

NOW: Having that point of view, please convey to your students, if you were teaching digital audio theory, exactly what the frequency effects of quantization are. That is, we know precisely what the frequency response is for a given fixed sample rate—we can reproduce up to half the sample rate with a nice flat response (with some allowance for reality). And for bit depth, we can clearly state the dynamic range and error per bit of sample size.

Do you see how inappropriate that view is when describing a digital audio encoding/decoding system—DAC and ADC—and sampling theory? And the only purpose of that view is to describe the same thing that is already explained by the huge error of minuscule bit depths. Why have two explanations, one worth talking about only at tiny bit depths—which we have no practical use for—while the other (traditional view) comfortably describes any bit depth? :wink:
My audio DSP blog: earlevel.com

Post

Given that you have some original signal -- as you turn down the level you get less of the signal and more of the quantization noise.

Then you start talking about the frequency response, but of what? The result can now be considered a mix of two separate signals, one being the original, which should keep its original frequency response, and the error signal, the quantization noise, which does not in any way need to have the same frequency response.

If then, by "frequency response" you mean of the resulting "mixed" signal, then YES, absolutely, a lower bit depth changes it, up to the point where it's basically just as much (or more) error as signal.

Post

But here you are discussing the "original signal" + the "noise added" by the bit depth. That's not what I meant.

I meant that a very low bit depth not only affects the quantity of noise added, but also the "original signal" itself (of course only at very low bit depths; mine is not a scientific process/analysis, it's a generic statement).
Just as a lower sample rate will affect (reduce, alias, for example) the "original signal".

If I look at this viewtopic.php?f=33&t=504249&start=15#p7074425, I would say that the "original signal" content has been messed up (the 2 Hz frequency has vanished).
I get different content (even isolating the noise).
Just as when I reduce the sample rate: I reduce the "original signal" content :D

In the case of bit depth, yeah: the "different content" comes from the noise added, sure! But that's not my point :D

Post

Nowhk wrote: But here you are discussing the "original signal" + the "noise added" by the bit depth. That's not what I meant.

I meant that a very low bit depth not only affects the quantity of noise added, but also the "original signal" itself (of course only at very low bit depths; mine is not a scientific process/analysis, it's a generic statement).
Just as a lower sample rate will affect (reduce, alias, for example) the "original signal".
error = quantize(signal) - signal <-> quantize(signal) = signal + error

The "error" in this case is known as "quantization noise".
If I look at this viewtopic.php?f=33&t=504249&start=15#p7074425, I would say that the "original signal" content has been messed up (the 2 Hz frequency has vanished).
If you take FFT of the resulting signal, you'll realize that the original signal hasn't vanished anywhere. It just looks wonky because of the additive error.
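This claim is easy to verify with a small DFT, using the thread's own numbers (a 2 Hz sine at a 16 Hz sample rate, crushed to 3 bits; the code is an illustrative sketch, not from the thread):

```python
import cmath
import math

def dft_mag(x):
    """Magnitudes of the DFT of a real sequence x."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) for k in range(N)]

fs, f, bits = 16, 2, 3
x = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]   # one second

step = 2.0 / (2 ** bits)
q = [round(v / step) * step for v in x]                       # undithered quantize

mags = dft_mag(q)
# bin 2 (= 2 Hz) still towers over everything else; the quantization error
# shows up only as much smaller energy in the remaining bins
```

So the 2 Hz component has not vanished at all; undithered quantization merely adds harmonic-distortion components around it.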

Post

mystran wrote:error = quantize(signal) - signal <-> quantize(signal) = signal + error

The "error" in this case is known as "quantization noise".
...
If you take FFT of the resulting signal, you'll realize that the original signal hasn't vanished anywhere. It just looks wonky because of the additive error.
That's always been my understanding since I started learning DSP and signals in general.
BUT (a bit of a story)...

The other day I was programming a mini effect, using IPlug/C++,
which basically takes the sample magnitudes and moves them randomly along the Y axis before playback, using audio-rate modulation for some groups of samples.

Then I thought: how could this generate a "signal + quantization noise" where both the signal and the noise vary over time, and not a "signal + quantization noise" where only the quantization noise varies over time?

I'm basically doing what would happen with a bad conversion (very low bit depth) that drastically messes with the magnitude of each sample.
I'm considerably changing each sample's magnitude along the Y axis, and in fact the whole sound changes, not only the quantization noise. So it's not really "signal + variable quantization noise" (but a very different signal).

So I realized: doesn't drastically changing the bit depth also affect the signal (content) itself, just as reducing the sample rate does (in the hypothesis of eliminating the noise above it, and being able to listen to it)?
I started to make some visual diagrams, like the one above, and decided to open a topic like this.

Then: if calculating the FFT of the signal in that diagram returns a "signal x + error", why wouldn't moving sample magnitudes randomly along the Y axis return the same "signal x + error"? Because signal x really seems to change, not only the noise...

I haven't touched the sample rate. Which relationship am I missing?

Post

Nowhk wrote: Then: if calculating the FFT of the signal in that diagram returns a "signal x + error", why wouldn't moving sample magnitudes randomly along the Y axis return the same "signal x + error"? Because signal x really seems to change, not only the noise...
The "error" is signal dependent unless the signal has sufficient broadband component (read: is noisy enough). Such "sufficient broadband component" is guaranteed when you add dither (although it's also possible for a signal to already be noisy enough to begin with), at which point the quantisation error will be broadband noise. From the engineering point of view, failure to dither properly is essentially a bug and that's why your result is (again from the engineering point of view) garbage. Throw in the dither and all the theory will suddenly make sense.
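A concrete way to see this signal-dependence (a toy example of my own, not code from the thread): hold the input at a constant level that sits between two 3-bit levels. Without dither the error is a fixed, fully signal-correlated offset; with TPDF dither the individual samples are still wrong, but their average converges to the input.

```python
import random

def quantize(x, step):
    """Round to the nearest multiple of step."""
    return round(x / step) * step

random.seed(1)
step = 2.0 / (2 ** 3)          # one 3-bit quantization step = 0.25
x = 0.1                        # sits between the levels 0.0 and 0.25

# undithered: the error is identical every sample -- pure signal-dependent offset
plain_error = quantize(x, step) - x            # always -0.1

# TPDF-dithered: each sample lands on some nearby level, but the mean error ~ 0
outs = [quantize(x + (random.random() - 0.5) * step
                   + (random.random() - 0.5) * step, step)
        for _ in range(20000)]
mean_error = sum(outs) / len(outs) - x
```

This is the "failure to dither is a bug" point in miniature: the undithered quantizer turns a DC input into a biased output, while the dithered one trades that bias for broadband noise.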
