basically, when you only have one bit your sine wave turns into a square wave. No interpolation, no nothing. And not a good bandlimited square wave either, but an actual square wave with harmonics going all the way to Valhalla.
Nowhk wrote: Let me make two extreme (limit) examples.
Ideal scenario (no additional noise, precise number, and so on).
I'm generating a "digital" sine wave, using Sytrus, 1Khz.
Values go from -1.0 to 1.0 within the DAW.
Suppose my audio card is 16 bit. So it will range from -32,768 to +32,767, thus -1.0/1.0 is going to be mapped to this range.
1/32,768 (0.000030517578125) would be the min peak I could reach. Below that, it will truncate to 0.
0.000030517578125 = -90.30899869919435 dB (so, just to notice, my previous example above will fail; -95 dB will just truncate all samples to 0, and I'll hear nothing but noise).
So let's say I move my normalized knob within Sytrus to 0.000030517578125.
Here are the two extreme examples:
1) it could be that none of the peaks of the signal falls exactly on a sample position of the sample-rate "grid", so I could bump into a situation where my sine wave is totally lost (since all samples go to 0); it becomes just noise. I lost my original signal/sine wave. Is there any guaranteed threshold where the original signal is always preserved? Or is it "just a guess" for low-amplitude signals, also depending on their frequency?
2) what if I choose an X signal and a Y signal with the same 0.000030517578125 amplitude that, "by chance", get the 100th sample valued, while the others go to 0? How can interpolation discriminate between two signals X and Y having the same valued sample? Or can't this situation happen?
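(As an aside, the dB figure above is plain arithmetic and can be verified with a couple of lines of Python:)

```python
import math

lsb = 1 / 32768            # smallest nonzero 16-bit magnitude, relative to full scale
db = 20 * math.log10(lsb)  # convert the amplitude ratio to decibels
print(db)                  # about -90.30899869919435 dB
```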
Bit Depth vs Frequency Response - Reflection

kamalmanzukie
 KVRist
 145 posts since 12 May, 2012
Re: Bit Depth vs Frequency Response - Reflection

mystran
 KVRAF
 4981 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Bit Depth vs Frequency Response - Reflection
Aleksey Vaneev wrote: It's all pretty complicated for understanding. SACD is a 1-bit signal, so even with 1-bit resolution you can have enormous SNR at chosen frequencies. Noise-shaping does that. IMO the sampling theorem is incomplete without taking noise-shaping into account. A non-noise-shaped signal is a special case of sampling.
The SACD-style 1-bit situation is actually even more complicated than conventional noise-shaping, which is relatively simple. Unfortunately, as FabienTDR pointed out above, you can't properly dither a 1-bit signal, because you need at least 2 quantisation steps of headroom (for TPDF) to prevent saturation, and the basic LTI treatment of noise-shaping runs into the same problem: now we are no longer shaping just the quantisation noise, but we're also trying to shape the additional distortion that results from saturation of the shaping process itself.
This can certainly produce useful results in practice (otherwise 1-bit converters wouldn't work), but the theoretical treatment of the whole thing gets way more complicated than the relatively simple treatment of noise-shaping with adequate headroom. To be honest I'm not even sure what the current academic consensus on the subject is, since there are plenty of papers arguing back and forth about whether or not this whole thing is actually well-founded.
So my two cents for someone learning about sampling and quantisation is to forget that 1-bit signals are even a possibility and just assume there is enough headroom to fit TPDF dither and whatever noise-shaping (if any) you want to do.
ps. in theory you should also add dither even when you're noise-shaping, in order to decorrelate the error signal, but... yeah... point is... the 1-bit case is nasty.
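To make the "2 quantisation steps of headroom for TPDF" point concrete, here is a minimal Python sketch of a dithered quantiser; the function name and step size are purely illustrative, not anyone's actual implementation:

```python
import random

def quantize_tpdf(x, step=1 / 32768):
    """Round x to the nearest multiple of `step`, after adding TPDF
    dither spanning 2 quantisation steps (the difference of two
    independent uniform variables has a triangular PDF). This
    decorrelates the rounding error from the signal, at the cost
    of 2 steps of headroom."""
    dither = (random.random() - random.random()) * step  # triangular PDF on [-step, +step]
    return round((x + dither) / step) * step
```

Averaging many dithered quantisations of a sub-LSB input recovers the input on average, which is exactly what undithered truncation cannot do.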
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

kamalmanzukie
 KVRist
 145 posts since 12 May, 2012
Re: Bit Depth vs Frequency Response - Reflection
that's a whole other thing than what the OP is trying to grapple with. SACD is that whole other type of overclocked encoding that ends up looking like a model of sound as compression and rarefaction, no?
i think maybe the OP is confused by that Digital Show & Tell where he says the only difference in bit depth is added noise. this is only true if you dither, which means to add noise. without this, the sine becomes more like a square, which is a difference in the frequency response i suppose, but the fundamental stays the same

Nowhk
 KVRian
 780 posts since 2 Oct, 2013
Re: Bit Depth vs Frequency Response - Reflection
I think I've chosen the wrong approach to try fixing my doubt. Maybe it's better to get rid of bit depth and see it from a sample-rate/interpolation point of view:
kamalmanzukie wrote: i think maybe the OP is confused by that Digital Show & Tell where he says the only difference in bit depth is added noise. this is only true if you dither, which means to add noise. without this, the sine becomes more like a square, which is a difference in the frequency response i suppose, but the fundamental stays the same
If I look at the spectrum of that yellow signal, I would see the partials that constitute the green signal + some noise floor (i.e. the partials that constitute the red signal).
Now my question: if I truncate 80% of those samples to 0 (or even keep only 1 point per period of that signal), would the spectrum still show the partials that constitute the green signal + more noise floor?
You said "yes it will". I said "conceptually I can't see how this can be accomplished". Having only 1 point per period seems to me not enough to interpolate a yellow signal that keeps the original partials of the green signal. Nyquist says we need a sample rate double the highest frequency to rebuild a signal, but it doesn't talk about the "quality" of the samples. Is that just irrelevant? So I can keep just 1 sample and the others can do whatever they want?
Is there any tool that can help me show this "visually"? Placing for example an array of samples and drawing the resulting Whittaker–Shannon interpolated signal? I'm trying FlowStone meanwhile...
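In the meantime, a minimal pure-Python sketch of Whittaker–Shannon interpolation may help; this is a naive direct sum for visualisation only, not an efficient or production method:

```python
import math

def sinc(x):
    """Normalised sinc: sin(pi*x)/(pi*x), with sinc(0) = 1."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def sinc_interp(samples, t, fs):
    """Evaluate, at time t (seconds), the bandlimited signal defined
    by `samples` taken at rate fs, via the Whittaker-Shannon sum."""
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

# Example: a 2 Hz sine sampled at 16 Hz, evaluated back at a sample instant
samples = [math.sin(2 * math.pi * 2 * n / 16) for n in range(16)]
print(sinc_interp(samples, 3 / 16, 16))  # matches samples[3], about 0.707107
```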

mystran
 KVRAF
 4981 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Bit Depth vs Frequency Response - Reflection
Nowhk wrote: Now my question: if I truncate 80% of those samples to 0 (or even keep only 1 point per period of that signal), would the spectrum still show the partials that constitute the green signal + more noise floor?
There are really two things going on when you turn an analog signal into a digital signal. First, there is time-quantisation (ie. "sampling") where the (average, in case it's irregular) sampling rate determines the available bandwidth. Then there is amplitude-quantisation (often called just "quantisation") where we round the signal values to the closest representable finite-precision number. Mathematically at least, you can do either of these without doing the other.
As far as truncating a certain number of samples to zero, that means lowering the effective sampling rate, which reduces the amount of bandwidth you can represent. This actually doesn't even introduce any noise as long as the signal fits into the new bandwidth (although in practice it's tricky to reconstruct irregularly sampled signals perfectly) and if it doesn't then you get aliasing.
The point though: do not get confused by the distinction between (amplitude) quantisation and sampling (ie. time-quantisation). Your digital signals are subject to both, but each produces different artifacts.
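A tiny Python sketch of that distinction, using the thread's 2 Hz / 16 Hz example (the coarse step size is arbitrary, just to make the rounding visible):

```python
import math

fs, f = 16, 2
x = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]

# Amplitude-quantisation: round values to a coarse grid; timing untouched
step = 1.0
amp_quantised = [round(v / step) * step for v in x]

# Time-quantisation change: keep every 2nd sample; values untouched
decimated = x[::2]

print(amp_quantised[:4])  # [0.0, 1.0, 1.0, 1.0] -- the sine's top half rounds to 1
print(len(decimated))     # 8 samples: the same signal at an effective 8 Hz rate
```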
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

BertKoor
 KVRAF
 10640 posts since 8 Mar, 2005 from Utrecht, Holland
Re: Bit Depth vs Frequency Response - Reflection
Please Nowhk, do NOT endlessly move goalposts again during discussions about your doubts. You have already changed direction orthogonally once.
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is back online!!

Nowhk
 KVRian
 780 posts since 2 Oct, 2013
Re: Bit Depth vs Frequency Response - Reflection
mystran wrote: As far as truncating a certain number of samples to zero, that means lowering the effective sampling rate, which reduces the amount of bandwidth you can represent.
That's the weirdness I'm reflecting on. So if I lower the amplitude of a signal I'm sampling (given any bit depth), does it automatically reduce the bandwidth of the signal that will be reconstructed? (since more of the signal's samples will be truncated)
BertKoor wrote: Please Nowhk, do NOT endlessly move goalposts again during discussions about your doubts. You have already changed direction orthogonally once.
Sorry if you feel that way. I'm just trying to see the doubt from different (related) points of view. Some things are not linear in understanding.

mystran
 KVRAF
 4981 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Bit Depth vs Frequency Response - Reflection
Nowhk wrote: That's the weirdness I'm reflecting on. So if I lower the amplitude of a signal I'm sampling (given any bit depth), does it automatically reduce the bandwidth of the signal that will be reconstructed?
Well, not really "automatically", because you will also need to modify the reconstruction algorithm to skip the samples that you removed. If you just feed them to a regular reconstruction algorithm that expects them to be valid sample values, then you have a process well described by "garbage in -> garbage out."
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

Nowhk
 KVRian
 780 posts since 2 Oct, 2013
Re: Bit Depth vs Frequency Response - Reflection
mystran wrote: Well, not really "automatically", because you will also need to modify the reconstruction algorithm to skip the samples that you removed. If you just feed them to a regular reconstruction algorithm that expects them to be valid sample values, then you have a process well described by "garbage in -> garbage out."
Why should I change the reconstruction algorithm? I want to observe the result using the same one; otherwise, of course, it could change.
I've built a tool using FlowStone; using sinc interpolation (thanks to martinvicanek for the module), sample rate 16 Hz, 2 Hz signal, and that's the result:
Here's the list of sampled values:
Code: Select all
0
0.707107
1
0.707107
1.22465e-16
-0.707107
-1
-0.707107
0
0.707107
1
0.707107
1.22465e-16
-0.707107
-1
-0.707107
Here's the list of sample values after truncation (note: as an example, I've truncated to 0 every sample whose magnitude is below 1):
Code: Select all
0
0
1
0
0
0
-1
0
0
0
1
0
0
0
-1
0
The question: can I be sure that within this resulting signal there's ALSO the 2 Hz signal I fed in? Yes, no, or it depends?

mystran
 KVRAF
 4981 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Bit Depth vs Frequency Response - Reflection
Nowhk wrote: The question: can I be sure that within this resulting signal there's ALSO the 2 Hz signal I fed in? Yes, no, or it depends?
Assuming you set the samples to zero in a regular fashion, keeping every Nth sample, then this is equivalent to first downsampling by a factor of N (which will result in aliasing if you violate the Nyquist condition) and then zero-padding back to the original rate (which will create mirror-replicates of the baseband spectrum).
None of this has anything to do with quantisation, though.
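The downsample-plus-zero-pad equivalence can be checked numerically with a hand-rolled DFT in pure Python; here a 1 Hz sine at a 16 Hz rate is used so the new 4 Hz effective rate does not alias:

```python
import cmath, math

N = 16
x = [math.sin(2 * math.pi * 1 * n / N) for n in range(N)]  # 1 Hz tone at 16 Hz
y = [v if n % 4 == 0 else 0.0 for n, v in enumerate(x)]    # keep every 4th sample

def dft_mag(sig, k):
    """Magnitude of bin k of the DFT of `sig` (direct O(N^2) sum)."""
    return abs(sum(s * cmath.exp(-2j * math.pi * k * n / len(sig))
                   for n, s in enumerate(sig)))

print([round(dft_mag(x, k), 3) for k in range(9)])
# [0.0, 8.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]  -- only the 1 Hz bin
print([round(dft_mag(y, k), 3) for k in range(9)])
# [0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]  -- 1 Hz survives (scaled by
# 1/4) plus mirror-replicates at 3, 5 and 7 Hz around the new 4 Hz rate
```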
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

Nowhk
 KVRian
 780 posts since 2 Oct, 2013
Re: Bit Depth vs Frequency Response - Reflection
mystran wrote: Assuming you set the samples to zero in a regular fashion, keeping every Nth sample, then this is equivalent to first downsampling by a factor of N (which will result in aliasing if you violate the Nyquist condition) and then zero-padding back to the original rate (which will create mirror-replicates of the baseband spectrum).
It was just an example; the every-Nth-sample pattern was arbitrary. I'll provide another, "irregular" example.
mystran wrote: None of this has anything to do with quantisation, though.
It should, if the truncation is due to the few available bits in quantisation, when the signal has a low amplitude and thus a huge part of its "body" is below the last bit. I'll provide a more concise example.
Give me some time! Meanwhile, thanks for the support, as usual!

Aleksey Vaneev
 KVRAF
 3513 posts since 7 Sep, 2002
Re: Bit Depth vs Frequency Response - Reflection
mystran wrote: The SACD-style 1-bit situation is actually even more complicated than conventional noise-shaping, which is relatively simple. Unfortunately, as FabienTDR pointed out above, you can't properly dither a 1-bit signal, because you need at least 2 quantisation steps of headroom.
Well, I have not tried to implement 1-bit noise-shaped dither myself, but I do not see how it's impossible: you just keep the input signal e.g. at -6 dBFS and thus the noise at a similar level. 16-bit noise-shaped dithering also creates saturation; it's just rare, as usually masters are kept at -0.3 dBFS so that added noise never saturates. The 1-bit case is extreme, but it's in no way more special than e.g. the 16-bit case.

mystran
 KVRAF
 4981 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Bit Depth vs Frequency Response - Reflection
Aleksey Vaneev wrote: Well, I have not tried to implement 1-bit noise-shaped dither myself, but I do not see how it's impossible: you just keep the input signal e.g. at -6 dBFS and thus the noise at a similar level. 16-bit noise-shaped dithering also creates saturation; it's just rare, as usually masters are kept at -0.3 dBFS so that added noise never saturates. The 1-bit case is extreme, but it's in no way more special than e.g. the 16-bit case.
First of all, for effective dither you need TPDF with a magnitude of 2 quantisation steps (eg. [0, 2]) and that won't fit in 1 bit even without any signal. The other "special" thing about the 1-bit case is that you don't just reach saturation, you're constantly stuck in it, because you're always saturated one way or the other. In fact, your filter (assuming some sort of highpass response at least) cannot even decay below a full-scale limit-cycle oscillation!
In practice delta-sigma converters and class-D amplifiers certainly work, there's no question about that... but the point is that the theoretical treatment is a mess and it's not all that straightforward to come up with any hard guarantees about the resulting quality. In comparison, with adequate headroom in a multi-bit system you just add dither, then you have the original signal plus uncorrelated additive noise, and then you can take the error, put an LTI filter into the loop, and you'll end up with the noise shaped to match the filter response.
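For the well-behaved multi-bit case described here, a first-order error-feedback loop is easy to sketch in Python (illustrative only; real dither and shaping filters are more careful than this):

```python
import random

def noise_shaped_quantise(signal, step=2 / 65536):
    """Quantise with TPDF dither and first-order error feedback:
    each sample's quantisation error is subtracted from the next
    input, which high-pass shapes the error spectrum by (1 - z^-1)."""
    out, err = [], 0.0
    for x in signal:
        v = x - err                                      # feed back last error
        d = (random.random() - random.random()) * step   # TPDF dither
        q = round((v + d) / step) * step                 # quantise to the grid
        err = q - v                                      # error (incl. dither)
        out.append(q)
    return out
```

With adequate headroom the loop never saturates, which is exactly the guarantee the 1-bit case loses.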
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

earlevel
 KVRist
 474 posts since 4 Apr, 2010
Re: Bit Depth vs Frequency Response - Reflection
Hi Nowhk,
Nowhk wrote: I've seen recently... "The bit depth has no impact on the frequency response, which is constrained by the sample rate."
Let me make an example to show you my doubt.
...The available samples for sampling that signal within the SNR are limited now (I'm near the noise floor), so most of them will be "truncated" at 0.
...So, is it correct to state that "The bit depth has no impact on the frequency response"? Because if I reach a point where the few samples can't recreate the signal (even if I'm within the available dynamic range), that state becomes weird.
Quantization introduces error in the rendering of a signal—it doesn't change the signal itself, so no change in frequency response.
Say I have a front door with a peephole lens, facing my driveway. A curved lens distorts what I might see, but it's close enough. But let's say I live on a planet without glass, so we use a crystalline structure that can't be ground smooth; if you cleave enough flat facets on it, though, it works out pretty much the same. Through it, I see my friend pull up in a red Corvette (because this planet is still pretty cool).
My peephole breaks one day, slammed the door too hard, I'm between paychecks and replace it with a cheapie for now, with few facets and a lot of optical error. I'm expecting my friend, hear some rumble, look out the peephole and see a red blob. It could be a red Yaris, who knows.
Assuming it's indeed my friend, is it still a Corvette? Yes, my distorted view didn't change that—I'm viewing a red Corvette plus error.
It's all point of view, sure. You could consider a horrifically bad DAC as a signal processor, and comment about how the signal is now different after running through it. But a DAC's purpose is accuracy, so it makes sense to separate its error from the signal. Similarly, you could imagine a DC signal and a dynamically changing bit depth as potentially changing that 0Hz to a different frequency. But that's not what we're talking about when we talk about bit depth, so you'd be gaming the explanation.
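The "signal plus error" decomposition can be written out directly; a small Python sketch with arbitrary rate, tone, and coarse step values:

```python
import math

fs, f, step = 1000, 50, 0.25   # arbitrary rate, tone, and coarse step
x = [math.sin(2 * math.pi * f * n / fs) for n in range(fs)]
q = [round(v / step) * step for v in x]   # crude undithered quantiser
e = [qi - xi for qi, xi in zip(q, x)]     # the "error" part

# The quantised output decomposes exactly into original + error, and
# round-to-nearest bounds the error by half a quantisation step:
print(max(abs(v) for v in e))  # no larger than 0.125
```

The Corvette is still in there; the error is a separate, bounded thing you can reason about on its own.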
My audio DSP blog: earlevel.com

Nowhk
 KVRian
 780 posts since 2 Oct, 2013
Re: Bit Depth vs Frequency Response - Reflection
earlevel wrote: Quantization introduces error in the rendering of a signal—it doesn't change the signal itself, so no change in frequency response.
I'm trying to understand whether maybe we should say "it doesn't change the signal itself in most/real cases", not always. I'm reasoning theoretically.
Here's an attempt at doing sinc interpolation after a bit reducer (i.e. mapping -1.0/1.0 to N bits): -10 dB, 2 Hz signal, sample rate 16 Hz, bit depth 2 bits:
Visually, it seems to me that the input signal (2 Hz) has vanished. But I could be wrong.
So quantization (probably) does mess with the original 2 Hz signal's frequency.
Of course it's an extreme, limit-case example, and it wouldn't happen in any recent/decent/real DAC/situation (as you said: a DAC's purpose is accuracy, so it makes sense to separate its error from the signal).
But that wasn't my question; it was more (as I said) theoretical, not practical.