Oversampling LPF question

DSP, Plugin and Host development discussion.

Post

I have a conceptual question for the SRC experts here.

Upsampling by a high amount, say 64x, typically requires inserting 63 zero samples after every source sample and then running that through a large, steep LPF (usually FIR). That LPF processing basically interpolates the values for the 63 new sample points.

Now, given that you only have one actual sample point every 64 samples followed by 63 zeros, why wouldn't you get some droop (ripple) between those samples in the resultant LPF-filtered signal?

So, in other words, is ripple a side effect in oversampling?

(I understand 64x is an extreme example and is not typically done in one step - but the question is if ripple modulation is a consideration/factor in upsampling)
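
For concreteness, here's roughly the setup I'm asking about, in numpy/scipy terms (the factor, filter length and cutoff are arbitrary, just to illustrate):

Code:

# Naive zero-stuffed upsampling, exactly as described: stuff zeros, then LPF.
import numpy as np
from scipy import signal

L = 8                                # oversampling factor (64 in the question; 8 keeps it quick)
fs = 48000
t = np.arange(512) / fs
x = np.sin(2 * np.pi * 1000 * t)     # 1 kHz test tone

up = np.zeros(len(x) * L)            # insert L-1 zeros after every source sample
up[::L] = x

# Steep linear-phase FIR at the original Nyquist; the gain of L makes up for
# the energy spread over L output slots by the zero stuffing.
h = L * signal.firwin(511, 1.0 / L)  # cutoff as a fraction of the new Nyquist
y = signal.lfilter(h, 1.0, up)       # y is the upsampled result; the question
                                     # is whether it droops between the
                                     # original sample points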

Post

Typically the kernel is large enough to cover multiple non-zero sample points, so you don't get excessive signal dropouts. Of course the filter will always introduce some ripple, but not the excessive droop over the zero samples that I think you mean.

In the case of 64x vs., say, 2x, and assuming we want the same quality in both cases, we need to define the design parameters. Say, just as an example: a transition band from 16 kHz to 22.05 kHz, stopband attenuation of -100 dB extending to Nyquist, and passband ripple of 0.1 dB. At 64x the ratio of the transition band to Nyquist is 32 times smaller than at 2x, which is a steeper performance criterion. This means that you will most likely need a much larger kernel at 64x than for 2x oversampling.
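
As a ballpark, the Kaiser-window estimate puts numbers on exactly that (the precise tap counts depend on the design method, this is only an estimate):

Code:

# Rough kernel-size comparison for the spec above (Kaiser-window estimate).
from scipy.signal import kaiserord

f_pass, f_stop, base_fs = 16000.0, 22050.0, 44100.0
atten_db = 100.0                          # stopband attenuation from the spec above

for ratio in (2, 64):
    fs = base_fs * ratio
    width = (f_stop - f_pass) / (fs / 2)  # transition width as a fraction of Nyquist
    numtaps, beta = kaiserord(atten_db, width)
    print(f"{ratio:>2}x: ~{numtaps} taps")

# Prints roughly 95 taps at 2x and about 3000 taps at 64x: the same transition
# band is ~32 times narrower relative to Nyquist, so the kernel grows ~32 times.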

Post

Fender19 wrote: Sat Feb 15, 2020 6:58 pm Upsampling by a high amount, say 64x, typically requires inserting 63 zero samples after every source sample and then running that through a large, steep LPF (usually FIR). That LPF processing basically interpolates the values for the 63 new sample points.

Now, given that you only have one actual sample point every 64 samples followed by 63 zeros, why wouldn't you get some droop (ripple) between those samples in the resultant LPF-filtered signal?
What the zeroes do is simply replicate the spectrum as mirror copies at the higher frequencies, which the low-pass filter is then supposed to filter out. The reason we stuff with zeroes is because this keeps the "base-band" as-is, so any "ripple" comes purely from the actual low-pass filter (which in practice always has slightly less-than-ideal response in comparison to a true sinc).

That said, while this is how it works conceptually you shouldn't actually fill in the zeroes. Rather you should take the original FIR, then perform a "poly-phase decomposition" to get "64 branches" with different fractional delays. Then you filter the original signal with each one of these separately and interleave the outputs. This is mathematically equivalent, it computes the exact same thing, but avoids all the actual multiply-adds with the zeroes that would have been inserted in the naive approach (ie. in theory it runs about 64 times faster).
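
A small numpy sketch of that polyphase idea (my own variable names and an arbitrary prototype filter, purely for illustration):

Code:

# Polyphase upsampling: L short branches of the prototype FIR, run at the
# original rate and interleaved, instead of one long FIR running mostly on zeros.
import numpy as np
from scipy import signal

L = 64
h = L * signal.firwin(64 * 16 + 1, 1.0 / L)  # prototype LPF at the original Nyquist
h = np.append(h, np.zeros(-len(h) % L))      # pad so it splits evenly into L branches
branches = h.reshape(-1, L).T                # branch k holds taps h[k], h[k+L], h[k+2L], ...

x = np.random.randn(1024)                    # any input signal

y = np.empty(len(x) * L)
for k in range(L):
    # Filter the original-rate signal with branch k, then interleave the outputs.
    y[k::L] = signal.lfilter(branches[k], 1.0, x)

# y is sample-for-sample identical to zero-stuffing x and filtering with h,
# but skips all the multiply-adds against the inserted zeros (~L times cheaper).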

Post

mystran wrote: Sun Feb 16, 2020 1:06 amThat said, while this is how it works conceptually you shouldn't actually fill in the zeroes.
A matter of POV. I still think that this whole business of filling in zeros is an unnecessary complication in the reasoning; a much more intuitive view is to restore the continuous-time signal and then sample it back (since we're thinking in conceptual terms, we should be able to do that "conceptually", while practically this is obviously impossible staying in the digital domain :) )

To answer the original question: the key point IMHO is how we define the "ripples", which in turn depends on how we interpret the discrete-time signal. A discrete-time signal is a sequence of points, and what are "ripples in a sequence of points"? Do we intuitively connect the points with a smooth line and see if this line has ripples?

In that regard, upsampling (assuming it is done with sufficient quality) simply puts more discrete points onto a continuous line which is _already_ implied by the discrete-time signal. It doesn't modify the signal per se. Which means it might make some of the ripples of the implied signal visible, but it doesn't create any new ones. An example of this would be upsampling a discrete-time impulse or a step. These signals inherently contain ripples, but they are just not visible at the original sampling rate; the resolution is simply too low.

The ripples caused by a sharp-cutoff LPF (the so-called Gibbs phenomenon) are created in the ADC's LPF, which bandlimits and samples a continuous-time signal. In the upsampling process there is no extra bandlimiting (the bandlimiting that happens with the zero-stuffing approach only removes the new frequency components that we introduce into the signal by stuffing the zeros; it doesn't modify the "original" part of the signal, since it's LPF-ing at the same cutoff the original ADC already used). Thus the upsampling itself doesn't create any new ripples.
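
A quick way to see this with scipy (resample_poly is just one convenient upsampler; the ringing it exposes is the Gibbs ripple already implied by the bandlimited step):

Code:

# Upsampling a discrete-time step: the ringing was already implied by the
# original samples; the higher rate just draws more points on that curve.
import numpy as np
from scipy import signal

x = np.concatenate([np.zeros(32), np.ones(32)])  # discrete-time step
y = signal.resample_poly(x, up=16, down=1)       # 16x oversampled view

print(y.min(), y.max())  # overshoot above 1 and undershoot below 0: Gibbs ripples
# At the original rate the samples happen to sit where the implied curve passes
# through exactly 0 or 1, so the ripples are invisible until you look between them.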

Post

Z1202 wrote: Sun Feb 16, 2020 9:54 am
mystran wrote: Sun Feb 16, 2020 1:06 amThat said, while this is how it works conceptually you shouldn't actually fill in the zeroes.
A matter of POV. I still think that this whole business of filling in zeros is an unnecessary complication in the reasoning; a much more intuitive view is to restore the continuous-time signal and then sample it back (since we're thinking in conceptual terms, we should be able to do that "conceptually", while practically this is obviously impossible staying in the digital domain :) )
Whatever. The point I was trying to make is that actually filtering zeroes is a ridiculous waste of CPU.

Post

Fender19 wrote: Sat Feb 15, 2020 6:58 pm I have a conceptual question for the SRC experts here.

Upsampling by a high amount, say 64x, typically requires inserting 63 zero samples after every source sample and then running that through a large, steep LPF (usually FIR). That LPF processing basically interpolates the values for the 63 new sample points.

Now, given that you only have one actual sample point every 64 samples followed by 63 zeros, why wouldn't you get some droop (ripple) between those samples in the resultant LPF-filtered signal?

So, in other words, is ripple a side effect in oversampling?

(I understand 64x is an extreme example and is not typically done in one step - but the question is if ripple modulation is a consideration/factor in upsampling)
More details in my series here, but I'll try to give you the short version.

I don't want to start a fight, but samples represent impulses, instantaneous values of the original (suitably bandlimited) signal. And it's zero in between. This flips some people out for some reason, but rest assured there is such a thing as analog sampling (you know, as in analog delay lines), and rest assured, analog sampling ideally puts out impulses with a zero voltage level in between. Something like this (just look at the left one, the other shows using a sample and hold to measure, producing the same thing):

[Image: impulse (PAM) sampling on the left; a sample-and-hold measurement of the same signal on the right]

Digital, we store only the sample values, but again, if this were analog sampling there would be the zero-level baseline between them as shown. So, if we insert a zero sample between each, what has changed? Nothing, except the sample rate. We didn't add any more information. For that reason, the spectrum doesn't change, just the sample rate. Instead of

Code:

7   8   9      [48 kHz]
7 0 8 0 9 0... [96 kHz]
Note I've drawn that so the original samples line up, since the second, zero-padded one is emitting samples twice as fast. Here's an example of the original spectrum. The impulse modulation causes the mirrored spectral images, which we get rid of on playback by using the DAC's anti-imaging filter.

[Image: baseband audio spectrum with mirrored images around Fs, 2Fs, 3Fs]

You can see why our audio band is limited to half the sample rate (the sample rate is indicated as Fs). I said that inserting the zeros changes nothing but the sample rate. So, look at that pic and imagine that Fs moves to where 2Fs is (mentally erase the current Fs and its line, as well as 3Fs). That is, the sample rate doubled, but the spectrum stayed where it is. That's why we need to lowpass filter after inserting zeros, set to what is now a quarter of the new sample rate, to erase the images (red lines) that are now exposed in our new, wider audio band.
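
A tiny numerical check of the "nothing changes but the sample rate" point, at 2x to keep it readable:

Code:

# Zero-stuffing leaves the spectrum where it was; only the sample-rate label moves.
import numpy as np

x = np.random.randn(256)
up = np.zeros(len(x) * 2)
up[::2] = x                     # 7 0 8 0 9 0 ... as in the listing above

X = np.fft.rfft(x)
U = np.fft.rfft(up)

# The lower half of U (the old 0..Fs/2 band) equals X; the upper half is the
# mirror image that the post-insertion lowpass filter has to remove.
print(np.allclose(U[:len(X)], X))                   # True
print(np.allclose(U[len(X) - 1:], X[::-1].conj()))  # True (the mirrored image)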

I'm just shooting for enough info here to give you something to think about and look into. It's late ;)
My audio DSP blog: earlevel.com

Post

earlevel wrote: Mon Feb 17, 2020 9:29 amI don't want to start a fight, but samples represent impulses, instantaneous values of the original (suitably bandlimited) signal. And it's zero in between. This flips some people out for some reason, but rest assured there is such a thing as analog sampling (you know, as in analog delay lines), and rest assured, analog sampling ideally puts out impulses with a zero voltage level in between.
I think this impulse thing is just an auxiliary means to aid the application of Fourier theory to discretized signals and doesn't have that much usefulness beyond that (that's one of the reasons why I'm so much against this whole zero-filling stuff). There is nothing wrong with this, as long as we're trying to find a way to connect continuous-time Fourier analysis to discrete-time signals, but beyond that I personally see little reason for this way of thinking. Discrete-time signals and continuous-time signals exist in completely different media (the former actually exist only conceptually in our imagination; in the physical world they are represented by some completely different signals) and cannot be compared.

Analog delays, to the best of my knowledge (although I'm not really an expert), do S&H (ZOH) rather than PCM.

Post

Z1202 wrote: Mon Feb 17, 2020 9:55 am
earlevel wrote: Mon Feb 17, 2020 9:29 amI don't want to start a fight, but samples represent impulses, instantaneous values of the original (suitably bandlimited) signal. And it's zero in between. This flips some people out for some reason, but rest assured there is such a thing as analog sampling (you know, as in analog delay lines), and rest assured, analog sampling ideally puts out impulses with a zero voltage level in between.
I think this impulse thing is just an auxiliary means to aid the application of Fourier theory to discretized signals and doesn't have that much usefulness beyond that (that's one of the reasons why I'm so much against this whole zero-filling stuff). There is nothing wrong with this, as long as we're trying to find a way to connect continuous-time Fourier analysis to discrete-time signals, but beyond that I personally see little reason for this way of thinking. Discrete-time signals and continuous-time signals exist in completely different media (the former actually exist only conceptually in our imagination; in the physical world they are represented by some completely different signals) and cannot be compared.

Analog delays, to the best of my knowledge (although I'm not really an expert), do S&H (ZOH) rather than PCM.
Analog doing ZoH is just an implementation detail. It’s easier/cheaper to do that and fix the error. If you accept that the spectrum repeats (I think you did, but on my phone now), that corresponds to impulses (in analog, PAM, in digital, PCM).

I understand your reason for wanting to look at it the way you prefer. Samples do represent a pulse train, and I disagree that it’s less useful to look at it as such. It actually makes the Fourier connection pretty trivial. Signal AM by a unit train, which is itself the summation of cosine series of nFs. You only need understand AM for everything to be obvious.
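
A quick numerical check of that cosine-series identity (discrete version, with a period of 8 samples standing in for the sample period):

Code:

# A unit impulse train equals a scaled sum of cosines at multiples of its rate.
import numpy as np

L = 8                                # impulse period in samples
n = np.arange(64)
train = (n % L == 0).astype(float)   # 1 0 0 0 0 0 0 0 1 0 ...

# DC plus cosines at k/L for k = 1..L/2 (the top one is the "Nyquist" cosine).
cosines = (1
           + 2 * sum(np.cos(2 * np.pi * k * n / L) for k in range(1, L // 2))
           + np.cos(np.pi * n)) / L

print(np.allclose(cosines, train))   # True
# Multiplying a signal by this train is AM against each of those cosines, which
# is exactly where the spectral images around multiples of Fs come from.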

Please explain it how you like. I'm just disagreeing with you that there are shortcomings in explaining it the way it is. Discrete analog existed before discrete digital.
My audio DSP blog: earlevel.com

Post

earlevel wrote: Mon Feb 17, 2020 10:21 amAnalog doing ZoH is just an implementation detail. It’s easier/cheaper to do that and fix the error. If you accept that the spectrum repeats (I think you did, but on my phone now), that corresponds to impulses (in analog, PAM, in digital, PCM).
Would you share a bit more detail on why that is an implementation detail and not vice versa: that PCM coding is just "an implementation detail" of Fourier analysis of discrete signals? Also I'm not sure what you mean by "fix the error". The "error" introduced by PCM coding is way larger than that of ZOH, so it is the former which has more need "to be fixed".
earlevel wrote: Mon Feb 17, 2020 10:21 amI understand your reason for wanting to look at it the way you prefer. Samples do represent a pulse train, and I disagree that it’s less useful to look at it as such. It actually makes the Fourier connection pretty trivial. Signal AM by a unit train, which is itself the summation of cosine series of nFs. You only need understand AM for everything to be obvious.
I think I made it clear that I totally support the use of PCM for Fourier analysis. My point was that it's not useful beyond that and you didn't bring any examples of that either.

Post

earlevel wrote: Mon Feb 17, 2020 9:29 am I don't want to start a fight, but samples represent impulses, instantaneous values of the original (suitably bandlimited) signal. And it's zero in between. This flips some people out for some reason, but rest assured there is such a thing as analog sampling (you know, as in analog delay lines), and rest assured, analog sampling ideally puts out impulses with a zero voltage level in between.
I would like to argue that the viewpoint where the samples are simply values of a band-limited function (uniquely determined by said samples) at particular time instants is actually more general than the "impulses with zeroes in between", because the former readily generalizes to non-uniform sampling whereas the latter really doesn't.

Post

Z1202 wrote: Mon Feb 17, 2020 11:07 am
earlevel wrote: Mon Feb 17, 2020 10:21 amAnalog doing ZoH is just an implementation detail.
Would you share a bit more detail on why that is an implementation detail and not vice versa: that PCM coding is just "an implementation detail" of Fourier analysis of discrete signals?
I didn't say "implementation detail of Fourier analysis of discrete signals". It's a practical implementation detail of sampling. In chemistry, we might have something we want to accomplish, like replacing a carbon bond with a chlorine bond in a certain molecule. If someone stated that, and another person argued, "no that's wrong, because we really need to...combine...temperature...blah blah", they would be arguing implementation details; the end result is the same as doing the replacement. For ADC, a flash converter with comparators is an implementation detail, as is successive approximation. S&H is an implementation detail. There is nothing in sampling theory that says we need S&H; it just facilitates the job, lets us be more accurate, cheaper.
Also I'm not sure what you mean by "fix the error". The "error" introduced by PCM coding is way larger than that of ZOH, so it is the former which has more need "to be fixed".
No. With PCM you just need a lowpass filter with a flat passband. With ZOH you need a lowpass filter and you need to fix the (sinc-shaped) droop (or run at a high enough oversampling rate that the droop is acceptable).

Here's what it looks like (by "real DAC", the title means one with ZOH; I'd prefer "practical" or "typical" because you could make one without ZOH...funny, I just looked up wikipedia's definition of ZOH before hitting send, and in the very first sentence it describes ZOH as "a mathematical model of the practical signal reconstruction done by [DACs]"); at the Nyquist frequency (half the sample rate), the response is down -3.92 dB:

[Image: frequency response of an ideal DAC vs. a "real" (ZOH) DAC, showing the sinc-shaped droop]
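
The droop is easy to check directly, since the ZOH magnitude response is |sinc(f/Fs)|:

Code:

# ZOH (sample-and-hold) frequency response droop at a couple of spot frequencies.
import numpy as np

def zoh_droop_db(f_over_fs):
    # np.sinc(x) = sin(pi*x) / (pi*x), which is the ZOH magnitude response
    return 20 * np.log10(np.abs(np.sinc(f_over_fs)))

print(zoh_droop_db(0.25))  # a quarter of the sample rate: about -0.91 dB
print(zoh_droop_db(0.5))   # Nyquist (half the sample rate): about -3.92 dB
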
earlevel wrote: Mon Feb 17, 2020 10:21 amI understand your reason for wanting to look at it the way you prefer. Samples do represent a pulse train, and I disagree that it’s less useful to look at it as such. It actually makes the Fourier connection pretty trivial. Signal AM by a unit train, which is itself the summation of cosine series of nFs. You only need understand AM for everything to be obvious.
I think I made it clear that I totally support the use of PCM for Fourier analysis.

My point was that it's not useful beyond that...
Even if it isn't, that was precisely what I wanted to show.
...and you didn't bring any examples of that either.
How late should I have stayed up showing that properly to fender on kvr? I linked to my series of articles explaining it in sufficient detail. Or are you asking me to show something to you? Sorry, I'm just unsure whether you're complaining about my lack of rigor or thoroughness on this board, or whether you just didn't like the explanation. I figure if someone has a question about a particular thing, they'll ask for more detail.

BTW, when I said, "I don't want to start a fight, but samples represent impulses", I meant that I expected someone would argue vehemently that samples were sinc functions. :lol:
My audio DSP blog: earlevel.com

Post

earlevel wrote: Mon Feb 17, 2020 9:29 am More details in my series here, but I'll try to give you the short version.
Thank you for the excellent explanation and link.

My misguided thoughts in my OP stem from my work with dynamics processors. I tend to think of LPFs like envelope detectors that quickly ramp up at an impulse and then decay until the next impulse.

But that is not what is happening in an oversampling LPF - or MOST filters for that matter. The "attack and decay" times of filters are usually symmetric. The output does not jump up at a sample then decay with the zeros like a compressor side chain would - it climbs gradually and decays gradually.

Now, that said, there IS some ripple in an anti-aliasing LPF and that ripple is what remains after the attenuation of the filter. In fact, if you use a filter with too high of a cutoff frequency or poor attenuation that "clocking" ripple will be clearly visible on a scope (and even audible as digital whine if the sample rate is low).

The next general misunderstanding - perpetuated by many textbooks - is the illustration of digital sampling as a series of stair steps. That comes from the fact that the output from a PCM DAC or analog BBD looks like that on an oscilloscope before it goes into the LPF. But the fact is that each of those "steps" represents a POINT in time, not a PERIOD of time - and that is where many (self-included) get confused. Illustrating samples as lollypop points, as you have done, makes it much more clear.

The final area of confusion for many is the "Nyquist" theorem - i.e., "how can only 2 samples fully represent a wave?". It took me a long time to understand this but then one day the lightbulb came on. The Nyquist theorem has a critical requirement - a perfect clock. It is the TIMING of those two points that is critical to re-creating the original waveform. If you "jitter" those points a little to the right or left the re-constructed waveform becomes drastically erratic. That is why A/D/A systems with poor clocks sound harsh - the high frequency audio material can be wildly mis-represented. (And since it is not physically possible to make a perfect clock the audio is always band limited somewhat below Nyquist).
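
A rough sketch of that jitter effect (idealized: a random timing error applied only at the sample instants, no reconstruction modeled, numbers picked arbitrarily):

Code:

# Sampling a sine with clock jitter: the error grows with the signal's slope,
# so content near Nyquist suffers the most.
import numpy as np

rng = np.random.default_rng(0)
fs, f0 = 48000.0, 18000.0
n = np.arange(4096)
jitter = rng.normal(0.0, 2e-9, n.size)             # 2 ns RMS timing error

clean    = np.sin(2 * np.pi * f0 * n / fs)
jittered = np.sin(2 * np.pi * f0 * (n / fs + jitter))

err_db = 20 * np.log10(np.std(jittered - clean) / np.std(clean))
print(err_db)  # about -73 dB here (~20*log10(2*pi*f0*sigma_t)); the same jitter
               # on a 1 kHz tone would sit roughly 25 dB lower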

Anyhow, I believe I have a better understanding now. Thank you for the discussion and help. Great website BTW. I have it bookmarked. :tu:

Post

Fender19 wrote: Mon Feb 17, 2020 7:14 pmThe next general misunderstanding - perpetuated by many textbooks - is the illustration of digital sampling as a series of stair steps. That comes from the fact that the output from a PCM DAC or analog BBD looks like that on an oscilloscope before it goes into the LPF. But the fact is that each of those "steps" represents a POINT in time, not a PERIOD of time - and that is where many (self-included) get confused. Illustrating samples as lollypop points, as you have done, makes it much more clear.
Steps are generally misleading. Lollypops are okay, if you want to think in terms of abstract repeating spectra. For most intuitive reasoning, the continuous-time signal implied by the samples is probably the best, as this is what you're gonna hear at the DAC output (rather than any kind of lollypop sequence, which actually has infinite power and cannot exist in reality). E.g. in any wave editor with a decent interpolated display you can compare your original signal with the upsampled signal and see that there are no new ripples (whereas you may be surprised to see ripples in your original signal already, if you're used to steps). With lollypops you'll maybe see the ripples only in the upsampled signal, but that doesn't mean they were not there before; this is one of the misleading aspects of lollypops.

I guess the reason most theoretical books deal more or less exclusively with the lollypop idea is that they are... theoretical (rather than intuitively practical), plus they might be focusing on general DSP rather than music DSP, or simply follow the common practice from general DSP. Steps are more intuitive but very misleading; frankly speaking, I have yet to see a theoretical DSP book which actually advocates the idea of the steps, though. The proper continuous-time signal is intuitive and not misleading. YMMV

Post

Fender19 wrote: Mon Feb 17, 2020 7:14 pmThe final area of confusion for many is the "Nyquist" theorem - i.e., "how can only 2 samples fully represent a wave?".
A minor point, perhaps, but only frequencies below Nyquist can be fully described. The phase of a component exactly at Nyquist will affect the sampled magnitude; in the worst case a component at the Nyquist frequency can be invisible to the sampling process.
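
A three-line illustration of that worst case:

Code:

# A component exactly at Nyquist: its sampled amplitude depends entirely on phase.
import numpy as np

n = np.arange(16)
for phase in (0.0, np.pi / 4, np.pi / 2):
    x = np.cos(np.pi * n + phase)       # frequency = Fs/2
    print(round(np.max(np.abs(x)), 3))  # 1.0, then 0.707, then ~0.0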

Post

Fender19 wrote: Mon Feb 17, 2020 7:14 pm The next general misunderstanding - perpetuated by many textbooks - is the illustration of digital sampling as a series of stair steps. That comes from the fact that the output from a PCM DAC or analog BBD looks like that on an oscilloscope before it goes into the LPF. But the fact is that each of those "steps" represents a POINT in time, not a PERIOD of time - and that is where many (self-included) get confused. Illustrating samples as lollypop points, as you have done, makes it much more clear.
Yes, this confuses people, especially because they might see it in depictions of both the ADC and the DAC process. For the ADC, though, an S&H step is functionally equivalent to instantaneous sampling, because it's only a measurement convenience (we can measure more accurately given time to settle, like weighing yourself on a bathroom scale). We only keep the held moment, so the only price is a little latency. But for the DAC it's a different thing: we do it knowing it's wrong, but it has advantages and is easy to adjust for.

BTW, since you have a scope, you can do this experiment if you have signal generators or want to generate from your computer. (I just got a scope last week! I haven't done serious electronics in years, but I've been rehabbing old gear lately. I could only afford the most basic scope for home when I was young, and had the good stuff at work. Eventually I only needed to look at audio, easy enough to just sample and use the computer. But there's so much bang for the buck in digital scopes these days.)

Some people strongly resist (read: tell me I don't know what I'm talking about) the idea of sampled audio being a modulation with a pulse train. So I was considering making some circuits to demonstrate it in the analog domain, where there could be no mysterious digital process (in their minds) involved. Then I realized I already had everything I need in my old Aries modular synth, despite decades of neglect.

OK, I happened to have a balanced modulator module in my Aries system to do the AM, so I guess that's the hard part for most people. But I suppose you could at least get the visual idea by using two channels of the scope and multiplying.

Two oscillators: a sine as the signal, since it's easy to spot (you can also mix a couple of sines, and maybe vary the amplitude or frequency during the test), and a narrow pulse oscillator as the modulator, running at more than twice the highest frequency you'll have in the signal, but not too high in the audio spectrum, so you can see more if you want to look at it in a DAW. Maybe 1 kHz for the modulator. Multiply (balanced modulator).

You'll get a spiky outline of the signal, so it'll look best if the signal frequency is about a tenth of the modulator frequency (the analog "sample rate"). If your scope has a good FFT, you can play with it there, but otherwise sample it into your DAW and view it in a spectrum analyzer.

Listening to the modulated signal will sound horrible, but as soon as you run it into a good filter (might take multiple synth filters), there's your signal in its original glory.
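
For anyone who'd rather try it in software first, here's a rough digital stand-in for the same experiment (100 Hz sine and 1 kHz pulse train as above; everything else is an arbitrary choice):

Code:

# Simulate the experiment: 100 Hz sine "signal", 1 kHz narrow pulse train as the
# "sample clock", multiply them, then inspect the spectrum and filter it back.
import numpy as np
from scipy import signal

fs = 96000                              # simulation rate, well above everything of interest
t = np.arange(fs) / fs                  # one second
sine = np.sin(2 * np.pi * 100 * t)

pulses = np.zeros_like(t)
pulses[::fs // 1000] = 1.0              # 1 kHz train of narrow (1-sample) pulses

am = sine * pulses                      # the balanced-modulator output

# Plot Pxx vs f: the 100 Hz tone plus images mirrored around 1 kHz, 2 kHz, 3 kHz...
f, Pxx = signal.welch(am, fs, nperseg=8192)

# Lowpass well below 500 Hz (half the 1 kHz "sample rate") recovers the sine,
# scaled down by the pulse train's duty cycle.
sos = signal.butter(8, 400, fs=fs, output='sos')
recovered = signal.sosfilt(sos, am)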

Here's an example on the scope, with the sine signal VCO, the pulse train VCO, and the balanced modulator's output.
AM3d.png
Here's a quickie analysis. I'll show just the modulator output, but you can infer the look of the two inputs from this. There are a couple of strong-ish harmonics on the 100 Hz sine output (maybe just a little overdriven; I calibrated the sine the best I could, and maybe another one of my oscillators would have been purer, but I kept this one as it gave me some harmonics to look at). I set the pulse as thin as it would reliably trigger every time, but of course it's not a perfect impulse, and you can see the amplitude dropping for higher harmonics (hard to say how much of that is the spectrum analyzer; I get different peak falloff results depending on which one I use).

So the signal has harmonics at 100, 200, 300 Hz, and the sample rate is 1 kHz. You can see the mirrored signal images around each kHz, and if you change the signal you can see that in the images. If you run the modulated signal through a filter plugin (if nothing else, just use an EQ plugin with a couple of 24 dB/oct sections set below half the 1 kHz "sample rate"), you get the original signal back. For AM, a linear frequency axis would be better, but I don't think I have such a plugin handy. With linear, it would look more like how the sampled spectrum is shown in DSP books. (Oh yeah, I have a little sine offset, which causes some of the modulator to show up at 1k, 2k, etc., and probably part of that is the balanced modulator not being perfect. But close enough to get the idea.)
AM3spectrum.png
Fender19 wrote: Mon Feb 17, 2020 7:14 pm Anyhow, I believe I have a better understanding now. Thank you for the discussion and help. Great website BTW. I have it bookmarked. :tu:
Thanks! I'm working on a new video to explain/demo sampling theory...the first part is almost done, hope to get that part out soon.
My audio DSP blog: earlevel.com

