Oversampling Oscillator: Stuff or Calc?

DSP, Plugin and Host development discussion.

Post

Hi all,
as a kind of learning project I built an osc for MaxMSP, here, but it's pretty noisy, and now I'd like to oversample to improve the quality. This is probably an obvious question, and I've dug around a lot but managed to confuse myself. If I oversample at e.g. 8x, should my approach be:

1) Calculate the output samples at the higher (e.g. 8x) sample rate, then apply a brickwall filter & decimate.
2) Calculate the output samples at the "normal" (1x) sample rate, zero-stuff the other 7 samples, then apply the brickwall & decimate?

Barring any optimisation tricks that it might prevent me from doing, 2) seems likely to be cheaper. But is this somewhere that zero stuffing is not appropriate?
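To pin down what I mean, here's a rough C++ sketch of the two options (osc_tick and brickwall_lowpass are just placeholders for my oscillator and a filter, not real APIs):

    #include <vector>

    float osc_tick(double phaseInc);                 // placeholder: my triangle osc
    void  brickwall_lowpass(std::vector<float>& v);  // placeholder: corner < 22.05k

    // Option 1: run the oscillator at 8x, filter, keep every 8th sample
    std::vector<float> option1(int N, double f0) {
        std::vector<float> hi(N * 8);
        for (int n = 0; n < N * 8; ++n)
            hi[n] = osc_tick(f0 / (8.0 * 44100.0));  // clocked at 352.8 kHz
        brickwall_lowpass(hi);
        std::vector<float> out(N);
        for (int n = 0; n < N; ++n)
            out[n] = hi[n * 8];
        return out;
    }

    // Option 2: run at 1x, zero-stuff to 8x, filter, keep every 8th sample
    std::vector<float> option2(int N, double f0) {
        std::vector<float> hi(N * 8, 0.0f);
        for (int n = 0; n < N; ++n)
            hi[n * 8] = osc_tick(f0 / 44100.0);      // clocked at 44.1 kHz
        brickwall_lowpass(hi);
        std::vector<float> out(N);
        for (int n = 0; n < N; ++n)
            out[n] = hi[n * 8];
        return out;
    }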

Thanks
Dan

Post

2) is equivalent to doing nothing at all. ;)

Post

Not quite true - at the very least it does waste a few processor cycles :)

More seriously, though, are you sure that's right? Having 0'd samples between the original samples does change the spectrum above the original sample freq (or at least "actualises" the shadow spectrum that was already there), doesn't it?
And then filtering with a corner freq of e.g. 20k* at a sample rate of 352800 (8x 44100), with those 0'd samples in place, will result in a different spectrum than if I'd just calculated at 44100 and not done the rest, won't it?

And if it's not true then I don't see how either process would have an effect - if I calculate at 8x 44100 Hz and then select every 8th sample (i.e. decimate without filtering), it will be exactly the same as the original sample set - same as if I zero-stuff. The main difference between the two methods is in what content is added above the original Nyquist for the brief time the wave exists at the higher sample rate, no?

I am pretty clueless here, admittedly, and happy to be corrected!

*(I'll probably, in fact, be filtering with a corner of about 19k and a less-than-completely-brickwall filter, because as far as I can tell a cheaper filter can still cut the significant freqs above Nyquist.)

Post

If you run your oscillator at 1x, upsample, filter, then downsample, you've effectively done nothing, because there was no content above your initial Nyquist to begin with.

IOW, if you run your oscillator at 44.1 kHz, then there won't be anything above 22.05 kHz by definition. If you run it at 4x, i.e. 176.4 kHz, then your oscillator can create harmonics up to 88.2 kHz before they begin to fold over. When an oscillator is not bandlimited (e.g. a "naive" sawtooth or square wave), or is e.g. distorted or heavily compressed, that is when you gain the advantage of oversampling. So,

oscillator at 8x -> lowpass -> processing -> lowpass -> decimate

I would suggest lowpass before and after distortion, and of course before decimation.
Dan wrote: It does change the spectrum above original sample freq to have 0'd samples between the original samples
Yes. Zero-stuffing creates mirror frequencies above your original Nyquist. Processing without filtering those mirror frequencies out first is identical to not zero-stuffing at all. Think of a waveshaper like a cubic clipper, for instance: it does nothing to the zeros, they pass right through, and the only thing affected is the original samples.
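To make that concrete, a minimal sketch (the x - x^3/3 clipper is just an example shape; the key property is f(0) = 0):

    #include <cstddef>
    #include <vector>

    // zero-stuff x8, waveshape, decimate: the stuffed zeros pass through
    // a memoryless shaper unchanged, so the result is the same as just
    // clipping the 1x signal directly
    std::vector<float> clip_zero_stuffed(const std::vector<float>& x) {
        std::vector<float> up(x.size() * 8, 0.0f);
        for (std::size_t n = 0; n < x.size(); ++n)
            up[n * 8] = x[n];                  // zero-stuff

        for (float& v : up)
            v = v - v * v * v / 3.0f;          // cubic clip; f(0) == 0

        std::vector<float> out(x.size());
        for (std::size_t n = 0; n < out.size(); ++n)
            out[n] = up[n * 8];                // decimate: nothing gained
        return out;
    }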

Post

i don't know what happened to the field since dspguide

or maybe i do?

i've seen a few "qualified" persons speak against zero padding. my pov on it -

done it, couple times actually, worked

my pov on why other people dissuade -

it would be better if your product used an insane amount of cpu and was total crap and everyone hated it so i can be a rich person and you can die and be nobody ever ever ever

:shrug: looked and sounded alright to me, and those old guys who wrote that dsp stuff. i'm sure new guys who are all kinds of awesome have come up with some new rules about what happens with numbers and sound. they must have found something out that's really great. someone will be along now to show you how it's bollocks.
you come and go, you come and go. amitabha neither a follower nor a leader be tagore "where roads are made i lose my way" where there is certainty, consideration is absent.

Post

Thanks Saul

What I'm doing is something close to naive triangle - or at least close enough for this discussion.

I'm still not sure I completely understand. My reasoning is:

Case 1) I generate a naive triangle at 44.1k and then upsample with zero stuffing,
Case 2) I generate a naive triangle at 8x 44.1k.

In both cases every 8th sample is identical; the samples in between are what differ - the content above 22k. That's what I'll decimate to.
E.g. if I then used an Lth-band FIR filter with L=8, as I understand it, I'd only be using every 8th sample anyway - so those other 7 samples are not being used, let alone affecting the result...

I had thought the antialiasing came in through filtering at 8x 44.1k with a corner frequency below 22k?

I'm sure you're right and there must be some step I'm misunderstanding. I've tried to leapfrog a bunch of DSP theory here but I'm going to have to explain it to myself properly before I'm happy. I grabbed Lyons' Understanding DSP book cheap online and I'm just going to work through the relevant chapters.

Post

If you generate at 1x and zero-stuff, then you simply get the 1x spectrum with images (i.e. mirrored replicates of the 1x spectrum) at the higher parts of the spectrum, due to the zero-stuffing. If you filter these out with a brickwall, you do have an oversampled signal for sure, but the content in the first 1x part of the spectrum is exactly the same as in the naive approach. In other words: the aliasing happens when you sample at 1x, which is (at least in the mathematical sense) before the zero-stuffing is done, hence the zero-stuffing has no effect on it.
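In symbols (this is just the standard identity for zero-stuffing by a factor $L$):

$$
y[n] = \begin{cases} x[n/L], & n \equiv 0 \pmod L \\ 0, & \text{otherwise} \end{cases}
\quad\Longrightarrow\quad
Y(e^{j\omega}) = \sum_{n} y[n]\,e^{-j\omega n} = \sum_{m} x[m]\,e^{-j\omega L m} = X(e^{j\omega L})
$$

so the whole 1x spectrum, aliasing and all, is simply repeated $L$ times across the wider band; nothing in the baseband changes.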

Doing the above is an entirely sensible thing to do when you need an oversampled signal for your next stage of processing (like a filter or whatever), but it doesn't improve the actual output of the oscillator in any way; you are simply sampling a signal at 1x and then oversampling the result (together with all the aliasing).

Also... for classic analog-style oscillators like saw/pulse, doing 8x (properly!) essentially reduces your aliasing noise floor by log2(8)*6dB = 18dB. In practice, to get a relatively clean-sounding oscillator (which is obviously quite subjective) you generally need something around 60dB or so of attenuation (at typical musical frequencies; for high fundamentals this is not enough), so using brute-force oversampling you'd be looking at approximately 1024x (since pow(2,60/6) = 1024).
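As a formula, that's the same arithmetic: for a spectrum falling at roughly 6 dB/octave (saw/pulse), each doubling of the rate buys about 6 dB of noise floor, so

$$
\Delta \approx 6\,\mathrm{dB}\cdot\log_2 N
\quad\Longrightarrow\quad
N \approx 2^{\Delta/6\,\mathrm{dB}} = 2^{60/6} = 1024 \quad \text{for } \Delta = 60\,\mathrm{dB}.
$$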

TL;DR: oversampling is not a very effective method for reducing aliasing in oscillators, even when done properly.

edit: One might conclude that for a signal like triangle where the spectrum falls at 12dB/octave, one could reduce the number of octaves of oversampling by a factor of two to get the same noise floor. This is true to some extent, but the reduced harmonics produce less auditory masking so one generally needs a better noise floor to keep a similar perceived quality. In practice this means that typically one is looking for roughly similar oversampling ratios in either case.

Post

Cheers,
Yeah, I'm aware that oversampling (properly :) ) x8 alone isn't going to silence the foldover here.
Basically the application is a bank of chaotic, interacting, variable-symmetry triangle oscillators. I'm using PTR and limiting gradients, but still getting a fair bit of aliasing noise - I just want to reduce that a bit further. When I tried 8x oversampling & filtering by running it within a poly~ in MaxMSP, I got a significant improvement in sound quality. I figured I could do that more efficiently by coding it, and learn a bit of DSP in the process.

It's inherently noisy anyway - this isn't virtual analog territory - so there's a point at which it becomes diminishing returns. Really I just want to give the sound a bit more definition.

Post

Dan wrote: Case 1) I generate a naive triangle at 44.1k and then upsample with zero stuffing,
Case 2) I generate a naive triangle at 8x 44.1k.
In simple words, the difference is that in [2)] you filter *before* decimation. And while at the filter's output you use only every 8th sample, the filter itself uses all the samples, thus you decimate a signal with those offensive harmonics (partially) filtered out. For the rest see mystran's answer.
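A sketch of that (generic FIR taps h, assumed designed for the 8x rate; boundary handling kept crude for brevity):

    #include <cstddef>
    #include <vector>

    // filter-then-decimate: only every 8th filter output is kept, but each
    // kept sample is a weighted sum over the full 8x-rate input, so the
    // in-between samples absolutely affect the result
    std::vector<float> filter_decimate8(const std::vector<float>& hi,  // 8x-rate signal
                                        const std::vector<float>& h)   // FIR taps
    {
        std::vector<float> out(hi.size() / 8);
        for (std::size_t n = 0; n < out.size(); ++n) {
            float acc = 0.0f;
            for (std::size_t t = 0; t < h.size(); ++t) {
                long idx = (long)(n * 8) - (long)t;
                if (idx >= 0)
                    acc += h[t] * hi[(std::size_t)idx];  // in-between samples contribute
            }
            out[n] = acc;
        }
        return out;
    }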

Post

Cheers, that does make sense.
I think I'd maybe misunderstood details of Lth band filters (along with everything else).
If nothing else this thread has taught me I need to go back to basics.
Lyons here I come.

Post

It might be helpful to think in terms of information. If you add zeroes, that doesn't result in any additional information (since it's just zeroes), so there's no reason to believe your signal should become "better" (and in fact it doesn't; in the frequency domain you just get mirror copies).

If on the other hand you actually calculate 8 times as many samples, then you indeed have 8 times more information. You can then filter out the information you don't need (e.g. high frequencies that would alias), i.e. make a better selection of which data is important and which is not. :)

Post

Just took a quick skim, but I'll try to add a different perspective...

OK, so you're thinking about the aliasing in the frequency domain, but consider it in the time domain: The reason you can't make a naive square wave, for instance, is that you can't put the edges exactly where you want them (except for periods that are multiples of the sampling period). So the edges jitter around over time. A half-cycle that is a little over 4 sample periods wide will be exactly 4-wide, mostly, then periodically 5-wide. In other words, it will be modulated, and the modulation will be related to the sample rate, and create other frequencies.

How do you cure it? Have a shorter sampling period so that you can put those edges closer to where you need them!

So, obviously you need to create your waveform at the higher rate to have more resolution on where those edges land. Creating them at the lower rate and then sample-rate-converting them will not "correct" the edges—it knows not where you intended them.
My audio DSP blog: earlevel.com

Post

xoxos wrote: i don't know what happened to the field since dspguide

or maybe i do?

i've seen a few "qualified" persons speak against zero padding. my pov on it -

done it, couple times actually, worked
Sure it works, but is there any reason to zero stuff when you can avoid it? E.g. if you just repeat samples you'll end up with a crude lowpass filter, which should improve the overall result substantially (combined with the actual anti-alias filter). As mystran pointed out, zero stuffing gives you no alias reduction whatsoever.

Richard
Synapse Audio Software - www.synapse-audio.com

Post

Richard_Synapse wrote: Sure it works, but is there any reason to zero stuff when you can avoid it? E.g. if you just repeat samples you'll end up with a crude lowpass filter, which should improve the overall result substantially (combined with the actual anti-alias filter). As mystran pointed out, zero stuffing gives you no alias reduction whatsoever.
Zero-stuffing gives you no high-frequency attenuation whatsoever; it literally just mirrors the spectrum. Repeating samples OTOH is equivalent to a box filter, which is a very poor filter in the sense that it has poor attenuation, yet rolls off your high frequencies quite a bit... so you will get a lot better results if you don't bother and just use zero-stuffing and a proper "brick-wall" filter.

Obviously there's no reason to actually use zero-stuffing in the literal sense; instead you want to use polyphase filtering: split the filter into "branches" where each one only processes those taps that would have non-zero input for that "fractional offset", in some sense. So in code it kinda ends up looking as if you were repeating samples (and then filtering each "repeat" with a different filter), but mathematically it's still just good old zero-stuffing.
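A bare-bones sketch of that arrangement at 8x (h is the brick-wall FIR designed at the 8x rate, length 8*K; illustrative, not optimized, and the usual gain compensation for zero-stuffing is omitted):

    #include <cstddef>
    #include <vector>

    // 8x polyphase upsampler: branch p uses taps h[p], h[p+8], h[p+16], ...
    // and reads only the original 1x samples; the zeros never materialize,
    // yet mathematically this is exactly zero-stuff-then-filter
    std::vector<float> upsample8_polyphase(const std::vector<float>& x,
                                           const std::vector<float>& h)
    {
        const int K = (int)(h.size() / 8);           // taps per branch
        std::vector<float> up(x.size() * 8, 0.0f);
        for (std::size_t n = 0; n < x.size(); ++n)
            for (int p = 0; p < 8; ++p) {            // fractional offset p/8
                float acc = 0.0f;
                for (int k = 0; k < K; ++k)
                    if ((long)n - k >= 0)
                        acc += h[p + 8 * k] * x[n - (std::size_t)k];
                up[n * 8 + p] = acc;
            }
        return up;
    }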

Now, in theory you could also repeat samples and then take that into account as you design the filter that follows, such that it boosts the lost high-frequencies back to where they should be... but in most cases this is counter-productive since the poly-phase approach tends to be much more cost-effective in practice.

Post

Richard_Synapse wrote: Sure it works, but is there any reason to zero stuff when you can avoid it? E.g. if you just repeat samples you'll end up with a crude lowpass filter, which should improve the overall result substantially (combined with the actual anti-alias filter). As mystran pointed out, zero stuffing gives you no alias reduction whatsoever.
Repeating the samples will droop the frequency response: -3dB [edit: 2/pi, -3.9dB] at the top of the passband, and -1dB about 0.6 [well, more like 0.55] of the way up. Even if you don't care about that, the resulting lowpass filter is so lame that it's not worth implementing for the aliases, as you're going to need a proper filter anyway. Also, as mystran pointed out, normally you don't really put the zeros in anyway; there's not much sense in doubling samples.
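For reference, the repeat-each-sample-L-times response is the usual zero-order-hold (length-L box) shape:

$$
|H(f)| = \frac{1}{L}\left|\frac{\sin(\pi L f/f_s')}{\sin(\pi f/f_s')}\right|,
\qquad
|H(f_s/2)| = \frac{1}{L\sin\!\big(\tfrac{\pi}{2L}\big)} \longrightarrow \frac{2}{\pi} \approx -3.9\ \mathrm{dB}
\quad (L \to \infty),
$$

where $f_s' = L f_s$ is the oversampled rate; that's where the 2/pi figure above comes from.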

Zero padding doesn't actually change anything except the sample rate. The aliased images are always there in digital audio, and by increasing the sample rate you raised the bar so that they now sit in the passband below half the sample rate. That's why you need to get rid of the aliases now revealed by the widened passband.
My audio DSP blog: earlevel.com
