Wavetable Morphing and Aliasing


Post

I'm working on a wave morphing synthesizer. If the wavetable "function" is f(x), then the morphed table is f(g(x)), with some morphing function g(x) which stretches or squeezes the table. Apart from the usual wavetable considerations, like bandlimiting / mipmapping, this of course poses more problems regarding aliasing, since the original waveform is played back faster in places where g'(x) > 1.
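
For reference, a minimal sketch of the lookup I'm describing (names are made up): 'table' holds one cycle of f, 'morph' is g mapping phase in [0,1) to phase in [0,1). Plain linear interpolation, no bandlimiting yet.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Evaluate f(g(x)) from a single-cycle table with linear interpolation.
float morphedLookup(const std::vector<float>& table, float x, float (*morph)(float))
{
    float g = morph(x);
    g -= std::floor(g);                                    // wrap g(x) into [0, 1)
    const float pos = g * static_cast<float>(table.size());
    const std::size_t i0 = static_cast<std::size_t>(pos) % table.size();
    const std::size_t i1 = (i0 + 1) % table.size();
    const float frac = pos - std::floor(pos);
    return table[i0] + frac * (table[i1] - table[i0]);
}
```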

I've been trying to find a feasible solution to this problem and have two ideas in mind. I'm looking for feedback, or for ideas on how else it could be achieved.

Idea 1: Pre-morph the tables
Essentially, every time a new wavetable or morphing function is selected, create a set of morphed tables. That is, use morphing amounts 0.0, 0.1, ..., 1.0, create bandlimited wavetables from these, and interpolate between them at runtime.
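
A rough sketch of what I mean (names are made up; 'bandlimit' stands in for whatever mipmapping/bandlimiting step is already in place):

```cpp
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Pre-render bandlimited tables at fixed morph amounts 0.0, 0.1, ..., 1.0.
std::vector<std::vector<float>> preMorphTables(
    const std::vector<float>& source,                      // one cycle of f
    const std::function<float(float, float)>& morph,       // g(x, amount)
    const std::function<std::vector<float>(const std::vector<float>&)>& bandlimit,
    int steps = 11)
{
    const std::size_t n = source.size();
    std::vector<std::vector<float>> tables;
    tables.reserve(static_cast<std::size_t>(steps));

    for (int s = 0; s < steps; ++s)
    {
        const float amount = static_cast<float>(s) / static_cast<float>(steps - 1);
        std::vector<float> morphed(n);
        for (std::size_t i = 0; i < n; ++i)
        {
            const float x = static_cast<float>(i) / static_cast<float>(n);
            float g = morph(x, amount);
            g -= std::floor(g);                             // wrap into [0, 1)
            const float pos = g * static_cast<float>(n);
            const std::size_t i0 = static_cast<std::size_t>(pos) % n;
            const std::size_t i1 = (i0 + 1) % n;
            const float frac = pos - std::floor(pos);
            morphed[i] = source[i0] + frac * (source[i1] - source[i0]);
        }
        tables.push_back(bandlimit(morphed));               // re-bandlimit each morphed cycle
    }
    return tables;
}
```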

One problem here is that this is a lot of wavetables: even 20 sampled steps of the morph amount result in about 60 MB of memory usage per oscillator. The bigger problem, however, seems to be phasing issues between two interpolated morph amounts, resulting in wobbling amplitudes in the waveform.

Idea 2: Use oversampling
This is the much more obvious idea, of course: just morph the wavetable at runtime and oversample to suppress the aliasing. I am wondering if this is feasible: how much oversampling would be required for clean output? Could this be a performance killer?
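
For reference, the back end of this idea is just "render at 2x, lowpass, drop every other sample". A rough sketch (the filter coefficients below are placeholders, not a tuned design):

```cpp
#include <array>
#include <cstddef>
#include <vector>

// Lowpass and decimate a 2x-oversampled block by 2.
std::vector<float> decimateBy2(const std::vector<float>& oversampled)
{
    // Short symmetric lowpass kernel (placeholder values only).
    static const std::array<float, 7> kernel = {
        -0.03f, 0.0f, 0.28f, 0.5f, 0.28f, 0.0f, -0.03f
    };
    std::vector<float> out;
    out.reserve(oversampled.size() / 2);
    for (std::size_t n = 0; n + kernel.size() <= oversampled.size(); n += 2)
    {
        float acc = 0.0f;
        for (std::size_t k = 0; k < kernel.size(); ++k)
            acc += kernel[k] * oversampled[n + k];          // convolve
        out.push_back(acc);                                 // keep every 2nd output
    }
    return out;
}
```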

Another way of improving on this idea would be to check the derivative of g(x). If g'(x) = 2, for example, I would expect the waveform to be played back at double the speed in that place, so I could use a more strongly bandlimited table to begin with.
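
A sketch of how I imagine that selection (assuming one mipmap per octave, i.e. bandwidth halves per level, which may not match every layout):

```cpp
#include <algorithm>
#include <cmath>

// Locally, f(g(x)) behaves like f played back |g'(x)| times faster, so choose
// the mipmap as if the oscillator frequency were scaled by that factor.
int mipmapForMorph(float baseFreqHz, float gPrime, float sampleRate,
                   int maxHarmonics, int numLevels)
{
    const float effectiveFreq = baseFreqHz * std::max(1.0f, std::fabs(gPrime));
    const float nyquist = 0.5f * sampleRate;
    int level = 0;
    // Step to a narrower table until its highest harmonic stays below Nyquist.
    while (level < numLevels - 1 &&
           effectiveFreq * static_cast<float>(maxHarmonics >> level) > nyquist)
        ++level;
    return level;
}
```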


OK, those are my ideas so far. I am happy for any input!

Post

I think the situation here is basically the same as with FM: in theory you get infinite sidebands. So the question you might want to ask is "how much do I need to oversample to get clean enough results" which is a question best answered by trial and error. Choosing a wavetable mipmap with less bandwidth can probably mitigate the aliasing, but keep in mind that varying the mipmap at audio rates will also give you a bunch of sidebands.

Synthesizing new wavetables on the fly (e.g. using an FFT) is also possible and something to consider. Basically, if you perform the phase distortion on a single-cycle waveform it will alias, but because the sampling is synchronous with the waveform, the aliasing falls on top of valid harmonics (i.e. the result is "wrong", but you avoid the inharmonic stuff). You can then FFT, clear out the bins above the bandwidth limit, and either IFFT or use additive synthesis to reconstruct. The main issue (other than the "wrong" results) is that it gets expensive if you want a high modulation rate.
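
Roughly, the reconstruction step would look something like this (a sketch with made-up names; a naive DFT is used only to keep it self-contained, a real implementation would use an FFT library):

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Take one phase-distorted cycle sampled synchronously, transform it, keep only
// bins up to the allowed harmonic, and reconstruct a bandlimited cycle.
std::vector<float> resynthesizeBandlimited(const std::vector<float>& distortedCycle,
                                           int maxHarmonic)
{
    const std::size_t n = distortedCycle.size();
    const float twoPi = 6.283185307f;

    // Forward transform, computing only the bins we intend to keep.
    std::vector<std::complex<float>> bins(n / 2, {0.0f, 0.0f});
    for (std::size_t k = 0; k < bins.size() && static_cast<int>(k) <= maxHarmonic; ++k)
    {
        for (std::size_t i = 0; i < n; ++i)
        {
            const float phase = -twoPi * static_cast<float>(k) * static_cast<float>(i)
                                / static_cast<float>(n);
            bins[k] += distortedCycle[i]
                       * std::complex<float>(std::cos(phase), std::sin(phase));
        }
    }

    // Additive reconstruction from the surviving bins (equivalent to an IFFT of
    // the truncated spectrum for a real signal).
    std::vector<float> out(n, 0.0f);
    for (std::size_t i = 0; i < n; ++i)
    {
        float acc = bins[0].real();                         // DC
        for (std::size_t k = 1; k < bins.size() && static_cast<int>(k) <= maxHarmonic; ++k)
        {
            const float phase = twoPi * static_cast<float>(k) * static_cast<float>(i)
                                / static_cast<float>(n);
            // Factor 2 accounts for the conjugate-symmetric negative frequencies.
            acc += 2.0f * (bins[k].real() * std::cos(phase)
                         - bins[k].imag() * std::sin(phase));
        }
        out[i] = acc / static_cast<float>(n);
    }
    return out;
}
```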

In any case, I don't think there's any truly elegant solution here, so you'll probably have to try and come up with a compromise that's good enough for your purposes.

Post

mystran wrote: Sat May 08, 2021 12:37 pm I think the situation here is basically the same as with FM: in theory you get infinite sidebands.
It's also the same as waveshaping: g(x) is the "input signal", being waveshaped by f(x). FM and waveshaping have a lot in common, so techniques can often be shared between the two.

This means you could use things like anti-derivative aliasing-reduction, which combine very well with low-degree oversampling. You just need an integrated version of f(x), which could be done numerically if there isn't a neat expression for it.
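
To make that concrete, here is a rough sketch of the standard first-order scheme in this setting (illustrative only; f and F are passed in as plain functions, and F would typically be the numerically integrated table):

```cpp
#include <cmath>
#include <functional>
#include <utility>

// First-order ADAA: the "input" is u[n] = g(x[n]) and the "waveshaper" is the
// wavetable f, with antiderivative F.
class FirstOrderADAA
{
public:
    FirstOrderADAA(std::function<float(float)> f, std::function<float(float)> F)
        : f_(std::move(f)), F_(std::move(F)) {}

    float process(float u)
    {
        const float diff = u - lastU_;
        float y;
        if (std::fabs(diff) < 1.0e-5f)
            y = f_(0.5f * (u + lastU_));        // ill-conditioned: fall back to midpoint
        else
            y = (F_(u) - F_(lastU_)) / diff;    // (F(u[n]) - F(u[n-1])) / (u[n] - u[n-1])
        lastU_ = u;
        return y;
    }

private:
    std::function<float(float)> f_, F_;
    float lastU_ = 0.0f;
};
```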

Post

It's far better not to introduce aliasing than to try to remove it after the fact. That said, consider hybrid approaches as well: you may be able to optimize the pre-morphing to, say, cut the aliasing in half, and then use an anti-aliasing filter to mitigate the remainder.
I started on Logic 5 with a PowerBook G4 550Mhz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

OK cool, thanks everybody for the input! For the time being I will just prototype the oversampling approach. The "IFFT on demand" idea is something I hadn't considered, which might help as well.
signalsmith wrote: Sat May 08, 2021 1:18 pm This means you could use things like anti-derivative aliasing-reduction, which combine very well with low-degree oversampling.
Do you happen to have resources on anti-derivative aliasing reduction? I am not familiar with this method. I found the paper "Antiderivative Antialiasing for Memoryless Nonlinearities", but have yet to dig into it.

Post

This was a good introduction:
https://jatinchowdhury18.medium.com/pra ... 847167f510

There are linked papers too if I remember correctly.

Post

rafa1981 wrote: Sun May 09, 2021 1:18 pm This was a good introduction:
https://jatinchowdhury18.medium.com/pra ... 847167f510

There are linked papers too if I remember correctly.
Cool, thanks so much. This looks super promising - it even eliminates the need to analytically integrate every morphing function I come up with, since it is the wavetables that get integrated. As stated before, this can easily be done numerically.
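
For the record, the numerical integration is basically just a running trapezoidal sum over the table. A sketch (names are made up):

```cpp
#include <cstddef>
#include <vector>

// Build a running trapezoidal integral of the wavetable so F can be looked up
// (and interpolated) just like f. 'dx' is the phase step between entries,
// e.g. 1.0f / table.size().
std::vector<float> integrateTable(const std::vector<float>& table, float dx)
{
    std::vector<float> integral(table.size() + 1, 0.0f);
    for (std::size_t i = 0; i < table.size(); ++i)
    {
        const float next = table[(i + 1) % table.size()];
        integral[i + 1] = integral[i] + 0.5f * (table[i] + next) * dx;  // trapezoid
    }
    return integral;  // integral[i] ~ F(i * dx), one extra entry for the wrap
}
```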

I'll start with the simplest prototype and work my way up from there. :tu:

Post

I wanted to report back:

I implemented the combination of oversampling and ADAA, where the morphing function is "waveshaped" by the actual wavetable via ADAA. The whole thing sounds pretty good to my ears, even at 2x oversampling. I have not yet measured the aliasing quantitatively, though. That isn't really possible at the moment, because the waveshaping introduces a DC offset which clutters the frequency plots, so I will have to get rid of that first.
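
For the DC offset, a standard one-pole DC blocker should be enough before measuring. A sketch:

```cpp
// One-pole DC blocker: y[n] = x[n] - x[n-1] + R * y[n-1], with R close to 1.
class DCBlocker
{
public:
    explicit DCBlocker(float R = 0.995f) : R_(R) {}

    float process(float x)
    {
        const float y = x - lastX_ + R_ * lastY_;
        lastX_ = x;
        lastY_ = y;
        return y;
    }

private:
    float R_;
    float lastX_ = 0.0f, lastY_ = 0.0f;
};
```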

That being said, the whole operation is pretty taxing performance-wise at the moment: a single static oscillator puts my CPU at around 8%. As a next step I am looking into vectorizing the sample-based code with AVX, since I later plan on adding multiple unison voices to a single oscillator.
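
To give an idea of the kind of AVX batching I mean (just a sketch of the phase-accumulator part; the table lookups would still need a gather or a scalar loop):

```cpp
#include <immintrin.h>

// Advance 8 unison voice phases at once and wrap them into [0, 1).
void advancePhases8(float* phases, const float* increments)
{
    __m256 ph  = _mm256_loadu_ps(phases);
    __m256 inc = _mm256_loadu_ps(increments);
    ph = _mm256_add_ps(ph, inc);
    ph = _mm256_sub_ps(ph, _mm256_floor_ps(ph));            // wrap into [0, 1)
    _mm256_storeu_ps(phases, ph);
}
```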

Another thing that came to mind while developing: the approach can be viewed as feeding the morphing function as a direct input to the ADAA wavetables. These functions can themselves be heavily aliased (e.g. a clamping function), and the ADAA does nothing to prevent that aliasing, since it happens before ADAA is even applied. The oversampling still helps with this, but it might be necessary to generate anti-aliased morph functions as well. I'm thinking of bandlimiting them, or even employing another layer of ADAA before the current one.
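
As one option, the hard clamp could be swapped for a smooth (C1-continuous) soft clamp so that g has no corners in the first place. A sketch:

```cpp
// Cubic soft clamp: saturates at +/-1, with the derivative reaching zero
// exactly at the edges, so there is no corner in g.
inline float softClamp(float x)
{
    if (x >  1.5f) return  1.0f;
    if (x < -1.5f) return -1.0f;
    const float t = x / 1.5f;
    return 1.5f * (t - t * t * t / 3.0f);
}
```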
