Richard_Synapse
KVRian
 
845 posts since 19 Dec, 2010

Postby Richard_Synapse; Thu Aug 09, 2018 12:57 am Re: Wavetable oscillator implementation

matt42 wrote:I wonder what kind of interpolation schemes are used? Especially if working at base sample rate.

From some tests I did resampling a sine wave I found that, IIRC, a sinc kernel length of over 70 was needed to keep interpolation artifacts very low. I was designing the interpolator for high-quality conversions, so keeping the noise floor below 100 dB would likely be overkill for a synth. What noise floor would be acceptable in general for an oscillator? Below 60 dB?


If we take analog hardware as a reference for how many artifacts are acceptable, then 60 dB is already very good.

But the question is really impossible to answer, as it is subjective, and furthermore depends on other factors, e.g. the overall architecture of the synth. There is no evidence that a noise-free waveform with perfect harmonics sounds better than a waveform with (minor) artifacts. From a sound design perspective, both clean and dirty oscillators can be put to good use 8)

Richard
Synapse Audio Software - www.synapse-audio.com
Urs
u-he
 
22368 posts since 7 Aug, 2002, from Berlin

Postby Urs; Thu Aug 09, 2018 1:10 am Re: Wavetable oscillator implementation

karrikuh wrote:
Urs wrote:
mtytel wrote:
Urs wrote:With scannable wavetables of 100+ frames, instead of the waveform I'd simply store the spectrum of each. Then, each time after processing 256 or so samples, I'd copy a spectrum to a new table, zero any harmonics which would alias, run an iFFT and crossfade between old table and new until another is due.


I think it depends what features you want. I've got a unison feature where the voices scan the wavetable at different points. Running an iFFT for every voice, every 256 samples would probably make a CPU (c/f)ry.

I do this with unison, too. The crossfade between old and new is independent of oscillator phase / readout position. (Zebra has worked like this since... forever...)

What size of FFT are we talking about? I played with this idea in the past but figured it to be too expensive. I was assuming a worst-case size of 16384 (1024 harmonics and 8x oversampling to reduce interpolation artifacts). Maybe with this scheme, one should ditch the FFT-oversampling and use the critically bandlimited 2048-sample wavetable with a higher-order interpolation?

When oversampling you can multiply those 256 samples by your oversampling factor.

I use this method to *avoid* the need for oversampling. So I do 2048 sample sized tables.
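For illustration, the scheme reads roughly like this in Python (a naive sketch with an O(N²) inverse DFT standing in for the iFFT; all names are mine, not Zebra's actual code):

```python
import math

def build_table(harmonic_amps, table_size, max_harmonic):
    """Inverse DFT of a harmonic spectrum into a single-cycle table.

    harmonic_amps[k-1] is the amplitude of harmonic k; any harmonic
    above max_harmonic (it would alias at the current pitch) is zeroed.
    A real implementation would use an FFT instead of this naive loop.
    """
    table = []
    for n in range(table_size):
        phase = 2.0 * math.pi * n / table_size
        s = 0.0
        for k, amp in enumerate(harmonic_amps, start=1):
            if k > max_harmonic:
                break  # drop the harmonics that would alias
            s += amp * math.sin(k * phase)
        table.append(s)
    return table

def crossfade(old, new, t):
    """Linear crossfade between the old and new tables, t in [0, 1]."""
    return [(1.0 - t) * o + t * n for o, n in zip(old, new)]

# Sawtooth-like spectrum: amplitude 1/k for harmonic k.
amps = [1.0 / k for k in range(1, 33)]
old_table = build_table(amps, 64, max_harmonic=32)  # low pitch: all harmonics
new_table = build_table(amps, 64, max_harmonic=8)   # high pitch: bandlimited
mid = crossfade(old_table, new_table, 0.5)          # halfway through the fade
```

In the scheme described above, a new table would be built every 256 (times the oversampling factor) output samples and the crossfade would run over that block.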
Urs
u-he
 
22368 posts since 7 Aug, 2002, from Berlin

Postby Urs; Thu Aug 09, 2018 1:20 am Re: Wavetable oscillator implementation

matt42 wrote:I wonder what kind of interpolation schemes are used? Especially if working at base sample rate.

With a wavetable size of 2048 samples, which is always bandlimited at around Nyquist, I find linear interpolation sufficient.

Many years ago there was a thread where one software vendor compared various sample playback engines from his and other software. In that thread, or in relation to it, the developer of the best algorithm posted the SSE-optimized code for his windowed-sinc interpolator. IIRC, kernel lengths of 11 or 15 taps were sufficient for transposing arbitrary material several octaves.
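For reference, linear-interpolated table readout is only a few lines (an illustrative Python sketch, not the SSE code mentioned above):

```python
def wavetable_osc(table, freq, sample_rate, num_samples):
    """Read a single-cycle wavetable with linear interpolation.

    The phase accumulator stays in [0, 1); the fractional part of the
    table index selects the two neighbouring samples to blend.
    """
    size = len(table)
    out = []
    phase = 0.0
    inc = freq / sample_rate  # cycles advanced per output sample
    for _ in range(num_samples):
        pos = phase * size
        i = int(pos)
        frac = pos - i
        a = table[i]
        b = table[(i + 1) % size]  # wrap at the table end
        out.append(a + frac * (b - a))
        phase += inc
        if phase >= 1.0:
            phase -= 1.0
    return out

# Demo usage on a crude 8-sample ramp (for illustration only; a real
# table would be a bandlimited 2048-sample cycle).
saw = [2.0 * (n / 8) - 1.0 for n in range(8)]
block = wavetable_osc(saw, 440.0, 44100.0, 64)
```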
matt42
KVRian
 
1044 posts since 9 Jan, 2006

Postby matt42; Thu Aug 09, 2018 3:35 am Re: Wavetable oscillator implementation

Urs wrote:With a wavetable size of 2048 samples, which is always bandlimited at around Nyquist, I find linear interpolation sufficient.
Thanks Urs, your reply made me realize I hadn't thought it through properly.

Of course, for a wavetable, the resolution of the table can be increased as much as needed.

The really silly part is that in the sinc interpolation test I mentioned, I implemented the kernel as a high-res table with, yup, linear interpolation :dog:

I blame this level of muddy thinking on the fact that I'm recovering from jet lag after a long-haul flight.

Also, thanks Aleksey and Richard for your replies. And yes, Richard, I agree there's definitely a place for noisy oscillators :D
earlevel
KVRist
 
473 posts since 4 Apr, 2010

Postby earlevel; Thu Aug 09, 2018 9:02 am Re: Wavetable oscillator implementation

karrikuh wrote:
What size of FFT are we talking about? I played with this idea in the past but figured it to be too expensive. I was assuming a worst-case size of 16384 (1024 harmonics and 8x oversampling to reduce interpolation artifacts). Maybe with this scheme, one should ditch the FFT-oversampling and use the critically bandlimited 2048-sample wavetable with a higher-order interpolation?

For table size, you can consider just the harmonics needed, at the lowest frequency, no extra oversampling needed. Why? Because at low frequencies, that sample resolution is high. At higher frequencies, the relative oversampling for a constant table size increases.

So, for 40Hz, you need a 2048 table to enjoy all audible harmonics of a sawtooth wave. It's not oversampled, played back at the native rate of the table, but your sample resolution is high—the phase you want the value for is always within half of 1/2048th of the table. At high frequencies, a full cycle is just a few samples, around 4 samples at 11k, but at that rate it's oversampled about 512x for a 2048 table.

2048 or 4096 are good choices for audio rate; the main reason to go beyond that is so you can sweep a bright waveform (sawtooth) down to sub-audio without the bandlimited nature becoming apparent.
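The arithmetic behind those figures can be checked in a few lines (assuming a 44.1 kHz sample rate; the function name is mine):

```python
def relative_oversampling(table_size, freq, sample_rate=44100.0):
    """Ratio of the table's samples per cycle to the samples per cycle
    that playback at this pitch actually consumes."""
    samples_per_cycle = sample_rate / freq  # output samples in one cycle
    return table_size / samples_per_cycle

# A 2048-sample table at 40 Hz plays near the table's native rate...
low = relative_oversampling(2048, 40.0)       # just under 2x
# ...but at ~11 kHz a cycle is only ~4 output samples, so the same
# table is effectively oversampled about 512x.
high = relative_oversampling(2048, 11025.0)
```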
My audio DSP blog: earlevel.com
mystran
KVRAF
 
4979 posts since 11 Feb, 2006, from Helsinki, Finland

Postby mystran; Thu Aug 09, 2018 9:45 am Re: Wavetable oscillator implementation

The linear interpolation artifacts can be sort of audible with very low fundamentals when the harmonic limit from the wavetable size starts getting into the audible range, but since most waveforms have some spectral fall-off anyway, the artifacts tend not to be all that obvious unless you are specifically looking for them and/or doing an ABX vs. something better. If you also force a little bit of low-pass taper into the spectra just before the harmonic limit (whether it's because of the harmonics hitting Nyquist or the wavetable size), this is even less obvious.
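A low-pass taper like the one described might look like this (a Python sketch using an arbitrary raised-cosine fade over the top quarter of the harmonics; the shape and fraction are illustrative choices, not a specific product's):

```python
import math

def taper_spectrum(amps, taper_fraction=0.25):
    """Fade the top fraction of harmonic amplitudes with a raised
    cosine, so the spectrum rolls off smoothly toward the harmonic
    limit instead of cutting off hard."""
    n = len(amps)
    start = int(n * (1.0 - taper_fraction))
    out = list(amps)
    for k in range(start, n):
        t = (k - start) / max(1, n - start)  # 0 at taper start, ->1 at top
        out[k] = amps[k] * 0.5 * (1.0 + math.cos(math.pi * t))
    return out

# Sawtooth-like spectrum, tapered before the harmonic limit.
amps = [1.0 / k for k in range(1, 65)]
tapered = taper_spectrum(amps)
```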
karrikuh
KVRist
 
292 posts since 6 Apr, 2008

Postby karrikuh; Thu Aug 16, 2018 1:04 am Re: Wavetable oscillator implementation

earlevel wrote:
karrikuh wrote:
What size of FFT are we talking about? I played with this idea in the past but figured it to be too expensive. I was assuming a worst-case size of 16384 (1024 harmonics and 8x oversampling to reduce interpolation artifacts). Maybe with this scheme, one should ditch the FFT-oversampling and use the critically bandlimited 2048-sample wavetable with a higher-order interpolation?

For table size, you can consider just the harmonics needed, at the lowest frequency, no extra oversampling needed. Why? Because at low frequencies, that sample resolution is high. At higher frequencies, the relative oversampling for a constant table size increases.

So, for 40Hz, you need a 2048 table to enjoy all audible harmonics of a sawtooth wave. It's not oversampled, played back at the native rate of the table, but your sample resolution is high—the phase you want the value for is always within half of 1/2048th of the table. At high frequencies, a full cycle is just a few samples, around 4 samples at 11k, but at that rate it's oversampled about 512x for a 2048 table.

2048 or 4096 are good choices for audio rate; the main reason to go beyond that is so you can sweep a bright waveform (sawtooth) down to sub-audio without the bandlimited nature becoming apparent.


Thanks, I never thought of it that way. So it seems using a constant number of samples per cycle, independent of playback frequency, is the way to go. Right now I'm using cycle sizes inversely proportional to frequency (in steps of 1/2 octave), that is, with a constant oversampling factor. Switching to a constant size would greatly simplify my code and potentially make it faster, too...
earlevel
KVRist
 
473 posts since 4 Apr, 2010

Postby earlevel; Thu Aug 16, 2018 9:35 am Re: Wavetable oscillator implementation

karrikuh wrote:Thanks, I never thought of it that way. So it seems using a constant number of samples per cycle, independent of playback frequency, is the way to go. Right now I'm using cycle sizes inversely proportional to frequency (in steps of 1/2 octave), that is, with a constant oversampling factor. Switching to a constant size would greatly simplify my code and potentially make it faster, too...

One way to minimize memory that seemed to work well was to use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies. In my website (multi-part) article and code I allow for experimentation with constant or proportional table, mainly for educational purposes. A constant table size is the way unless you're on memory-constrained hardware and not a regular computer.

I'll elaborate on one point, for the sake of completeness. In my earlier explanation I said that for the low table, no oversampling is fine because you have such good resolution there. That is completely true for the low harmonics, but of course resolution drops for higher harmonics, halved for every octave. For instance, if you create a waveform that's only the 500th harmonic...well, you could only play or hear it in the very low end of the keyboard so it would be lame, but it would be fairly noisy too—lack of resolution. That's not realistic, but the same is true of the upper harmonics of a sawtooth when played in the lowest octaves. However, that error is masked by the stronger (and clean) lower harmonics in a musically useful waveform, and in practice it can be ignored. Most smart people (Urs mentioned it in this thread, rbj has mentioned it to me in the past) happily use a single cycle for the lowest octave. Of course there's always 16k or 32k tables for the paranoid! Memory is cheap, and your bank of waves can be stored as a single-cycle table and expanded as needed. :wink:
My audio DSP blog: earlevel.com
mystran
KVRAF
 
4979 posts since 11 Feb, 2006, from Helsinki, Finland

Postby mystran; Fri Aug 17, 2018 2:33 am Re: Wavetable oscillator implementation

earlevel wrote:One way to minimize memory that seemed to work well was to use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies.


I feel like the situations where this would really be useful are somewhat limited, though. If you have something like 2 tables per octave over 10 octaves with 2k samples, in a 1D array of 256 slices, then that's about 40MB of data in single precision. While that might not be an entirely trivial amount, it's not usually a huge problem these days either. But then if you want another dimension (e.g. let's say the user can pick some spectral effect to be calculated) with another 256 steps, suddenly we're talking about 512MB even without any mipmaps... so now you're probably stuck computing the stuff on the fly anyway.

So I feel like most of the time the memory is either not a problem, or it's such a huge issue that some minor trimming is not going to save it. I suppose there are some scenarios where such memory footprint optimizations could be useful, but it still feels to me like it's way more limited in practice than it might sound at first thought, and really might not be worth the quality loss (or the additional CPU to combat that) in practice.

YMMV. :)
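For what it's worth, the memory figures above check out (a quick sketch; the helper name is mine):

```python
def wavetable_memory_bytes(tables_per_octave, octaves, table_size,
                           slices, bytes_per_sample=4):
    """Total memory for a bank of single-precision wavetable mips."""
    return (tables_per_octave * octaves * table_size
            * slices * bytes_per_sample)

# 2 mips/octave * 10 octaves * 2048 samples * 256 slices * 4 bytes:
mip_set_mb = wavetable_memory_bytes(2, 10, 2048, 256) / (1024 * 1024)  # 40.0
# One extra 256-step dimension, with no mipmaps at all:
extra_dim_mb = wavetable_memory_bytes(1, 1, 2048, 256 * 256) / (1024 * 1024)  # 512.0
```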
earlevel
KVRist
 
473 posts since 4 Apr, 2010

Postby earlevel; Fri Aug 17, 2018 9:01 am Re: Wavetable oscillator implementation

mystran wrote:
earlevel wrote:One way to minimize memory that seemed to work well was to use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies.


I feel like the situations where this would really be useful are somewhat limited, though. If you have something like 2 tables per octave over 10 octaves with 2k samples, in a 1D array of 256 slices, then that's about 40MB of data in single precision. While that might not be an entirely trivial amount, it's not usually a huge problem these days either. But then if you want another dimension (e.g. let's say the user can pick some spectral effect to be calculated) with another 256 steps, suddenly we're talking about 512MB even without any mipmaps... so now you're probably stuck computing the stuff on the fly anyway.

So I feel like most of the time the memory is either not a problem, or it's such a huge issue that some minor trimming is not going to save it. I suppose there are some scenarios where such memory footprint optimizations could be useful, but it still feels to me like it's way more limited in practice than it might sound at first thought, and really might not be worth the quality loss (or the additional CPU to combat that) in practice.

YMMV. :)

Sure—you omitted, from the paragraph you quoted, "In my website (multi-part) article and code I allow for experimentation with constant or proportional table, mainly for educational purposes. A constant table size is the way unless you're on memory-constrained hardware and not a regular computer." Just want to make sure you noticed that.

Yes, there's little reason to skimp on memory, constant table sizes are convenient, and the increased oversampling is what you want. Still, it's good to understand the trade-off if you need to do it in a dedicated module. Also, the memory savings can be large if you decide you want to stick with wavetables solely (obviously, there are other ways), but still be able to run the oscillator down to sub-audio (I love the sound of a sawtooth that retains all harmonics as you sweep it down; it's annoying when it doesn't). You could have 16k or more for the lowest table, and scale back from there.

But mainly, I think it's good to think about (and experiment with) what's acceptable and why—that with constant tables we're essentially saying we can accept little or no oversampling at low frequencies, but need more as we go higher. For instance, I think many (including myself) consider 2k tables with linear interpolation the minimum but perfectly acceptable solution for an audio-range oscillator for arbitrary waveforms in a synth. Yet if you think about the highest harmonics for the low fundamental frequency range, "in theory" it's unusable, even if fine in practice.
My audio DSP blog: earlevel.com
mystran
KVRAF
 
4979 posts since 11 Feb, 2006, from Helsinki, Finland

Postby mystran; Fri Aug 17, 2018 9:30 am Re: Wavetable oscillator implementation

earlevel wrote:Sure—you omitted, from the paragraph you quoted, "In my website (multi-part) article and code I allow for experimentation with constant or proportional table, mainly for educational purposes. A constant table size is the way unless you're on memory-constrained hardware and not a regular computer." Just want to make sure you noticed that.


Yeah, I suppose I might have failed to notice that. I did (sort of) read the whole thing, but I admit I wasn't very thorough. :D

earlevel wrote:But mainly, I think it's good to think about (and experiment with) what's acceptable and why—that with constant tables we're essentially saying we can accept little or no oversampling at low frequencies, but need more as we go higher. For instance, I think many (including myself) consider 2k tables with linear interpolation the minimum but perfectly acceptable solution for an audio-range oscillator for arbitrary waveforms in a synth. Yet if you think about the highest harmonics for the low fundamental frequency range, "in theory" it's unusable, even if fine in practice.


I feel like there are good arguments why 2k with linear interpolation ends up as a sort of "sweet spot" overall. It's obviously not perfect, but if you wanted to spend twice the memory, oftentimes doubling the number of mips is a better idea... and while you could use something like cubic Hermite interpolation, that's only really going to make a difference for the high-frequency content of low-fundamental sounds, which quite often gets masked further by low-pass filtering anyway.
Andrew Souter
KVRAF
 
2307 posts since 12 Sep, 2008

Postby Andrew Souter; Sat Aug 18, 2018 3:15 am Re: Wavetable oscillator implementation

earlevel wrote:
2048 or 4096 are good choices for audio rate; the main reason to go beyond that is so you can sweep a bright waveform (sawtooth) down to sub-audio without the bandlimited nature becoming apparent.


...however a "bright" waveform such as a raw saw or square has a 1/f amplitude rolloff, so the 1024th partial is down ~60 dB anyway... and one could project that the average waveform has even less energy at the 1024th partial.

2048 samples, 1024 partials, is enough 99% of the time...
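That ~60 dB figure follows directly from the 1/n amplitude law (a quick check; the function name is illustrative):

```python
import math

def partial_level_db(n):
    """Level of the n-th partial of an ideal sawtooth (amplitude 1/n),
    relative to the fundamental."""
    return 20.0 * math.log10(1.0 / n)

db_1024 = partial_level_db(1024)  # roughly -60.2 dB
```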
earlevel
KVRist
 
473 posts since 4 Apr, 2010

Postby earlevel; Sat Aug 18, 2018 9:10 am Re: Wavetable oscillator implementation

Andrew Souter wrote:
earlevel wrote:
2048 or 4096 are good choices for audio rate; the main reason to go beyond that is so you can sweep a bright waveform (sawtooth) down to sub-audio without the bandlimited nature becoming apparent.


...however a "bright" waveform such as a raw saw or square has a 1/f amplitude rolloff, so the 1024th partial is down ~60 dB anyway... and one could project that the average waveform has even less energy at the 1024th partial.

2048 samples, 1024 partials, is enough 99% of the time...

I'm not sure whether you got my point—you seem to have misinterpreted it, but I might be wrong.

What I meant was... play this on your good monitors or headphones (unfortunately, it's an mp3 for better browser compatibility, but good enough to get the point across):

http://www.earlevel.com/main/wp-content/uploads/2012/06/saw-sub-audio-test-20-20k-2048-20s.mp3

By 5 seconds you're already losing apparent harmonics; by 8, 9, 10 it sounds like you're sweeping a lowpass filter down with it... A real sawtooth would not get dull as the pitch dropped.

But here's with 32k tables:

http://www.earlevel.com/main/wp-content/uploads/2012/06/saw-sub-audio-test-20-20k-2048-+-32768-ramp-20s.mp3

As I said, 2048 is a good minimum number. I like going another octave with 4096, which gives full bandwidth down to 20 Hz and allows going a bit lower before crapping out.
My audio DSP blog: earlevel.com
earlevel
KVRist
 
473 posts since 4 Apr, 2010

Postby earlevel; Sat Aug 18, 2018 9:14 am Re: Wavetable oscillator implementation

(accidental repost)
My audio DSP blog: earlevel.com
noizebox
KVRer
 
20 posts since 19 Nov, 2012, from Stockholm, Sweden

Postby noizebox; Sun Aug 19, 2018 9:36 am Re: Wavetable oscillator implementation

earlevel wrote:One way to minimize memory that seemed to work well was to use minimum tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables constant size (64 samples, for instance) so they get progressively better resolution for higher frequencies.


Very interesting idea, I'll try that when I have time. Expanding the top octaves costs very little in terms of memory use. My rationale for using mipmapped tables, though, was partly that while memory is cheap, caches are not, and having a constant cycle length for all octaves will make you skip a lot of samples when reading (though you get very good accuracy). Mipmapped tables make sure that you are reading data reasonably sequentially (skipping 1 sample at most or something), which should play nicely with cache predictors. And for the highest octaves, the whole cycle should fit in one or a few cache lines anyway.

In my own experiments (nothing that has resulted in a complete synth yet, though) I've generated mipmapped tables, 1 per octave, with 2x oversampling (no oversampling wasn't really good enough with only linear interpolation), for a cycle size of 4096 samples for the lowest octave, and read with linear interpolation at 2x oversampling. It's also possible to shift everything 1 octave up and use linear interpolation without any oversampling, though that causes noticeable artefacts when sweeping, as it jumps from one octave to another. Total table size 8096 samples stored as floats, 32 kB per waveform.