earlevel wrote:One way to minimize memory that seemed to work well was to use minimum-size tables (inversely proportional to the base wavetable frequency, as you noted), but only up to a point—keep the top tables at a constant size (64 samples, for instance) so they get progressively better resolution at higher frequencies.
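To make the quoted scheme concrete, here's a minimal sketch of that sizing rule: halve the table size per octave, but clamp at a fixed floor. All names and numbers here are illustrative, not from any particular implementation:

```cpp
// Sketch: per-octave mipmap table sizes, inversely proportional to the
// base frequency of each level, with a constant-size floor for the top
// tables (hypothetical values).
#include <algorithm>
#include <cstdio>

int main() {
    const int kFloorSize  = 64;    // constant size for the top tables
    const int kBaseSize   = 2048;  // size of the lowest-octave table
    const int kNumOctaves = 10;

    for (int oct = 0; oct < kNumOctaves; ++oct) {
        // Halve per octave up, but never below the floor, so the top
        // tables get progressively more samples per harmonic.
        int size = std::max(kFloorSize, kBaseSize >> oct);
        std::printf("octave %2d: %5d samples\n", oct, size);
    }
}
```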
I feel like the situations where this would really be useful are somewhat limited, though. If you have something like 2 tables per octave over 10 octaves, with 2k samples in a 1D array of 256 slices, that's about 40MB of data in single precision. While that might not be an entirely trivial amount, it's not usually a huge problem these days either. But then if you want another dimension (e.g., the user can pick some spectral effect to be computed) with another 256 steps, suddenly we're talking about 512MB even without any mipmaps... so at that point you're probably stuck computing the tables on the fly anyway.
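A back-of-the-envelope check of those two figures, assuming the layout described (single-precision floats, sizes as stated):

```cpp
// Memory footprint for the hypothetical wavetable layouts above.
#include <cstdio>

int main() {
    const double kBytes  = 4.0;         // sizeof(float)
    const double kTables = 2.0 * 10.0;  // 2 tables/octave over 10 octaves
    const double kLen    = 2048.0;      // samples per table
    const double kSlices = 256.0;       // wavetable positions

    // One morph dimension, full mipmap set:
    double mb1 = kTables * kLen * kSlices * kBytes / (1024.0 * 1024.0);
    std::printf("mipmapped, 1 dimension:   %.0f MB\n", mb1);  // ~40 MB

    // Second 256-step dimension, base tables only (no mipmaps):
    double mb2 = kLen * kSlices * 256.0 * kBytes / (1024.0 * 1024.0);
    std::printf("2 dimensions, no mipmaps: %.0f MB\n", mb2);  // 512 MB
}
```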
So I feel like most of the time the memory is either not a problem, or it's such a huge issue that some minor trimming won't save it. I suppose there are scenarios where such memory-footprint optimizations could be useful, but it still feels way more limited in practice than it might sound at first, and it really might not be worth the quality loss (or the additional CPU to combat it).
Sure—you omitted, from the paragraph you quoted: "In my website (multi-part) article and code I allow for experimentation with constant or proportional table sizes, mainly for educational purposes. A constant table size is the way to go unless you're on memory-constrained hardware rather than a regular computer." Just want to make sure you noticed that.
Yes, there's little reason to skimp on memory; constant table sizes are convenient, and the increased oversampling is what you want. Still, it's good to understand in case you end up doing it in a dedicated module. Also, the memory savings can be large if you decide you want to stick solely with wavetables (obviously, there are other ways) but still be able to run the oscillator down to sub-audio (I love the sound of a sawtooth that retains all its harmonics as you sweep it down, and it's annoying when it doesn't). You could have 16k samples or more for the lowest table, and scale back from there.
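Rough illustration of how large that saving gets when you extend down into sub-audio—the octave count and sizes below are hypothetical, just to show the shape of the trade-off:

```cpp
// Proportional vs. constant table sizes for a sub-audio-capable
// oscillator: 16k lowest table, halved per octave down to a 64-sample
// floor, compared against using the full 16k at every level.
#include <algorithm>
#include <cstdio>

int main() {
    const int kOctaves = 15;     // e.g. ~1 Hz up past the audio range
    const int kLowest  = 16384;
    const int kFloor   = 64;

    long proportional = 0, constant = 0;
    for (int oct = 0; oct < kOctaves; ++oct) {
        proportional += std::max(kFloor, kLowest >> oct);
        constant     += kLowest;  // same 16k table at every level
    }
    std::printf("proportional: %ld samples, constant: %ld samples (%.1fx)\n",
                proportional, constant, (double)constant / proportional);
}
```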
But mainly, I think it's good to think about (and experiment with) what's acceptable and why—with constant tables we're essentially saying we can accept little or no oversampling at low frequencies, but need more as we go higher. For instance, I think many (including myself) consider 2k tables with linear interpolation the minimum but perfectly acceptable solution for an audio-range oscillator with arbitrary waveforms in a synth. Yet if you look at the highest harmonics in the low fundamental range, "in theory" that's unusable, even if it's fine in practice.
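A worked version of that point: with a constant 2048-sample table, look at how many samples per cycle the highest harmonic in each table gets. This assumes 44.1 kHz, one table per octave, and harmonics packed up to Nyquist—illustrative values, not anyone's actual design:

```cpp
// Effective oversampling of the *top* harmonic per table with constant
// 2048-sample tables: near-critical sampling at low fundamentals, lots
// of headroom at high ones.
#include <cstdio>

int main() {
    const double kFs  = 44100.0;
    const int    kLen = 2048;

    for (double f0 = 27.5; f0 < kFs / 4.0; f0 *= 2.0) {
        int topHarmonic = (int)(kFs / (2.0 * f0));   // highest under Nyquist
        double samplesPerCycle = (double)kLen / topHarmonic;
        std::printf("f0 %7.1f Hz: harmonic %4d -> %7.1f samples/cycle\n",
                    f0, topHarmonic, samplesPerCycle);
    }
}
```

At the bottom of the range the top harmonic lands around 2-3 samples per cycle—barely above its own Nyquist limit, which is why linear interpolation is "in theory" unusable there even though it sounds fine in practice.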