Is realtime oversampling a dirty/destructive process?

VST, AU, AAX, CLAP, etc. Plugin Virtual Effects Discussion

Post

@midnight wrote:Some developers have said that realtime oversampling in plugins isn't the way to go about things. The reason is usually that "resampling is an intensive process best done offline" and that doing it with low latency will result in sub-par sound quality.
OK, to the first sentence: if a process needs it, the process needs to do it. So I'm not sure what "isn't the way to go about it" means for most of the cases where a developer would use it.

About latency, I guess that you mean, for most folks, latency in addition to host buffer latency, and in a live-monitoring situation where latency compensation is not a fix. Obviously it's a lot bigger (relative) deal if you're talking about a sample-at-a-time system like Pro Tools in hardware.

But the latency doesn't need to be all that extravagant, even for linear phase, with multirate oversampling. Again, if the oversampling is needed, well, it's needed.

And of course there is some measurable degradation, just as there is with finite-precision calculations in general: filters, whatever. My earliest commercial product with oversampling was Amp Farm, first released in 1996, and the horsepower on the Digi hardware back then was pretty pathetic by today's standards. Sure, that's an amp simulator and not hi-fi, but the point is that you make the trade-offs and you get the job done.

In the end, if the plug-in doesn't sound good, don't use it, and if it does, use it. There is nothing made for audio that doesn't have tradeoffs. Certainly the antique gear that we prize today (consoles, LA-2A, etc.) is far from perfect. And we don't prize it because it is old, we prize it because it sounds good and survived the test of time. Other old stuff may have made different tradeoffs that people didn't like as well, and it faded into obscurity. Let it be the same with the digital stuff.
My audio DSP blog: earlevel.com

Post

bmanic wrote: I actually prefer minimum phase oversampling to the linear phase variant. I don't know why the linear phase oversampling modes annoy me so much but they do.

The human ear in general is not sensitive to phase differences in steady tones. You can test this yourself with some waveforms in my Galbanum Architecture Waveforms collection, where I have offered saw-spectrum (a = 1/f) waveforms with various phase shifts of the partials. There is generally no perceivable difference so long as the rest of the signal path down the line is linear (i.e. no distortion).

Phase differences are mostly/only perceptible on transient signals. Linear phase creates a symmetric impulse response, which results in both pre-ringing and post-ringing. Minimum (and more) phase results only in post-ringing. Pre-ringing is more readily perceivable, because the transient itself tends to psycho-acoustically mask the post-ringing, making it far less noticeable. Furthermore, linear-phase filters are not generally causal and therefore don't occur in the real/analog world, so we as humans are much more trained to expect the effects of minimum (or more) phase filters...
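For anyone who wants to see that pre-ring/post-ring difference in numbers rather than words, here is a small scipy sketch (generic filters, not any particular plugin's oversampling design):

```python
import numpy as np
from scipy.signal import firwin, minimum_phase, lfilter

fs = 44100
h_lin = firwin(255, 18000, fs=fs)    # symmetric (linear-phase) lowpass FIR
h_min = minimum_phase(h_lin)         # minimum-phase counterpart (magnitude only approximated)

click = np.zeros(1024)
click[0] = 1.0                       # a single "transient"
y_lin = lfilter(h_lin, [1.0], click)
y_min = lfilter(h_min, [1.0], click)

# The linear-phase output peaks at its group delay (~sample 127) and rings on
# BOTH sides of the peak; the minimum-phase output peaks almost immediately
# and only rings afterwards.
print(np.argmax(np.abs(y_lin)), np.argmax(np.abs(y_min)))
```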

Worrying about preserving phase exactly in something like a reverb is silly, as reverb is already a giant network of processes that F-up the phase of the signal in very pleasing ways, as Sean also said... Indeed the fundamental ingredient of many reverb designs, the allpass filter, is designed to affect ONLY the phase of the signal and nothing else. Additionally, as Sean also stated, minimum-phase filters have the most phase shift at the cutoff frequency, so in the case of OS filters we are generally talking about most of the phase distortion occurring outside or right at the edge of human hearing. Additionally, in reverb there is usually substantial EQ and damping action that removes most of this energy anyway...
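For the allpass point, the simplest possible example is a textbook first-order allpass section (nothing to do with any specific reverb design here): its magnitude is exactly 1 at every frequency, only the phase moves.

```python
import numpy as np
from scipy.signal import freqz

a = 0.5                      # allpass coefficient, |a| < 1 for stability
b, den = [a, 1.0], [1.0, a]  # H(z) = (a + z^-1) / (1 + a*z^-1)

w, H = freqz(b, den, worN=512)
print(np.allclose(np.abs(H), 1.0))       # True: perfectly flat magnitude response
print(np.degrees(np.angle(H[::128])))    # but the phase varies with frequency
```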

Post

I know all that.. we were talking about oversampling in general here. Not just in reverbs. :)

I'm mainly sensitive to exactly what you said, the transients. It's hard to explain how I hear it (or not.. in several ABX tests I've failed miserably, and in some I seemed to have clearly heard it). It feels like the signal is a tiny bit compressed and "strained", whereas the minimum-phase implementations are more "loose".

Many of the Voxengo plugins let you switch between linear-phase and minimum-phase oversampling, so you can easily A/B these. Wish all coders gave this option.

However, we are definitely talking about very subtle things here, even by my standards. :hihi: :P

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot

Post

Nice thread here. I do use oversampling on VCC and somehow I feel it causes phase issues too, anyone else? In my latest mixes with all these analog-modelling plugins I have phase issues; before, with purely digital plugins that emulate nothing, I never had them. Am I doing something wrong with them? The first time I ever had issues was with Waves SSL and Waves plugins in general... so I don't use any of them! Now I have those problems again with UAD, Liquid Mix and other plugins modelled on analog gear: everything becomes small and has phase issues. I never had anything like that with FabFilter or even the DAW's stock plugins!

Post

bmanic wrote:I know all that.. we were talking oversampling in general here. Not just in reverbs. :)
Ya, I know... :) My post was not directed at you specifically, I just quoted your post as a departure point for my own... It's just general comments on various related topics.
bmanic wrote: I'm mainly sensitive to exactly what you said, the transients. It's hard to explain how I hear it (or not.. in several ABX tests I've failed miserably and in some I seemed to have clearly heard it). It feels like the signal is a tiny bit compressed and "strained" whereas the minimal phase implementations are more "loose".

In many of the Voxengo plugins where you can easily switch between linear phase or minimum phase oversampling you can easily AB these. Wish all coders gave this option.
In our case we do have that option behind the scenes. We could easily expose it, but it would add substantial latency--like 500 to 1000 or so samples... This could be compensated by reducing the pre-delay, but that assumes there are at least 500 to 1000 samples of pre-delay in the current preset, which there isn't always. So it would either add extra/excessive pre-delay, or it would require a large latency in our case...

bmanic wrote: However, we are definitely talking about very subtle things here, even by my standards. :hihi: :P
Cheers!
bManic


Yes. Devil is in the details. :D

Post

Just my 5c on this topic.

Q: Why oversampling?
A: To remove aliasing. If nonlinear processing adds harmonics in a frequency range A, they're mirrored back into A and it sounds like dirt added to the signal. So let's do 4x oversampling: A B C D. Harmonics from A are added to B, C and D, and then mirrored back onto C, B and A. In A they're very low, so they can be ignored. But a stopband filter must be applied at the downsampling stage to completely filter out B, C and D and leave A untouched. (4x oversampling is only an example here; it could be 3x, for example.)
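To make the A/B/C/D picture concrete, here's a rough sketch of 4x oversampling around a waveshaper, leaning on scipy's polyphase resampler for the up/down filtering (illustration only, not how any particular plugin does it):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = 0.8 * np.sin(2 * np.pi * 6000 * t)      # a 6 kHz tone

def saturate(sig):
    return np.tanh(3.0 * sig)               # nonlinearity that adds harmonics

# Without oversampling, harmonics above 22.05 kHz fold straight back into A:
y_naive = saturate(x)

# With 4x oversampling, the harmonics land in B/C/D and the lowpass inside the
# downsampling step removes them before returning to 44.1 kHz:
x_up = resample_poly(x, 4, 1)
y_os = resample_poly(saturate(x_up), 1, 4)
```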

Q: Can this process affect the least-significant bits of the signal and make the sound dirty, or affect spatial information (the sense of depth)?
A: Yes. In practice, 32-bit (floating point) precision is not enough (you can notice the loss of depth). You should use 64-bit precision, which roughly doubles the CPU power used.
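For what it's worth, you can at least measure the size of the 32-bit vs 64-bit difference for the kind of FIR used in an oversampling stage; whether the number you get is audible is of course the part people argue about. A quick sketch with a generic filter, nothing product-specific:

```python
import numpy as np
from scipy.signal import firwin

h64 = firwin(511, 0.24)                  # sharp lowpass, float64 coefficients
h32 = h64.astype(np.float32)

rng = np.random.default_rng(0)
x64 = rng.standard_normal(1 << 16)       # float64 test signal
x32 = x64.astype(np.float32)

y64 = np.convolve(x64, h64)              # double-precision filtering
y32 = np.convolve(x32, h32)              # single-precision filtering

err = y64 - y32.astype(np.float64)
print(20 * np.log10(np.linalg.norm(err) / np.linalg.norm(y64)))  # difference in dB
```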

Q: How sharp should this stopband filter be?
A: Very sharp. It should not affect A up to 21 kHz and must fall to -100 dB at the very beginning of B. Otherwise the frequency response of A can be damaged, and the aliasing will only be reduced a bit rather than completely removed.
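A quick back-of-the-envelope for how sharp "very sharp" is: a Kaiser-window estimate of a linear-phase FIR that is flat to 21 kHz and about 100 dB down by 22.05 kHz, running at 4x 44.1 kHz (an equiripple design would come out somewhat shorter):

```python
from scipy.signal import kaiserord

fs = 4 * 44100                              # 176.4 kHz processing rate
width = (22050 - 21000) / (fs / 2)          # transition width, normalized to Nyquist
numtaps, beta = kaiserord(100.0, width)     # 100 dB of stopband attenuation

print(numtaps)              # on the order of a thousand taps
print(numtaps // 2)         # group delay in samples at the 4x rate
print(numtaps // 2 / 4)     # roughly 135 samples (~3 ms) referred back to 44.1 kHz
```

Numbers in this region are presumably where latency figures like the 500-to-1000 samples mentioned earlier in the thread come from, depending on which rate you count them at.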

Q: Can this filter have zero latency?
A: Yes, for low frequencies. But high frequencies will be delayed, and the sharper the filter is, the more they are delayed. This can "blur" percussive sounds and remove the "in your face" effect from the mix.
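What "zero latency for lows, delayed highs" looks like can be read off the group delay of a minimum-phase version of such a filter. A generic sketch (exact numbers depend on the actual design):

```python
import numpy as np
from scipy.signal import firwin, minimum_phase, group_delay

fs = 176400
h_lin = firwin(1023, 21000, width=1050, fs=fs)   # sharp linear-phase prototype
h_min = minimum_phase(h_lin)                     # low-latency (minimum-phase) version

w, gd = group_delay((h_min, [1.0]), w=2048, fs=fs)
print(gd[np.searchsorted(w, 1000)])     # delay at 1 kHz: small
print(gd[np.searchsorted(w, 20500)])    # delay just below the cutoff: much larger
```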

Q: Can we have a linear-phase filter with a small number of taps and little latency (32 taps, for example)?
A: Such a short linear-phase filter affects the high frequencies with a ringing-like response, which doesn't sound pleasant.

Q: BTW, can you really hear a -3 dB change at 21 kHz?
A: Yes, it affects transients. They become "softer".

Q: Why isn't the DAW host's plugin latency compensation enough?
A: Because I want to use some plugins in live recording situations: for example, putting a compressor on the vocal monitoring, or a safety limiter on the master bus when recording overdubs, etc.

Q: What about Molot, with its 2-sample latency in "fast" oversampling mode?
A: Molot uses polynomial interpolation to calculate the samples in between (the reason for the latency). It also uses a very soft stopband filter (just 2nd order), so it only reduces aliasing by a small number of dB, but it has very small amplitude/phase distortion at high frequencies and I like how it colors the sound.
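For readers wondering what "polynomial interpolation to calculate the samples in between" means in code: the generic version is a 4-point cubic (Catmull-Rom) interpolator, which needs a couple of future samples, and that is exactly where a couple of samples of latency come from. A generic sketch of the idea, not Molot's actual code:

```python
import numpy as np

def catmull_rom_midpoint(xm1, x0, x1, x2):
    """Interpolated value halfway between x0 and x1 (Catmull-Rom at t = 0.5)."""
    return 0.5625 * (x0 + x1) - 0.0625 * (xm1 + x2)

def upsample_2x(x):
    y = np.empty(2 * len(x))
    xp = np.pad(x, (1, 2))        # the interpolator needs x[n-1] and x[n+2]
    for n in range(len(x)):
        y[2 * n] = x[n]
        y[2 * n + 1] = catmull_rom_midpoint(xp[n], xp[n + 1], xp[n + 2], xp[n + 3])
    return y
```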

So my conclusion is: at zero latency you have to strike some kind of balance between high-frequency amplitude/phase distortion and the amount of aliasing you remove.

PS: my answers are a bit simplified and don't pretend to be absolute, but in short, I think you'll find them useful. :-)
Vlad from Tokyo Dawn Labs

Post

My 2c:

Oversampling is a trade-off.

There is some kind of filtering inherent to an oversampled process, and at 44k1, that filtering is often audible.
But maybe less audible than the aliasing consequences of your process.

So:
- best solution: find ways to avoid generation of aliasing without oversampling
- medium solution: offer oversampling, so that the user has a way to swap out the aliasing for broken top-end and phase distortion.
- worst solution: do nothing and hope no one notices.

Dave.
[ DMGAudio ] | [ DMGAudio Blog ] | dave AT dmgaudio DOT com

Post

DaveGamble wrote: - best solution: find ways to avoid generation of aliasing without oversampling
word!

Post

thanks to all developers Valhalla, vladg, Galbanum, DMG and bootsie for chipping in - very interesting
Presets for u-he Diva -> http://swanaudio.co.uk/

Post

There are different oversampling algos. With the "cheap and dirty" ones you lose quality, mainly high frequency response. The better sounding ones are CPU intensive. One option is to use "cheap" upsampling and high quality downsampling algos.

Post

izonin wrote:There are different oversampling algos. With the "cheap and dirty" ones you lose quality, mainly high frequency response. The better sounding ones are CPU intensive. One option is to use "cheap" upsampling and high quality downsampling algos.
Try defending that position - it doesn't end well.

The implication of that claim is that there's some asymmetry about all this. And there ain't.

Unless you're talking SPECIFICALLY about EQ, where you're not actually generating any harmonics, in which case:
1. Why are you oversampling in the first place? You're not very likely to improve your top-end response by painting an antialias filter over it. Unless your EQ design algos are PROPERLY retro (RBJ/Dattorro).
2. There's no need to filter more than once, in fact it's pointless to do so. Just zero-pad the samples (which generates an aliased spectrum) and polyphase your implementation, then use one decent downsampling stage.
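For example, a rough sketch of the zero-pad-then-one-downsampling-stage idea for a linear (EQ-style) process, with scipy's upfirdn doing the polyphase bookkeeping (illustration only, obviously not DMGAudio code):

```python
import numpy as np
from scipy.signal import firwin, upfirdn, lfilter

def oversampled_eq_4x(x, b, a):
    """b, a: EQ coefficients designed for the 4x rate (hypothetical inputs)."""
    x_up = np.zeros(4 * len(x))
    x_up[::4] = 4.0 * x              # zero-stuff: creates the imaged spectrum
    y_up = lfilter(b, a, x_up)       # the EQ is linear, so no new content appears
    h = firwin(511, 0.98 / 4)        # the single decent downsampling filter
    return upfirdn(h, y_up, down=4)  # polyphase decimation back to 1x
```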

Dave.
[ DMGAudio ] | [ DMGAudio Blog ] | dave AT dmgaudio DOT com

Post

DaveGamble wrote:My 2c:

- medium solution: offer oversampling, so that the user has a way to swap out the aliasing for broken top-end and phase distortion.

Dave.
Yeah, often oversampling makes things sound worse. Depending on the process, the aliasing can be more transparent than what the oversampling does to the transients in some plugins.

Post

The term "over-sampling" isn't particularly adequate to describe most anti-aliasing strategies anyway.

I prefer thinking in terms of multi-rate algorithms, where everything is sampled "just right". And that doesn't mean that the signal needs to suffer from any brute force oversampling side-effects such as reduced bandwidth and impulse "pollution".

From my observations, it seems that aliasing in control signals often has much more severe sonic consequences than audio-signal aliasing. This is particularly evident in dynamics processors, where the effect of direct audio aliasing is negligible for certain tasks, since the process only doubles the bandwidth in the worst case.
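A toy example of that control-signal point in the dynamics context: the gain signal is the thing that tends to alias, so band-limiting it (here with a single one-pole smoother) does more good than brute-force oversampling of the audio path. A generic sketch, not TDR code:

```python
import numpy as np

def toy_compressor(x, fs, threshold_db=-20.0, ratio=4.0, smooth_ms=5.0):
    alpha = np.exp(-1.0 / (fs * smooth_ms * 1e-3))   # one-pole smoothing coefficient
    g_smooth = 1.0
    y = np.empty_like(x)
    for n, s in enumerate(x):
        level_db = 20.0 * np.log10(max(abs(s), 1e-9))
        over_db = max(level_db - threshold_db, 0.0)
        g_target = 10.0 ** (-over_db * (1.0 - 1.0 / ratio) / 20.0)
        # Without this smoothing the gain jumps from sample to sample, and that
        # wideband modulation of the audio is where the harsh aliasing comes from.
        g_smooth = alpha * g_smooth + (1.0 - alpha) * g_target
        y[n] = s * g_smooth
    return y
```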

BTW, for those interested, certain applications "under-sample" in order to improve precision issues or increase the runtime performance (such as IFIRs or advanced critically sampled filter-banks).
Fabien from Tokyo Dawn Records

Check out my audio processors over at the Tokyo Dawn Labs!

Post

DaveGamble wrote: 2. There's no need to filter more than once, in fact it's pointless to do so. Just zero-pad the samples (which generates an aliased spectrum) and polyphase your implementation, then use one decent downsampling stage.
Sounds logical. Will experiment with that. My approach is empirical and I always choose the architecture that gives me the best sound quality.

Post

bootsie wrote:
DaveGamble wrote: - best solution: find ways to avoid generation of aliasing without oversampling
word!


How would a plugin like FabFilter Saturn or a similar distortion plugin be able to create alias-free distortion without oversampling?

You can get away with a touch, but for adding a lot of distortion, don't you need to upsample to 192-384 kHz to approach analog bandwidth for the harmonics?
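Rough numbers on that: where the harmonics of a 6 kHz tone end up after folding around Nyquist, at 44.1 kHz versus a 4x internal rate.

```python
def fold(f, fs):
    """Frequency f after folding (aliasing) around the Nyquist of rate fs."""
    f = f % fs
    return fs - f if f > fs / 2 else f

for fs in (44100, 176400):
    print(fs, [round(fold(6000 * k, fs)) for k in range(1, 8)])
# At 44.1 kHz everything from the 4th harmonic up folds back into the audible
# band (30 kHz -> 14.1 kHz, 42 kHz -> 2.1 kHz); at 176.4 kHz nothing folds and
# every harmonic stays at its true frequency.
```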
Has anybody ever really been far even as decided to use even go want to do look more like?
