Best sound quality - run 96 kHz samplerate, do not oversample.


Post

Urs wrote:The problem is that the added harmonics in a signal chain add to each other. That is, a nonlinear process will also add harmonics to the ones already added by the previous process. So each time you double the samplerate, all you get is room for one more plugin in the signal chain before you end up with the same level of artifacts.
I have seen you post this kind of thing before, and I find it very interesting. It got me thinking, in fact, that tracking at 48 kHz then up-sampling to 96 kHz for mix-down might even be better than tracking at 96 kHz in the first place... would you agree with that? Assuming a high-quality SRC of course, e.g. the iZotope 64-bit SRC.

Or, what if I tracked at 96 kHz, then set up a batch process that re-sampled all those files to 48 kHz and back up again to filter out the top octave?
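
Something like this is what I have in mind (a rough sketch, assuming Python with the soundfile and scipy packages; the folder and file names are made up):

Code: Select all
import glob

import soundfile as sf
from scipy.signal import resample_poly

for path in glob.glob("tracks/*.wav"):
    audio, rate = sf.read(path)                # expecting 96 kHz source files
    down = resample_poly(audio, 1, 2, axis=0)  # 96 kHz -> 48 kHz, anti-alias filtering included
    back = resample_poly(down, 2, 1, axis=0)   # back up to 96 kHz; the top octave is gone
    sf.write(path.replace(".wav", "_bandlimited.wav"), back, rate)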

Post

IIRs wrote:
Urs wrote:The problem is that the added harmonics in a signal chain add to each other. That is, a nonlinear process will also add harmonics to the ones already added by the previous process. So each time you double the samplerate, all you get is room for one more plugin in the signal chain before you end up with the same level of artifacts.
I have seen you post this kind of thing before, and I find it very interesting. It got me thinking, in fact, that tracking at 48 kHz then up-sampling to 96 kHz for mix-down might even be better than tracking at 96 kHz in the first place... would you agree with that? Assuming a high-quality SRC of course, e.g. the iZotope 64-bit SRC.

Or, what if I tracked at 96 kHz, then set up a batch process that re-sampled all those files to 48 kHz and back up again to filter out the top octave?
Good question.

I guess one should try that: record a sine at 15 kHz with the components you have, run it through the whole production chain, and look at the spectrum in both cases.
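
A minimal sketch of such a comparison, assuming Python with numpy, soundfile and matplotlib (the two filenames stand for hypothetical captures of the 15 kHz sine after each chain):

Code: Select all
import numpy as np
import soundfile as sf
import matplotlib.pyplot as plt

for name in ("chain_96k.wav", "chain_48k_roundtrip.wav"):
    x, fs = sf.read(name)
    if x.ndim > 1:
        x = x[:, 0]                 # look at the left channel only
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) + 1e-12
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    plt.plot(freqs, 20 * np.log10(spec / spec.max()), label=name)

plt.xlabel("Hz")
plt.ylabel("dB")
plt.legend()
plt.show()

Anything that shows up away from 15 kHz and its harmonics is distortion or aliasing from the chain.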

One problem we had was that some audio interfaces alias by themselves. We lost a week to corrupted measurements before we bought a better interface.

#---

I deal - as you know - mostly with synths and multieffects. I always thought it was common to downsample/upsample between stages within the architecture, or at least to use some sort of steep lowpass filter in an oversampled scenario. But when we recently analysed some softsynths on the market, quick tests revealed the opposite: in my observation, it's rather common not to filter in between.

One still gets a little extra attenuation with each factor of oversampling, but d'oh.

On the other hand, one should never oversample certain EQs that use 32-bit floating point (such as my own Filterscape and many, many plugs out there). These suffer from rounding errors at very low frequencies, which get worse when oversampled.
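
A quick way to see the trend (a sketch, assuming Python with numpy/scipy; it only looks at coefficient rounding, not at the per-sample state rounding inside the recursion, and the 30 Hz resonance with Q = 1 is an arbitrary choice):

Code: Select all
import numpy as np
from scipy.signal import iirpeak, freqz

for fs in (48000, 192000):                     # native rate vs 4 x oversampled
    b, a = iirpeak(30.0, 1.0, fs=fs)           # second-order peak at 30 Hz
    w, h64 = freqz(b, a, worN=2**16, fs=fs)
    _, h32 = freqz(b.astype(np.float32), a.astype(np.float32), worN=2**16, fs=fs)
    err = np.max(np.abs(20 * np.log10(np.abs(h32) + 1e-12)
                        - 20 * np.log10(np.abs(h64) + 1e-12)))
    print(f"fs = {fs}: worst response error from float32 coefficients = {err:.2f} dB")

The higher the rate, the closer the poles sit to z = 1 for a given low frequency, and the more damage the rounding does.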

;) Urs

Post

khanyz wrote:How much headroom do you allow within each process for it to behave as intended (dB/bit depth and sample rate)? How far above 20 kHz would you place the filter so that it doesn't encroach significantly on the intended range of use?

How much difference is there between the theoretical and practical values?
Hmmm... dunno. I use polyphase filters for up/downsampling. These work at Nyquist/2 (which is the new Nyquist after downsampling).
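
To illustrate, a toy decimate-by-2 in that style (just a sketch, assuming Python with numpy/scipy; the tap count is arbitrary):

Code: Select all
import numpy as np
from scipy.signal import firwin

def polyphase_decimate2(x, ntaps=63):
    h = firwin(ntaps, 0.5)                 # anti-alias lowpass at half the
    e0, e1 = h[0::2], h[1::2]              # current Nyquist, the new Nyquist
    even = np.convolve(x[0::2], e0)        # each polyphase branch runs at the
    odd = np.convolve(x[1::2], e1)         # output rate: half the multiplies
    y = even.copy()
    y[1:1 + len(odd)] += odd[:len(y) - 1]  # the odd branch lags one output sample
    return y

fs = 96000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 10000 * t) + np.sin(2 * np.pi * 30000 * t)
y = polyphase_decimate2(x)  # the 30 kHz part sits above the new 24 kHz Nyquist and is filtered out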

Post

ok so which should I use? 44 or 96?
Presets for u-he Diva -> http://swanaudio.co.uk/

Post

Urs wrote: On the other hand, one should never oversample certain EQs that use 32-bit floating point (such as my own Filterscape and many, many plugs out there). These suffer from rounding errors at very low frequencies, which get worse when oversampled.
Hey, Urs,
Is this about the project sample rate, or about a plugin's built-in oversampling?
And what about 64-bit floating point plugins?

Post

analoguesamples909 wrote:ok so which should I use? 44 or 96?
48 :hihi:

Post

Urs wrote:Hmmm... dunno. I use polyphase filters for up/downsampling. These work at Nyquist/2 (which is the new Nyquist after downsampling).
EDIT: Sorry, edited to make sense of the Nyquists.

What I'm trying to get at, and want to check my thinking on, is this:

A process should filter its input to make sure that it is within its range of use, and filter its output to remove any artifacts of its use. If it doesn't do that itself, then it may have to be done for it.

This is fair enough. However, I don't get the need to keep upsampling and downsampling if you can just work at the higher rate but keep the frequency content in the acceptable range, even if that's Nyquist/2 between processes. Surely several upsampling/downsampling cycles will introduce artifacts, and the use of filters near (the new) Nyquist is also a compromise. Why not only worry about this when you have to, i.e. when converting to the destination format?

I don't see the point of using rates above the 88.2/96 kHz band, but I certainly do see the point of using those rates. Even if it's just so the usable content doesn't get anywhere near Nyquist while the downsample stays as simple (and accurate) as possible (i.e. an integer ratio).

Of course there are processing resource issues, and source samplerate conversions, which will determine the overall rate used. However, assuming that your system can handle it and all your source material can be converted accurately enough to the overall samplerate, is there any need for the constant up/down sampling?

Cheers,
Nigel
Last edited by khanyz on Sun Oct 23, 2011 1:26 pm, edited 1 time in total.
I miss MindPrint. My TRIO needs a big brother.

Post

According to this reported study, people can't hear any difference between 16/44.1 and 24/96 anyway:

http://mixonline.com/recording/mixing/a ... _sampling/

It is a double-blind study.

Does anyone know of any follow-ups to this study that prove the opposite? Until there are, people who hear "great differences" will surely have a problem explaining the results.

Cheers

Post

IncarnateX wrote:According to this reported study, people can't hear any difference between 16/44.1 and 24/96 anyway:

http://mixonline.com/recording/mixing/a ... _sampling/

It is a double-blind study.

Does anyone know of any follow-ups to this study that prove the opposite? Until there are, people who hear "great differences" will surely have a problem explaining the results.

Cheers
This is interesting, but not what we are talking about here at all.

This thread is about the best way to minimize aliasing artifacts from non-linear plugs, while the study above is only testing the final delivery format.

Post

I've seen this play out to very caustic, argumentative results on other forums (but I'm assuming it's happened here before as well).
I'd say every 2 months, once the previous thread is forgotten
DOLPH WILL PWNZ0R J00r LAWZ!!!!

Post

It beats the repeats on TV :P
I miss MindPrint. My TRIO needs a big brother.

Post

I'd say both apply, Dan.

The OP was seeking an ultimate answer to "which samplerate to run at" for both recording and post production. And here I'd like to back up both Ethan Winer and David Moran.

If we were talking "purely ITB" (no recording, i.e. A/D conversion; no playback, i.e. D/A conversion), then we could start dissecting things to bits. But we still need to "listen" to the material, and here the D/A conversion is a big part of the equation.

Another thing that always made me wonder is certain mastering studios and their arrays of "oversamplers" when they work with a combination of analog/digital (outboard gear) and purely digital (plugin) equipment. Is upsampling or "oversampling" beyond 384 kHz from a PC/Mac with a good AD/DA really necessary if the final material is limited to a certain dynamic range anyway?

Does recording at 96 kHz really make a difference in the long run, other than giving more samples per waveform cycle?

Isn't the bit depth more important during both A/D and D/A conversion?

Again, do we really (and by that I really mean it) produce material that is that audiophile? Or can you hear bits flipping while listening to heavy metal riffs?
[ Mix Challenge ] | [ Studio Page / Twitter ] | [ KVRmarks (see: metering tools) ]

Post

the only good sampling rate is 528 Hz

http://www.youtube.com/watch?v=hqRF-k1SFDQ

I sell the whole set for only $999. CALL NOW.
& don't bother trying to sample them, today's soundcard samplerates aren't high enough to preserve the power of DNA integrity, love & miracles.
DOLPH WILL PWNZ0R J00r LAWZ!!!!

Post

I would not claim an audible advantage for higher sample rates as such.

As computers get more powerful, their power is increasingly wasted if you work at 44.1/48 kHz exclusively, while saturation effects and filters usually do benefit from working at higher sample rates.

I'm just suggesting that host audio applications should implement a differentiation between internal real-time, internal bounce, and external sample rates, and perform automatic resampling when necessary.

This isn't really a problem of audio interfaces - almost every interface supports 96 kHz and even 192 kHz. Under the hood they may even work at 384 kHz and perform digital downsampling.

Post

khanyz wrote:What I'm trying to get at, and want to check my thinking on, is this:

A process should filter its input to make sure that it is within its range of use, and filter its output to remove any artifacts of its use. If it doesn't do that itself, then it may have to be done for it.
Correct!
This is fair enough. However, I don't get the need to keep upsampling and downsampling if you can just work at the higher rate but keep the frequency content in the acceptable range, even if that's Nyquist/2 between processes. Surely several upsampling/downsampling cycles will introduce artifacts, and the use of filters near (the new) Nyquist is also a compromise. Why not only worry about this when you have to, i.e. when converting to the destination format?
I'll try an example (mind you, I'll use a shortcut to avoid bringing intermodulation distortion into the game):

Think of a signal chain with two identical vintage equalisers in a flat setting. Because they are vintage and blah, they add a 3rd harmonic (possibly among other things), an octave and a fifth above the original frequency.

Now we have a 96 kHz sampling rate.

Our signal may have a loud 10 kHz component (a self-oscillating synth filter?).

After the first equaliser, that 10 kHz signal has got a harmonic overtone added at 3 x 10 = 30 kHz, maybe 20 dB lower. Fine. We have 48 kHz of bandwidth; the 30 kHz are no problem.

But... we have a second equaliser. That one adds another 3rd harmonic. So our 30 kHz harmonic gets its own harmonic added, at 3 x 30 = 90 kHz, at an audible 40 dB below the original 10 kHz sound.

Now, 90 kHz doesn't fit into 48 kHz. It reflects off the Nyquist boundary and folds back down to 96 - 90 = 6 kHz.

There you go. By not bandlimiting things in between two processes, our 10 kHz signal has got a non-harmonic aliased component at 6 kHz, from just two stages of the most common type of harmonic distortion in series.

#---

Now, had each equaliser filtered out anything above 24 kHz, none of the aliasing in this example would have occurred.

Another thing this goes to show: oversampling to twice the sample rate hardly helps with the most common harmonic distortion. You have to go to 4 x oversampling, or a 192 kHz sampling rate, to wipe out the most ordinary aliasing.

(That's because most non-linear processes apply symmetric waveshapers, which emphasise odd harmonics. "Tube-like" distortion that creates even harmonics such as the 2nd is rather rare.)
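
If you want to check the numbers, here is a quick sketch, assuming Python with numpy/scipy. The cubic s + 0.3s^3 stands in for a generic odd waveshaper (unlike my shortcut above it also produces intermodulation products, and a cubic's added harmonics scale with the component's level, so the alias lands well below the idealized 40 dB figure; but it shows up at 6 kHz all the same):

Code: Select all
import numpy as np
from scipy.signal import firwin, lfilter

fs = 96000
t = np.arange(fs) / fs                        # one second of signal
x = np.sin(2 * np.pi * 10000 * t)             # the loud 10 kHz component

def shape(s):
    return s + 0.3 * s**3                     # adds a 3rd harmonic (~22 dB down at this drive)

lp24k = firwin(255, 24000, fs=fs)             # inter-stage lowpass at 24 kHz

naive = shape(shape(x))                       # two stages, nothing in between
safe = shape(lfilter(lp24k, 1.0, shape(x)))   # band-limited between the stages

for name, y in (("no filter", naive), ("filtered", safe)):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y))))
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    bin6k = np.argmin(np.abs(freqs - 6000))   # where the 90 kHz harmonic folds to
    print(f"{name}: {20 * np.log10(spec[bin6k] / spec.max() + 1e-12):.0f} dB at 6 kHz")

The unfiltered chain leaves a clear non-harmonic component at 6 kHz; with the lowpass between the two stages it drops to the leakage floor of the measurement.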

Cheers,

;) Urs
