Is realtime oversampling a dirty/destructive process?

VST, AU, AAX, CLAP, etc. Plugin Virtual Effects Discussion

Post

Fred_Abstract wrote:but it's the same in synths, some synths can oversample the oscillator etc. a crazy amount of times... in the end it's still not what makes an oscillator sound more analog, or maybe only slightly.
Of course oversampling would not turn a bad algo into something that sounds good. Synths are hard to debate, as the tones they produce are not natural, so aliasing might work as part of the timbre (although I'm sure dramatic AM-radio-tuner-style aliasing would still be detrimental). For example, there are lots of inharmonic oscillations comprising the sound of a piano or bells, and yet they are still accepted as musical instruments.

Post

meloco_go wrote: for example there are lots of inharmonic oscillations comprising the sound of a piano or bells, and yet they are still accepted as musical instruments.
agreed...
and the amount of these inharmonic oscillations is often what makes the difference between an average and an exceptional musical instrument...

Post

meloco_go wrote:A signal 1dB above or 1dB below the masking threshold might make the difference between hearing and not hearing.
What are you talking about?

I was talking about signals 100dB BELOW another signal.
I was talking about the maximum build-up, which would bring it to 70dB below the music (while 90dB would be more realistic, because the music will build up as well).

The "masking treshhold" is definetly above those levels.
Watch the video or do some blind tests.
meloco_go wrote:The point is not about being "audible noise", it is about noise that alters the response. Would you argue that dither is not needed?
In most cases where it's done, it's indeed not needed. But since it's technically the better solution, there's no reason not to do it.

I'm not sure what you mean by "alters response".
Yes, dither not only prevents truncation noise but also preserves very quiet signal parts.
But neither of those things matters if they are 70dB below everything else.
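For reference, the mechanism is easy to check yourself. A rough numpy sketch (the -90dBFS level and the 1kHz frequency are just examples I picked):

import numpy as np

fs = 44100
t = np.arange(fs) / fs
x = 10 ** (-90 / 20) * np.sin(2 * np.pi * 1000 * t)    # 1 kHz sine at -90 dBFS, barely over 1 LSB

q = 1.0 / 2 ** 15                                       # 16-bit quantisation step

truncated = np.floor(x / q) * q                         # hard truncation, no dither
tpdf = (np.random.rand(fs) - np.random.rand(fs)) * q    # TPDF dither, +/- 1 LSB
dithered = np.round((x + tpdf) / q) * q                 # dithered requantisation

def rms_db(y):
    return 20 * np.log10(np.sqrt(np.mean(y ** 2)) + 1e-12)

# Truncation turns the quiet sine into a crude stepped wave (pure distortion);
# dithering keeps the sine intact and just adds a flat noise floor.
print("input     :", round(rms_db(x), 1), "dBFS RMS")
print("truncated :", round(rms_db(truncated), 1), "dBFS RMS")
print("dithered  :", round(rms_db(dithered), 1), "dBFS RMS")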
meloco_go wrote:Also, those -70dB you mention are a plain lie, because you are dealing with a spectrum average. When you talk about instantaneous error it might be much, much higher.
Do a simple test: get a transient-rich mix and compare how something like C1 works at 44.1 and 96. You can upsample 44.1 to 96 and do a null test. It's much higher than -70dB.
1. You should stop mixing things up.
We were talking about truncation noise and dithering, which have nothing to do with the sample rate.
2. You should stop talking about things you don't understand.
3. You should stop insulting me by calling me a liar.
4. Your statement makes no sense.

Post

Nokenoku wrote:I was talking about signals 100dB BELOW another signal.
Mix a 30Hz sine wave and a 3kHz one. Let the 3kHz be 70dB below the 30Hz (why you keep switching between 70dB and 100dB I don't know). Would you be able to hear it?
And before you say it is not relevant: it is exactly relevant, because I once encountered a resampling problem in Windows that manifested itself this way. I heard strange bird-chirp sounds during a soft bass intro.
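If anyone wants to reproduce it, something like this generates the test file (a sketch; the -10dBFS reference level, the duration and the file name are arbitrary):

import numpy as np
from scipy.io import wavfile

fs = 44100
t = np.arange(10 * fs) / fs                              # 10 seconds

bass = 10 ** (-10 / 20) * np.sin(2 * np.pi * 30 * t)     # 30 Hz at -10 dBFS
tone = 10 ** (-80 / 20) * np.sin(2 * np.pi * 3000 * t)   # 3 kHz, 70 dB below the bass

wavfile.write("masking_test.wav", fs, (bass + tone).astype(np.float32))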
The "masking treshhold" is definetly above those levels.
Depends on many factors.

I'm not sure what you mean by "alters response".
Yes, dither not only prevents truncation noise but also preserves very quiet signal parts.
It not only preserves quiet parts, it helps spatial cues come through, and may preserve the tone of the instruments in some cases.

1. You should stop mixing things up.
We were talking about truncation noise and dithering, which have nothing to do with the sample rate.
The thread is about oversampling and its effects. It is you who mixes things up. If you bring up truncation and dithering you should point that out clearly.
2. You should stop talking about things, you don't understand.
You should stop assuming that everybody who disagrees with you doesn't understand what you are saying.
3. You should stop insulting me by calling me a liar.
I didn't call _you_ a liar. I called a certain statement a lie, which it is in relation to the subject at hand.
4. Your statement makes no sense.
To you.
Not a problem.

Post

Nokenoku wrote:
bmanic wrote:
Nokenoku wrote:
bmanic wrote:I can't quite put my finger on it, but it's a similar constant annoyance to when a client delivers a song for mastering and he/she hasn't used any dithering where it's required but has instead truncated everything. It's a nasty harshness, usually manifesting itself from about 2kHz onward. Especially noticeable once you start hitting the final mastering limiter/ADDA loop hard.
Have you ever verified that impression in a blind test?
Many, many times. I do ABX tests all the time. A shootout between 3 tracks of The Glue set to 64x oversampling versus the same 3 tracks set to no oversampling is very easy to pick out.
The question was on dithering, not on oversampling and specific plugins.
Ah sorry, I misunderstood. To answer your original question: hell no! I can't hear any real differences between the various dithers. I do, however, quite easily hear the difference when several tracks have been truncated instead of dithered... but this can only happen on purpose. I don't know of any current DAW where you don't get some kind of dithering when sending stuff from a 64-bit float system out to the analogue chain and back again.

I did the truncation experiment in WaveLab, on purpose.

Can I hear the difference when something is truncated from 24 bit to 16 bit on a modern pop song at loud levels? It depends... if the track is sparse, like a minimalist electro beat or hiphop beat, then yes. You can hear the "grain" in between the transients. Can I hear the difference on full-out death metal / dense pop production? Nope. No chance.

Personally I don't really understand the huge dither wars of yesteryear. I don't find POW-r dither or megabit max to be any better than typical TPDF dither on a typical modern production. The only benefits show up at fadeouts or perhaps on classical music / high-dynamic-range material.
Nokenoku wrote:
bmanic wrote:
Nokenoku wrote: 1. The loudness of unrelated signals won't add up indefinitely.
2. The loudness of the actual audio signals (100dB louder than your "nasty stuff") will add up as well. So the relative loudness of your errors stays about the same.
.. yes, that's what I thought at first too, but it seems that inharmonic stuff doesn't scale together with harmonic stuff in the same way. Think about an out-of-tune choir. If 3 of the 100 people singing are badly out of tune, even if they sing quietly compared to the others, something will sound off and the end product feels "messy". It will lack tightness and coherence.
I just made the test for you:

The loudness of quantization noise (which sounds similar to plain white noise) will add about 30dB in level if you sum 50 tracks.

So in your example it will rise from -100dBFS to -70dBFS, which would still be inaudible, even if your actual audio signal did not add up at all.
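For anyone who wants to repeat a test like this, here is a rough sketch (the noise is synthetic, uncorrelated TPDF noise at -100dBFS per track, which is an idealisation; error that is correlated between tracks can build up faster):

import numpy as np

n_tracks, n = 50, 44100 * 5                              # 50 tracks, 5 seconds at 44.1 kHz

def noise_track(level_db):
    tri = np.random.rand(n) - np.random.rand(n)          # triangular (TPDF-like) noise
    return tri / np.sqrt(np.mean(tri ** 2)) * 10 ** (level_db / 20)

tracks = [noise_track(-100.0) for _ in range(n_tracks)]
mix = np.sum(tracks, axis=0)

rms_db = lambda y: 20 * np.log10(np.sqrt(np.mean(y ** 2)))
print("one track :", round(rms_db(tracks[0]), 1), "dBFS RMS")
print("sum of 50 :", round(rms_db(mix), 1), "dBFS RMS")
# Uncorrelated noise adds in power (about 10*log10(50) = 17 dB of build-up); fully
# correlated components add in amplitude (about 20*log10(50) = 34 dB).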

This whole "But digital artefacts will add up, and with xx tracks will become very audible." stuff is a very old and very wrong myth.
This always pops up in threads about 16bit vs. 24bit vs. 32bit vs. 64bit.
In this case I was specifically talking about distortion, not quantization noise. Aliasing and intermodulation distortion are NOT noise. They are very clear and distinct ringing frequencies most of the time... well, at least on steady-state tones. On "noise" transients they sound more like a noise burst, but somehow related in a bad way to the basic pitch. That is, they track the harmonics but they are completely "off" and outside the fundamental tones.

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot

Post

if the track is sparse, like a minimalist electro beat or hiphop beat, then yes. You can hear the "grain" in between the transients.
can we hear?
DOLPH WILL PWNZ0R J00r LAWZ!!!!

Post

Nokenoku wrote: Watch this:
http://www.youtube.com/watch?v=BYTlN6wjcvQ
This video has some merit, but only some. Ethan has a very clear agenda in everything he says and does. I wouldn't take his word as gospel.

In the end it comes down to how much weight a person decides to place on signal purity and coherence. Does it change the actual music? Does it make the music better? Of course not. Is there anything to lose by striving for better sound quality? Rarely.

Anyway, I do agree with most of what the video has to offer, but in this thread I thought we were talking about oversampling and its pitfalls and merits.

When it comes to dynamics processors I still stand firmly by the notion that we need oversampling in some form. When rendering a project I'd love to set all these plugins to a seemingly ridiculous oversampling amount and leave the computer to render for an hour. In my experience it noticeably improves the final product.

The main issue for me, like I've said before, is the "ear fatigue" and annoyance factor that happens when many signals with even small amounts of aliasing or any kind of inharmonic distortion get mixed together. They do not sound like noise. Random noise (white, pink, brown, tape hiss, dither applied too many times, etc.) is fine, and I very rarely feel that noise is bothering a production, but I do get annoyed by most kinds of distortion that aren't somehow tied to the harmonic structure of the instruments.
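To make that concrete, here is a rough sketch of the kind of thing I mean: hard-clip a high sine at the base rate and again with oversampling, then measure how much energy lands away from the true harmonics (the 5kHz tone, the clip threshold and the 8x factor are arbitrary choices):

import numpy as np
from scipy.signal import resample_poly

fs, f0, ratio = 44100, 5000, 8
t = np.arange(fs) / fs
x = 0.9 * np.sin(2 * np.pi * f0 * t)

clip = lambda y: np.clip(y, -0.5, 0.5)                   # stand-in for any hard non-linearity

naive = clip(x)                                          # harmonics above Nyquist fold back inharmonically
oversampled = resample_poly(clip(resample_poly(x, ratio, 1)), 1, ratio)

def inharmonic_db(y):
    spec = np.abs(np.fft.rfft(y * np.hanning(len(y)))) ** 2
    freqs = np.fft.rfftfreq(len(y), 1 / fs)
    harmonic = np.zeros(len(freqs), dtype=bool)
    for k in range(1, int(fs / 2 / f0) + 1):
        harmonic |= np.abs(freqs - k * f0) < 50          # +/- 50 Hz around each true harmonic
    return 10 * np.log10(spec[~harmonic].sum() / spec.sum())

# Energy at frequencies that are NOT harmonics of 5 kHz, relative to the total:
print("naive clip       :", round(inharmonic_db(naive), 1), "dB")
print("oversampled clip :", round(inharmonic_db(oversampled), 1), "dB")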

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot

Post

tony tony chopper wrote:
if the track is sparse, like a minimalist electro beat or hiphop beat, then yes. You can hear the "grain" in between the transients.
can we hear?
I don't know. Create a 30 track production and truncate the files? :)
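Something along these lines would do the truncation step on a folder of stems (a sketch; the paths are placeholders and it assumes the soundfile package is installed):

import glob
import numpy as np
import soundfile as sf

q = 1.0 / 2 ** 15                                        # 16-bit step for float files in -1..1

for path in glob.glob("stems/*.wav"):
    x, fs = sf.read(path)                                # floats in -1..1
    truncated = np.floor(x / q) * q                      # hard truncation to 16 bit
    tpdf = (np.random.rand(*x.shape) - np.random.rand(*x.shape)) * q
    dithered = np.round((x + tpdf) / q) * q              # TPDF-dithered 16-bit version
    sf.write(path.replace(".wav", "_trunc.wav"), truncated.astype(np.float32), fs)
    sf.write(path.replace(".wav", "_dith.wav"), dithered.astype(np.float32), fs)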

Then use one of the free ABX test programs and do a 100 take run with fresh ears.

I'll see if I can get permission to post some of the mixes I have in the studio. Actually, it might be quite easy to create a quick mockup in the studio: 7-12 tracks of a few drums, some bass stuff, a pad at low levels in the background, some leads?

If I have the time I'll see if I can create one. Would be good for testing other things as well.

Perhaps the more important question is: Are the artifacts annoying enough? And what happens when the track gets mastered, especially if it's a dark track that needs some high-shelf boosts in the end? I don't know the answer to this either. Can it be heard? Yes, I think so.

I do have some of the old test tracks at home, but they are stupid sine-wave-based things, which is not fair of course. I'm not aware of any modern pop production that consists only of pure tones. :)

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot

Post

Are the artifacts annoying enough?
I'd say they won't be audible at all
And what happens when the track gets mastered
but it's not a 16-bit track that's going to be mastered anyway
(I also don't understand why you're talking about mixing 16-bit tracks; even if only because you may want to saturate them hard, you'd better keep the original precision. Is it only a problem of HD space? The price per GB is nearly zero these days)
DOLPH WILL PWNZ0R J00r LAWZ!!!!

Post

tony tony chopper wrote:
Are the artifacts annoying enough?
I'd say they won't be audible at all
And what happens when the track gets mastered
but it's not a 16-bit track that's going to be mastered anyway
(I also don't understand why you're talking about mixing 16-bit tracks; even if only because you may want to saturate them hard, you'd better keep the original precision. Is it only a problem of HD space? The price per GB is nearly zero these days)
This thread is getting confusing. The talk about dithering started only from my own hypothetical example of truncation noise being a similar "annoyance" to aliasing and intermodulation distortion.

Nobody mixes truncated tracks! Of course not. I don't even think it's possible to do by accident, because most DAWs work at floating-point resolution and most likely dither down to the appropriate level before sending the signal to the DA converter.

My main interest is the benefit of "ridiculously" high levels of oversampling versus low levels of oversampling when it comes to dynamics processors.

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot

Post

The talk about dithering started only from my own hypothetical example of truncation noise being a similar "annoyance" to aliasing and intermodulation distortion.
16-bit truncation noise is really, really nowhere near the same level as typical aliasing (like in a synth or a strong saturator). It's inaudible vs. very audible.
DOLPH WILL PWNZ0R J00r LAWZ!!!!

Post

meloco_go wrote:
Nokenoku wrote:I was talking about signals 100dB BELOW another signal.
Mix a 30Hz sine wave and a 3kHz one. Let the 3kHz be 70dB below the 30Hz (why you keep switching between 70dB and 100dB I don't know). Would you be able to hear it?
Hey, no no ... let's make an even more "realistic" example ...
Let the first sine wave be 0Hz. :roll:

We were talking about musical signals and noise.
Not two sine waves (with one being below the cutoff frequency of most listening systems).
meloco_go wrote:The thread is about oversampling and its effects. It is you who mixes things up. If you bring up truncation and dithering you should point that out clearly.
I did not bring anything up. I answered someone else's claim.
Still, it was absolutely clear what we were talking about the whole time.

That's why you should actually READ what other people are talking about before getting into the discussion.
bmanic wrote:In this case I was specifically talking about distortion, not quantization noise. Aliasing and intermodulation distortion are NOT noise. They are very clear and distinct ringing frequencies most of the time... well, at least on steady-state tones. On "noise" transients they sound more like a noise burst, but somehow related in a bad way to the basic pitch. That is, they track the harmonics but they are completely "off" and outside the fundamental tones.
Ok.
Still I don't think you can ever hear such artefacts buried 70dB down in a mix. Especially not with multiple tracks.
This has yet to be proven.

Post

tony tony chopper wrote:
The talk about dithering started only from my own hypothetical example of truncation noise being a similar "annoyance" to aliasing and intermodulation distortion.
16-bit truncation noise is really, really nowhere near the same level as typical aliasing (like in a synth or a strong saturator). It's inaudible vs. very audible.
You are probably right. I do remember some projects that were rendered in good old Logic Audio 4, back in the day, where final output dithering had been forgotten when bouncing tracks to another DAW and the end result sounded harsh. After enabling the dithering and doing the work all over again we all agreed that it sounded less annoying and smoother.

However, I am fully aware that this could have been placebo. Especially since we were a whole bunch of people there comparing the files. Group placebo is even more common. :lol:

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot

Post

Nokenoku wrote:Hey, no no ... let's make an even more "realistic" example ...
Let the first sine wave be 0Hz. :roll:

We were talking about musical signals and noise.
Not two sine waves (with one being below the cutoff frequency of most listening systems).
I said that I encountered the same situation with a softly played bass. Anyway, there's pretty much no masking if the signals come from widely separated ERBs, so you can pick 100Hz if you wish.

I did not bring anything up. I answered someone else's claim.
Still, it was absolutely clear what we were talking about the whole time.
Apparently not; just read above.
That's why you should actually READ what other people are talking about before getting into the discussion.
It's you who started a debate on something which is not exactly on topic.
Still I don't think you can ever hear such artefacts buried 70dB down in a mix. Especially not with multiple tracks.
This has yet to be proven.
Depends on the music and the type of artifact.

Post

Nokenoku wrote: Still I don't think you can ever hear such artefacts buried 70dB down in a mix. Especially not with multiple tracks.
This has yet to be proven.
Well, if it's 70dB down at mixdown and I bring it to mastering, the peaks easily get squashed another 10dB, meaning the aliasing is now at -60dBFS. I'd say that would be a pretty nasty scenario.

Perhaps it will not be possible to hear on laptop speakers, but any decent-quality hi-fi system should be enough to reveal a clear difference between such a badly aliasing mix and one where the noises have been kept below -100dBFS.

It does however depend a lot on the type of music. If the music is already severely distorted (industrial metal, death metal or whatever) then of course it will be harder to "hear" but the ear fatigue and annoyance is still there, perhaps.

If you have a very smooth mix with soft music, then you can be absolutely certain that any kind of moderate to severe inharmonic distortion will be heard quite easily at -70dBFS. My guess is that the more in-tune/harmonically coherent the song is, the easier it is to get annoyed by the artifacts... this is pure speculation on my part though. The key here is the description of the type of distortion we are discussing: inharmonic. Harmonic distortion of any kind should basically "blend in" and thus not be heard nearly as clearly.

Cheers!
bManic
"Wisdom is wisdom, regardless of the idiot who said it." -an idiot
