Using normalise.


Post

If you're worried about maintaining the most resolution, why not (instead of normalizing your stems on export) just export 32 bit files? That way you have the benefit of keeping your relative balances intact.
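A quick way to see the fixed-point vs. float trade-off in numbers (a minimal numpy sketch; the random "stem" and the -30 dB gain are made-up stand-ins, not anyone's actual workflow):

[code]
import numpy as np

rng = np.random.default_rng(0)
stem = rng.uniform(-1.0, 1.0, 48000)      # stand-in for an exported stem
gain = 10 ** (-30 / 20)                    # pull it down 30 dB in the mix

# 16-bit fixed point: after the gain, values snap to 1/32767 steps
q16 = np.round(stem * gain * 32767) / 32767
# 32-bit float: the scaled values are stored almost exactly
q32 = (stem * gain).astype(np.float32).astype(np.float64)

print(f"max error, 16-bit:       {np.max(np.abs(q16 - stem * gain)):.2e}")
print(f"max error, 32-bit float: {np.max(np.abs(q32 - stem * gain)):.2e}")
[/code]

The fixed-point error stays pinned to the quantisation step no matter how quiet the signal gets, while the float error scales down with it.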

TBH I'm always skeptical about exporting 32 bit, as even the difference between 24 and 16 bit is hard enough to hear, let alone 32 bit.

A few people have mentioned valid uses of normalization, but I don't think it has much place in production, because each sound in a mix needs a different level, so choosing some arbitrary starting level seems like an unnecessary extra step.

Post

nix808 wrote: Fri Feb 23, 2024 6:19 am I conjecture that it's not destructive,
it does not take any fidelity away
(...what information is lost?)
What is lost is the original information. A "destructive edit" is any edit that permanently changes the original source audio, as opposed to real-time processing. If you do something to the audio and then overwrite the original file with the altered file, that is a destructive edit, regardless of the subjective quality of the change.

It's a good idea not to do that.
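In code terms, the distinction is just whether you overwrite the source. A minimal sketch, assuming the third-party soundfile package and a hypothetical file name:

[code]
import numpy as np
import soundfile as sf

data, sr = sf.read("take1.wav")            # hypothetical source file
peak = np.max(np.abs(data))
normalized = data / peak if peak > 0 else data

# Destructive: overwriting the source means the original is gone.
# sf.write("take1.wav", normalized, sr)    # <- the thing to avoid

# Non-destructive: keep the source, write the edit to a new file.
sf.write("take1_normalized.wav", normalized, sr)
[/code]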
THIS MUSIC HAS BEEN MIXED TO BE PLAYED LOUD SO TURN IT UP

Post

jamcat wrote: Fri Feb 23, 2024 12:13 am You have to watch out for ISPs (inter-sample peaks) though, as these are not detected by standard normalising algorithms. This is where the actual samples are still technically below 0dB, but there is actually a hidden peak between them that will clip the reconstruction filter. This is because samples are actually "connected" by curves rather than straight lines. Because everything is sine waves.
You do understand that's not a real thing, right? Your audio editor just draws those curves, they don't actually exist in the audio file. They are just a convenient visual representation that's easier to take in than a bunch of dots.
nix808 wrote: Fri Feb 23, 2024 5:16 am Normalizing allows high fidelity by maintaining bit-depth
Not really. You can't create bit depth, so if your signal is very quiet, normalising it just scales the quantisation error up along with the signal; the detail that was lost at the original level doesn't come back.
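You can see this with a quick numpy sketch (a made-up example: a roughly -60 dBFS sine quantised to 16 bits, then peak-normalised):

[code]
import numpy as np

sr = 48000
t = np.arange(sr) / sr
quiet = 0.001 * np.sin(2 * np.pi * 440 * t)     # roughly -60 dBFS
stored = np.round(quiet * 32767) / 32767        # 16-bit quantisation

# Normalising scales the signal AND its quantisation error by the
# same gain; nothing below the original step size is recovered.
gain = 1.0 / np.max(np.abs(stored))
error = stored * gain - quiet * gain
print(f"gain applied: {20 * np.log10(gain):.1f} dB")
print(f"error floor:  {20 * np.log10(np.max(np.abs(error))):.1f} dBFS")
[/code]

The normalised file peaks at 0 dBFS, but the quantisation error rides up with it, ending up only about 36 dB below full scale in this example.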
nix808 wrote: ...and of course, the last thing that happens to a master is a normalize
Second last. The last thing will be to down-convert your working bit depth to your final output bit depth.
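For that last step, the usual move is to add dither when you truncate. A rough sketch of the idea, assuming numpy and TPDF dither (a common choice; the post doesn't name one):

[code]
import numpy as np

def to_int16_tpdf(x, rng=None):
    # Down-convert float samples in [-1.0, 1.0] to 16-bit integers,
    # adding +/-1 LSB triangular (TPDF) dither so the quantisation
    # error becomes noise rather than correlated distortion.
    rng = rng or np.random.default_rng()
    lsb = 1.0 / 32767
    tpdf = (rng.random(x.shape) - rng.random(x.shape)) * lsb
    y = np.round((x + tpdf) * 32767)
    return np.clip(y, -32768, 32767).astype(np.int16)
[/code]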

Honestly I don't know why anyone would ask a question here. This is one of the most basic, simple questions and almost nobody has given a useful answer. The display of ignorance around such a basic subject is breath-taking and more than a little bit scary.

The only useful answer is that you should do it if you are going to reduce to a lower bit depth, as it will preserve more of the original signal. So if you have a 24 or 32 bit file you are going to convert to 16 bit, it's always a good idea to normalise first. Beyond that, it probably won't matter one way or the other, except in very specific, extreme circumstances. I tend to do it a lot, mostly because it makes it easier to see the waveform, which saves me having to preview the audio as often. It also makes it easy to swap one sample for another without having to worry about one being louder than the other. It saves me a bit of time when I am working, but it doesn't affect the quality of my rendered files at all.
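The bit-depth argument is easy to measure. A minimal numpy sketch (the -12 dBFS noise "mix" is a made-up stand-in): quantise to 16 bits directly vs. after normalising, and compare the error:

[code]
import numpy as np

rng = np.random.default_rng(1)
mix = 0.25 * rng.uniform(-1.0, 1.0, 48000)   # peaks around -12 dBFS

def to16(x):
    return np.round(x * 32767) / 32767       # 16-bit quantisation

def snr_db(ref, quantised):
    noise = np.std(quantised - ref)
    return 20 * np.log10(np.max(np.abs(ref)) / noise)

direct = to16(mix)                            # wastes the top ~2 bits
gain = 1.0 / np.max(np.abs(mix))
normalised = to16(mix * gain) / gain          # full range, scaled back

print(f"SNR, direct:           {snr_db(mix, direct):.1f} dB")
print(f"SNR, normalised first: {snr_db(mix, normalised):.1f} dB")
[/code]

Normalising first buys roughly 12 dB here, i.e. the two bits the -12 dBFS peak was leaving unused.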
NOVAkILL : Asus RoG Flow Z13, Core i9, 16GB RAM, Win11 | EVO 16 | Studio One | bx_oberhausen, GR-8, JP6K, Union, Hexeract, Olga, TRK-01, SEM, BA-1, Thorn, Prestige, Spire, Legend-HZ, ANA-2, VG Iron 2 | Uno Pro, Rocket.

Post

BONES wrote: Fri Feb 23, 2024 8:33 am Honestly I don't know why anyone would ask a question here. This is one of the most basic, simple questions and almost nobody has given a useful answer. The display of ignorance around such a basic subject is breath-taking and more than a little bit scary.

The only useful answer is that you should do it if you are going to reduce to a lower bit depth, as it will preserve more of the original signal. So if you have a 24 or 32 bit file you are going to convert to 16 bit, it's always a good idea to normalise first.
Why would you bit-reduce stems you exported or files you imported? Makes no sense. That's why no-one mentioned it. Straw-man criticism.

Post

BONES wrote: Fri Feb 23, 2024 8:33 am
jamcat wrote: Fri Feb 23, 2024 12:13 am You have to watch out for ISPs (inter-sample peaks) though, as these are not detected by standard normalising algorithms. This is where the actual samples are still technically below 0dB, but there is actually a hidden peak between them that will clip the reconstruction filter. This is because samples are actually "connected" by curves rather than straight lines. Because everything is sine waves.
You do understand that's not a real thing, right? Your audio editor just draws those curves, they don't actually exist in the audio file. They are just a convenient visual representation that's easier to take in than a bunch of dots.
No, that’s not correct. Samples are taken at regular intervals, and there is a good chance that none of them will land exactly on a local maximum. Instead, the two closest samples to the peak may sit on the slopes to its left and right.

Do you really think the peak is just truncated? Because it is not. The reconstruction filter, also known as the “smoothing filter”, does just what the name implies: it creates a smooth, continuous analog signal. So when the sound is reproduced, it is not a series of discrete dots, or even stair steps, but a smooth, continuous signal that follows sine curves.

This is why different sample rates will sound identical so long as the frequencies being reproduced are below Nyquist. If the D/A conversion didn’t result in a smooth curve, there would be a difference in sound quality due to the larger ‘gaps’ at lower sample rates.
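This is also easy to demonstrate: a sketch assuming numpy and scipy, using a sine at fs/4 whose samples all land on the slopes at about 0.707 of the true peak:

[code]
import numpy as np
from scipy.signal import resample

fs = 48000
n = np.arange(1024)
# fs/4 sine with a 45-degree phase offset: every sample is +/-0.707
# of the continuous peak, so none of them sits on the crest.
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)
x = x / np.max(np.abs(x))        # "normalised" to 0 dBFS sample peak

# 8x oversampling approximates what the reconstruction filter outputs.
y = resample(x, len(x) * 8)
print(f"sample peak: {np.max(np.abs(x)):.3f}")   # 1.000
print(f"true peak:   {np.max(np.abs(y)):.3f}")   # ~1.414, i.e. +3 dB
[/code]

Every stored sample is exactly +/-1.0, yet the reconstructed waveform swings about 3 dB over full scale between them.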

And not all audio editors draw the curves. Most that I’ve seen just connect the dots and draw trapezoidal waveforms. Ones like Acon Digital Acoustica that show curves do so to represent what the reconstructed sound wave will be doing. It’s not done for convenience, and it actually takes more work to code.
Last edited by jamcat on Fri Feb 23, 2024 9:30 am, edited 1 time in total.
THIS MUSIC HAS BEEN MIXED TO BE PLAYED LOUD SO TURN IT UP

Post

BONES wrote: Fri Feb 23, 2024 8:33 am
jamcat wrote: Fri Feb 23, 2024 12:13 am You have to watch out for ISPs (inter-sample peaks) though, as these are not detected by standard normalising algorithms. This is where the actual samples are still technically below 0dB, but there is actually a hidden peak between them that will clip the reconstruction filter. This is because samples are actually "connected" by curves rather than straight lines. Because everything is sine waves.
You do understand that's not a real thing, right? Your audio editor just draws those curves, they don't actually exist in the audio file. They are just a convenient visual representation that's easier to take in than a bunch of dots.
He's not talking about the files, or the visual representation of the files in the DAW. He's talking about the resultant audio signal after digital to analog conversion. That's why he mentioned the reconstruction filter.

https://www.earlevel.com/main/2023/01/1 ... ple-peaks/
https://techblog.izotope.com/2015/08/24 ... detection/
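For checking files in practice, true-peak meters approximate the reconstruction by oversampling before taking the peak. A simplified sketch in that spirit (assuming numpy/scipy; BS.1770 specifies 4x oversampling plus a particular filter, which this glosses over):

[code]
import numpy as np
from scipy.signal import resample_poly

def true_peak_dbtp(x, oversample=4):
    # Oversample to approximate the reconstructed waveform, then
    # measure the peak of that rather than of the raw samples.
    y = resample_poly(x, oversample, 1)
    return 20 * np.log10(np.max(np.abs(y)))
[/code]

A file whose raw samples peak at exactly 0 dBFS can still read above 0 dBTP here, which is the whole point of measuring it this way.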
my other modular synth is a bugbrand
