44.1 kHz or 48 kHz?


Post

My DAW and external sound card are set to 44.1 kHz. I've used that setting for a while and it sounds good to me. I could set them to 96 kHz or even 192 kHz if I wanted to, though apparently that doesn't necessarily mean it would sound better.

Anyway, I read that it's recommended to record at 48 kHz instead of 44.1 kHz. Now, I don't mix and master myself, so does it matter once the tracks are already recorded, i.e. turned into sound files?

Post

Not a big deal, but if you can work entirely at 96 kHz, the audio will have more headroom as a safety measure, and you can get lower latency. So if your computer can handle it, why not?

Post

The benefits of 48 over 44.1 outweigh the modest performance hit on your CPU. This of course depends on what you're doing and what you're using. But generally, there's no reason not to go with 48.

If you want an extensive primer on the subject, look for Dan Worrall’s video on YouTube.

Post

roman.i wrote: Mon Apr 10, 2023 8:21 pm Not a big deal, but if you can work entirely at 96 kHz, the audio will have more headroom as a safety measure, and you can get lower latency. So if your computer can handle it, why not?
If I understand it right, you won't get more headroom from this; for that you need to increase your bit depth (e.g. from 16 to 24), not your sample rate.
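For reference, the usual rule of thumb is about 6 dB of dynamic range per bit for an ideal fixed-point quantizer; here's a quick Python sketch of the arithmetic, just as an illustration:

```python
# Dynamic range of an ideal fixed-point quantizer: ~6.02 dB per bit (+1.76 dB).
def dynamic_range_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(bits, "bit:", round(dynamic_range_db(bits), 1), "dB")
# 16 bit: 98.1 dB, 24 bit: 146.2 dB (32-bit float works differently: its
# floating exponent gives effectively unlimited headroom above 0 dBFS).
```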

Lower latency: maybe, maybe not so much. If you increase the sample rate, you get less latency, but if your sound card / computer can't keep up, you need to increase the buffer size again. So if you run 48000 with a 128-sample buffer and go to 96000 with 256, it's the same latency again. If your computer can handle 96000 at 128, why shouldn't it also handle 48000 at 64? The latter also takes less CPU, so your machine is less likely to be overloaded and cause dropouts.
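To put numbers on that, here's the arithmetic as a small Python sketch (these figures cover only the buffer itself, not converter or driver overhead):

```python
# Latency contributed by one audio buffer at a given sample rate.
def buffer_latency_ms(buffer_size: int, sample_rate: int) -> float:
    return 1000.0 * buffer_size / sample_rate

print(buffer_latency_ms(128, 48000))  # ~2.67 ms
print(buffer_latency_ms(256, 96000))  # ~2.67 ms -- same latency again
print(buffer_latency_ms(64, 48000))   # ~1.33 ms -- lower, for less CPU
```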
If your machine can handle it very easily anyway, a higher sample rate might still have slight advantages, but I'm not sure how much, really.

In the end, I'd base the decision on the sample rate of your delivery format. For video it's typically 48000, and for CD audio it's 44100. You can't hear the difference between them, but if you convert one to the other you might get some conversion issues.
Find my (music) related software projects here: github.com/Fannon

Post

Some audio interfaces' recordings sound better when run at a higher sample rate. I did some tests on my Clarett 2Pre, recording vocals at various sample rates, and 48 kHz sounded better than 44.1 kHz and similar to the higher sample rates, as if 48 kHz were the sweet spot.
I don't know about other interfaces (I use MOTU now, but never tested again), but it seems plausible that the reconstruction filter inside the interface sounds better at certain sample rates.
Regarding mixing, some plugins can sound better at a high sample rate, but I personally prefer to oversample critical plugins (built-in, or using Metaplugin).
I would be curious for more insight into sample rates and converters, because my experience doesn't come from strict scientific testing...

Post

Calagan wrote: Tue Apr 11, 2023 6:33 am Regarding mixing, some plugins can sound better at a high sample rate, but I personally prefer to oversample critical plugins (built-in, or using Metaplugin).
Normally, the oversampling logic of plugins that have to run in real time has to compromise on CPU usage, latency, or linear phase.

If you have a chain of oversampling effects, then you are running N non-ideal upsample and downsample steps. Not the end of the world, but not optimal either.
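If you want to see the accumulation for yourself, here's a rough Python/SciPy sketch: run a 20 kHz tone through N consecutive 2x up/down round trips and watch the level erode near Nyquist. The exact numbers depend entirely on the resampler (here SciPy's default Kaiser-windowed polyphase filter), so treat it as an illustration only:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 44100
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 20000 * t)  # tone near the top of the audio band

for n in range(1, 9):
    # one non-ideal oversampling round trip: 2x up, then back down
    x = resample_poly(resample_poly(x, 2, 1), 1, 2)
    mid = x[fs // 4 : -fs // 4]    # skip the filters' edge transients
    level_db = 20 * np.log10(np.sqrt(2 * np.mean(mid ** 2)))
    print(f"after {n} round trips: {level_db:+.2f} dB")
```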

Ideally you could run the whole FX chain oversampled. I know this is possible in Reaper; I don't know about other DAWs.

Post

I tried 44.1, 48 and 96 and couldn't hear the difference. Since SSD space costs money I went with 44.1, but also because I doubt that, in my amateur productions, this would be what breaks my projects.

Post

Fannon wrote: Tue Apr 11, 2023 5:47 am
If I understand it right, you won't get more headroom from this; for that you need to increase your bit depth (e.g. from 16 to 24), not your sample rate.

Lower latency: maybe, maybe not so much. If you increase the sample rate, you get less latency, but if your sound card / computer can't keep up, you need to increase the buffer size again. So if you run 48000 with a 128-sample buffer and go to 96000 with 256, it's the same latency again. If your computer can handle 96000 at 128, why shouldn't it also handle 48000 at 64? The latter also takes less CPU, so your machine is less likely to be overloaded and cause dropouts.
If your machine can handle it very easily anyway, a higher sample rate might still have slight advantages, but I'm not sure how much, really.

In the end, I'd base the decision on the sample rate of your delivery format. For video it's typically 48000, and for CD audio it's 44100. You can't hear the difference between them, but if you convert one to the other you might get some conversion issues.
My mistake: not "headroom" but frequency range.
The problem with 44.1 kHz is that it can produce aliasing.
With 96 kHz this is not a problem anymore, since the Nyquist limit is way beyond the hearing range, so if aliasing happens you won't hear it.
Modern audio interfaces and well-written audio software filter aliasing out, but as I said, this is just a safety measure, not a strong requirement.
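If it helps to see where things fold, the basic arithmetic is simple (a toy Python calculation, not a model of any real converter):

```python
# Where a frequency component lands after sampling at rate fs (aliasing).
def alias_freq(f: float, fs: float) -> float:
    f = f % fs
    return min(f, fs - f)

# The 5th harmonic of a 5 kHz tone sits at 25 kHz before sampling:
for fs in (44100, 48000, 96000):
    print(fs, "->", alias_freq(25000, fs), "Hz")
# 44100 -> 19100 Hz (folds back into the audible range)
# 48000 -> 23000 Hz (folds, but lands above hearing)
# 96000 -> 25000 Hz (below Nyquist, no folding at all)
```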

You can't and won't hear any difference between 44.1 kHz and 48 kHz in the end result.
But the topic is not about the end result.
For reasons similar to why we use 32-bit rather than 16-bit processing in the DAW, it's preferable to use a higher sample rate during processing.
The end result can then be reduced to 44.1 kHz / 16-bit without any audible effect on the sound.

Post

4 kHz (so actually about 2 kHz of added frequency range after digital reconstruction) is not much. It means the anti-aliasing filter can be, essentially, slightly "weaker", or start about 2 kHz later (but we're already talking about heavy brick-wall filtering, so it doesn't make much of a difference to the sound, if it's audible at all). Furthermore, whatever aliasing may be present due to the slightly lower sample rate at 44.1 kHz will hardly be significant.
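To give a feel for what those 2 kHz buy in filter terms, here's a rough SciPy estimate of the FIR length needed for a 100 dB brick-wall lowpass from 20 kHz to Nyquist at each rate (kaiserord is only an estimate, and real converters use very different structures, so take the numbers as indicative):

```python
from scipy.signal import kaiserord

# Estimated FIR length for a 100 dB lowpass with its passband edge at
# 20 kHz and its stopband starting at Nyquist:
for fs in (44100, 48000):
    nyquist = fs / 2
    width = (nyquist - 20000) / nyquist   # normalized transition width
    numtaps, beta = kaiserord(100, width)
    print(fs, "Hz ->", numtaps, "taps")
# 44100 needs roughly twice the taps of 48000 for the same spec, i.e. the
# 48 kHz filter can be noticeably "weaker" (shorter, with less ringing).
```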

I personally believe that it makes *very* little difference, and the only way to really prevent aliasing is to work at higher (sometimes much higher) sample rates. For live applications, 96 or 192 kHz may be preferable, provided you have a sufficiently fast system (and it will come with lower latency too). For studio applications, you might as well work at 44.1 kHz (or 48 kHz) and oversample where needed, in the right amount (a compressor may benefit from just 2x or 4x oversampling, whereas a clipper or a distortion effect may require 8x or 16x; for mastering purposes, maybe even higher!).
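As a concrete illustration of "oversample where needed", here's a minimal Python/SciPy sketch of what an oversampled nonlinear stage does internally (real plugins use more carefully designed filters, so this is just the shape of the idea):

```python
import numpy as np
from scipy.signal import resample_poly

def clip_oversampled(x: np.ndarray, factor: int = 4) -> np.ndarray:
    """Hard clip with up/downsampling around the nonlinearity.
    resample_poly's built-in lowpass removes the images on the way up
    and the aliasing products on the way back down."""
    up = resample_poly(x, factor, 1)      # upsample to factor * fs
    up = np.clip(up, -0.5, 0.5)           # the aliasing-prone stage
    return resample_poly(up, 1, factor)   # filter and return to fs

fs = 44100
t = np.arange(fs) / fs
tone = 0.9 * np.sin(2 * np.pi * 5000 * t)
clean = clip_oversampled(tone, factor=8)  # vs. plain np.clip(tone, -0.5, 0.5)
```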

Or... just don't give a shit. Aliasing is very hard to hear, and often there will be only a veeeery tiny difference you can hear only on very high-fidelity monitors (a slightly more "open" sound). Wherever you incur aliasing, you will also incur intermodulation distortion, because you are *not* working with sine waves but with program material (on the bus) or harmonically rich tracks (on the individual channels), and that is going to mask any aliasing you may get. Where does aliasing rear its head the most? On synths: relatively "pure" sounds, often harmonically quite consistent (think of a synth lead), where you will probably hear some strange "added modulation" up in the treble during pitch bends or pitch modulation, or weird-sounding high-octave notes (although competent synths will use BLEP oscillators or something similar, which should prevent aliasing from arising, at least on the "pure" tones).

Post

I never hear aliasing (or only when it's truly obvious), because most plugins are optimized to sound fine.
But I definitely hear differences between 44.1 kHz and 4x or 8x oversampling. It's not always better; it's often just different (usually a bit cleaner when saturation is involved, and that is not always the best choice).
I can hear it when using the built-in oversampling in Saturn, Tupe, TrackComp or Mu, and I can hear it when oversampling Devil Loc or Decapitator with Metaplugin. It's not night and day, but you can hear it quite easily.
With clippers and limiters, I think it's clearly always better, because what you usually expect from a limiter/clipper is transparency.

By the way, Dan Worrall did an interesting video about oversampling, and as far as I can judge technically, I quite agree with him on the subject: it's better to oversample critical plugins than the whole project, because if you run the whole session at 96 kHz without filtering the highs (past 22 kHz) after each processing step, you get some nasty stuff accumulating that you need to filter out at the end anyway when delivering at 44.1 or 48 kHz, and you're losing a lot of CPU power running many plugins that don't need oversampling...
Oversampling each plugin individually gives you both the benefit of oversampling and the filtering that gets rid of the supersonic trash.

In my use case, the limiting factor is the "sound" of Metaplugin. I think it has a sound of its own, because TrackComp oversampled internally and TrackComp oversampled with Metaplugin don't sound the same (and I prefer the internal oversampling). I would be happy to have improved filters in Metaplugin, or more options, but it still usually does what I ask of it.

Anyway, I think the initial question from YnJ was about using a higher sample rate while recording.
My choice now is to always use 48 kHz, because the added disk space and CPU use is minimal and I found it sounded a bit better when I did a test (one time, on one specific piece of hardware). I think it makes sense that some audio interfaces work better at certain sample rates depending on their reconstruction filter (because, if I'm not mistaken, every audio interface does some kind of internal oversampling and filtering when converting analog to digital and digital to analog).
It's not very scientific, but I feel fine about it.
Of course, I would be very pleased to have some more scientific and technical insight into it.

Post

What is the sample rate of your published end product?

Then use that from the start.

Rule of thumb #1: avoid any conversions if you can.
Rule of thumb #2: stop worrying about futile things.
We are the KVR collective. Resistance is futile. You will be assimilated.
My MusicCalc is served over https!!

Post

roman.i wrote: Tue Apr 11, 2023 3:16 pm The problem with 44.1 kHz is that it can produce aliasing.
This is what I read, and why it was recommended to me to use 48 kHz when recording: to make room to filter it out, as far as I understood, and then convert back to 44.1 kHz later.

roman.i wrote: With 96 kHz this is not a problem anymore, since the Nyquist limit is way beyond the hearing range, so if aliasing happens you won't hear it. Modern audio interfaces and well-written audio software filter aliasing out, but as I said, this is just a safety measure, not a strong requirement.
I was advised not to use 96 kHz, as some plug-ins might actually sound worse. I don't know which ones, or how, though.

Post

Calagan wrote: Wed Apr 12, 2023 4:25 am Dan Worrall did an interesting video about oversampling, and as far as I can judge technically, I quite agree with him on the subject: it's better to oversample critical plugins than the whole project, because if you run the whole session at 96 kHz without filtering the highs (past 22 kHz) after each processing step, you get some nasty stuff accumulating that you need to filter out at the end anyway when delivering at 44.1 or 48 kHz, and you're losing a lot of CPU power running many plugins that don't need oversampling...
Oversampling each plugin individually gives you both the benefit of oversampling and the filtering that gets rid of the supersonic trash.
What you write here makes a lot of sense, and seems to answer some of my questions. How do you filter away the nasty stuff? Does recording at a higher sample rate just make it inaudible, or do you need a specific tool?

Post

When I tested plugins at 44.1 kHz vs 96 kHz, 96 kHz always had a more detailed and cleaner sound.
Maybe some people just like how aliasing sounds because they've gotten used to it over the years.

Post

Oversampling is not a completely clean process, though. If the downsampling filter is linear phase, you are introducing some pre-ringing in the passband as well as in the stopband (at least in the upper treble). If it's minimum phase, you are introducing phase shift in the upper treble. They are dual in nature: linear phase means time shift (pre-ringing), minimum phase means phase shift; the former preserves the waveform, the latter preserves causality. Does it matter? We can't really hear either phase shift or pre-ringing in the upper treble, so it should be inconsequential. But there are trade-offs; nothing is free. The former can be mixed with the dry signal externally but introduces latency; the latter is latency-free but can present phase issues between multiple tracks of the same source (a drum kit, for example).
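For anyone who wants to poke at the duality, SciPy can convert a linear-phase prototype into a minimum-phase one. A sketch, with one caveat: the homomorphic method returns a filter whose magnitude approximates the square root of the prototype's, so for equal magnitude you'd design the prototype against the squared target:

```python
from scipy.signal import firwin, minimum_phase, group_delay

fs = 88200                       # 2x rate, before decimating back to 44.1 kHz
lin = firwin(255, 20000, fs=fs)  # linear phase: symmetric taps, a constant
                                 # (255 - 1) / 2 = 127-sample delay, pre-ringing

minph = minimum_phase(lin, method='homomorphic')  # causal, no pre-ring,
                                                  # frequency-dependent delay

w, gd_lin = group_delay((lin, [1.0]), fs=fs)
w, gd_min = group_delay((minph, [1.0]), fs=fs)
print(gd_lin[:3])  # flat ~127 samples everywhere (a pure time shift)
print(gd_min[:3])  # small and varying with frequency (a phase shift)
```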
