## The truth about bit-depth (and digital audio ‘resolution’)

Burillo
KVRAF
3179 posts since 15 Nov, 2006 from Hell
lfm wrote:
Mon Feb 11, 2019 9:45 am
You are thinking like it's about volume
but it is. dynamic range is about volume. or rather, about storing differences between volumes. higher dynamic range means you can encode bigger differences in volume. note: not "finer details", but "bigger differences". 24-bit audio will (in the general case) allow you to encode signals that are 144dB apart from each other. 16-bit will allow you to encode two signals that are 96dB apart from each other.

that's what increasing resolution does. it's not giving you "more fine details" between 0 and -96dB - instead, it extends the range to -144dB. meaning, the range between -96dB and 0 is encoded exactly the same way in 16-bit and 24-bit signals. there is no improvement within that range - not audible, nor measurable, nor even mathematically possible.
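this is easy to verify numerically, by the way. here's a minimal sketch in Python/NumPy (the `quantize` helper and the 997 Hz test tone are my own, purely for illustration): quantize the same signal to 16 and 24 bits and measure the RMS error. what you get is a noise floor near -96dB and -144dB respectively, not "detail" appearing inside the audible range.

```python
import numpy as np

def quantize(x, bits):
    # round a [-1, 1] float signal to the nearest step of a signed
    # `bits`-wide integer grid (no dither - illustration only)
    steps = 2 ** (bits - 1)
    return np.round(x * steps) / steps

t = np.arange(48000) / 48000.0
sine = 0.9 * np.sin(2 * np.pi * 997 * t)   # near-full-scale test tone

for bits in (16, 24):
    err = quantize(sine, bits) - sine
    rms_db = 20 * np.log10(np.sqrt(np.mean(err ** 2)))
    print(f"{bits}-bit quantization error: {rms_db:.1f} dBFS")
```

the 16-bit error lands around -101 dBFS, the 24-bit error around -149 dBFS. same signal, lower noise floor. that's the whole difference.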
lfm wrote:
Mon Feb 11, 2019 9:45 am
- I can just go down to 14 bits and hear no difference?
actually, yes, most likely you won't hear any difference. you can try bitcrushing your favorite recordings and see for yourself. even classical music, which is the poster child for "muh dynamics", will not show any audible artifacts until you get down to 13-14 bits of resolution.
lfm wrote:
Mon Feb 11, 2019 9:45 am
I'm thinking of representing every little nuance of the signal all the way from -8 mil bit steps to +8 mil bit steps that constantly is happening.
there is no nuance to represent there. 24-bit only allows you to "represent" signals between -96dB and -144dB. the -96dB to 0dB range is represented bit-for-bit identically to 16-bit audio. you should try learning the basics of digital audio some time, it's very revealing.
lfm wrote:
Mon Feb 11, 2019 9:45 am
So that many feel CD quality sound sterile and pale - and less true than vinyl in some case - is this maybe about this?
the irony is, going to higher resolutions and higher sampling rates takes you further from vinyl sound, not closer to it. vinyl is lower fidelity than CD (lower dynamic range, narrower frequency range), so in what universe would adding fidelity (i.e. extending dynamic and frequency range, which is what adding bit depth and sampling rate does) lead to less fidelity?
lfm wrote:
Mon Feb 11, 2019 9:45 am
And since you clearly can hear the aliasing introduced at 44100 Hz compared to 96k or whatever higher - that is also part of the picture.
you're confusing playback with processing. there is no aliasing on playback.
Last edited by Burillo on Mon Feb 11, 2019 10:46 am, edited 1 time in total.
From Russia with love

Guenon
KVRAF
1537 posts since 17 Jun, 2005
Hehe. Anyone interested in actually learning about these matters, check my advice on the first page. This is, of course, going just like conversations of this nature always go online. Better to keep one's distance. But instead, learn from actual experts and researchers.

The way bit depth works might be counterintuitive, but it (and digital audio altogether) is no exception to the rule that one shouldn't let passionate pseudoscientific descriptions guide one's practices.

While studying from respectable sources, if you are still doubting the actual effect of bit depth on the signal (basically because there's so much pseudo information available on this, and some of it is described in a fashion that might resonate with your intuition, as demonstrated in this thread), it's a good idea to back up your studies with experimentation, of course. So take, say, any pristine 24 bit recording of a nice acoustic instrument. Do a bit depth reduction and make it 16 bit, or heck, even 10 bit. Flip the polarity and mix 1:1 with the original. Whatever you hear, then, is the difference between those two signals.
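For anyone who wants to try this without firing up a DAW, here is a small Python/NumPy sketch of the same null test (the synthetic "recording" and the 16-bit step size are my stand-ins; with a real 24 bit file the procedure is identical):

```python
import numpy as np

rng = np.random.default_rng(0)
# stand-in for a pristine recording: one second of quiet noise
original = np.clip(0.5 * rng.standard_normal(48000), -1.0, 1.0)

# bit depth reduction to a 16-bit grid (no dither, to keep it simple)
reduced = np.round(original * 32768) / 32768

# flip the polarity and mix 1:1 with the original
residual = original - reduced

peak_db = 20 * np.log10(np.max(np.abs(residual)))
print(f"peak of what survives the null test: {peak_db:.1f} dBFS")
```

Everything that survives the null sits at or below roughly -96 dBFS: quiet, unremarkable noise. That is the entire difference between the two files.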

There's no magic, no "true sound of an instrument", no "all the odd and even harmonics on that sound defining that a violin sounds like a violin and an acoustic guitar like acoustic guitar" there to be found in that difference. If you let someone convince yourself there is, and think it's still represented in the higher bit depth version somehow magically, because after all audio perception is a wondrous thing -- then, well...

Ploki
KVRian
539 posts since 17 Dec, 2009
lfm wrote:
Mon Feb 11, 2019 9:45 am
You are looking too much at the peak meters and like if music were dead current.

You are thinking like it's about volume and loudness war or something.
Music is sound, and sound is change in pressure which in turn is change in voltage which in turn is change in value of a digital sample.
Thats. all. there. is. to. it.
You peak at -12 dBFS and you suddenly don't care about two bits?????
- I can just go down to 14 bits and hear no difference?

Signal is constantly jumping up and down between +/- whatever bit depth you are at - or there would be no music. Dead current is 0 Hz frequency to put it that way, music is alternating current.

I'm thinking of representing every little nuance of the signal all the way from -8 mil bit steps to +8 mil bit steps that constantly is happening.
No, if i peak at -12dBFS i literally don't care about 2 bits, because there is literally no converter on the planet that can do more than -123dB, which is roughly 21 bits.

What you are thinking is wrong, because it doesn't work that way at all. Many people have explained that to you by now.
The true sound of an instrument, is fundamental frequency and then all the odd and even harmonics on that sound defining that a violin sounds like a violin and an acoustic guitar like acoustic guitar, or a piano like a piano or anything of that sort.

And also representing these harmonics, each with lesser amplitude than fundamental frequency of the note played - is what makes it sound true to our ears.

And also representing every harmonic in true phase as original.
The human ear cannot hear more than 130dB of difference, because when you do, your eardrums literally implode and you go deaf, and 24bit is plenty for that.
So the complex musical signal for a full mix - and yet another level that is to be truly represented in coming out from speakers when we listen.

So that many feel CD quality sound sterile and pale - and less true than vinyl in some case - is this maybe about this?
That our ears sense much more than theory of Nyquist theorem maybe.
CD is sterile because it is sterile. That's not a bad thing. That means it's a clean playing ground for you to do whatever you want. Vinyl is not sterile, because it adds all sorts of distortions and imperfections which are not intended by the artist.
And if they are, they can be added before being printed to CD.

Why the hell are you dragging the Nyquist theorem into the equation? The Nyquist theorem deals with SAMPLING FREQUENCY and not bit depth. Stop changing the playing field.

So AD converters sampling the true event of music played in a room is maybe about something more than than these +/- 32767 levels at 44100 samples/s. And also big difference in DA converters as I told earlier - one is not like the other. Just changing capacitors as I had in that story. I also heard a shootout here at KVR some years ago over AD converters, it was rather interesting as well.

And since you clearly can hear the aliasing introduced at 44100 Hz compared to 96k or whatever higher - that is also part of the picture.
shootouts are stupid for stupid people. These things can be measured scientifically; everything else is the human brain with its cognitive biases playing tricks on you.

You cannot hear aliasing on good converters because it doesn't happen anymore, it's not 1995.
Again, stop comparing 44100 to 96000, we are talking about BIT DEPTH, stop moving goal posts.

Anybody spending enough time mixing or listening to music a lot will hear these things.

So this is my game in this thread. So think about it a bit and it will become clear that every step counts.

- you can't hear this

Yeah, it is - and our brain is just beyond incredible in what it does interpreting audio for us to enjoy.

And even more incredible what we can obtain in skills, challenging anything like an athlete or virtuoso soloist of some sort.
I've spent a lot of time listening to and mixing music, i have Focal Twins, i could hear up to 23000Hz when i was young, and I have a turntable at home and buy a few vinyl records every year.

Digital sound is SCIENCE. It cannot capture your idea, it can only capture SOUND.

jochicago
KVRist
357 posts since 26 Feb, 2018
I’m loosely following because these threads can get murkier than swamp water, but I wanted to point out another technical detail that keeps getting lost here.

It's not giving you "more fine details" between 0 and -96dB - instead, it extends the range to -144dB
That’s not true. It seems that most people who argue in favor of 16 bits are confused on this one.

Digital files use snapshots in time of the analog wave.
Sample rate: how many samples per second
Bit depth: how much detail in each sample

The digital representation will be an imperfect wave because it is being recreated from a collection of dots. At the higher resolutions, the deviation becomes so small as to perhaps be audibly insignificant (to be argued), but there is a deviation (quantization error).

Here is a simple chart that shows the difference in a visual way:

audiophile-guide-to-quantization-sm.jpg
https://hometheaterhifi.com/technical/t ... e-shaping/

In that chart, the green line is the analog wave. The dots are the digital sampling of the wave in order to be able to reproduce it. The yellow line is the reproduced wave from the dots that were sampled. You'll have a different wave than the original, and how much it deviates depends on the level of detail of the sampling.

Here is another visualization, from Wikipedia:
wave visual.jpg
https://en.wikipedia.org/wiki/Quantizat ... rocessing)

To make it more obvious we can reduce the bit depth so that it shows the deviation more clearly. Those 2 waves represented in the graphic are not the same shape. As the system has more sampling depth, it can adjust the shape of the wave to better represent the original. If you keep adding bit depth to that chart you'll continue to see the wave change shape as more capacity for detail is introduced.

I hope that helps illustrate the point.
You may argue that, in your opinion, a 16 bit (65k positions) representation of audio waves is enough detail to accurately reproduce a sound. But you cannot argue that there is no difference in the accuracy of the wave reproduction between 16 bit and 24 bit depths. The 24bit is not only going to have less noise, it is also going to have a more faithful version of the wave, because it sampled it with 256X more precision.

Guenon
KVRAF
1537 posts since 17 Jun, 2005
jochicago wrote:
Mon Feb 11, 2019 3:00 pm
But you cannot argue that there is no difference in the accuracy of the wave reproduction between 16 bit and 24 bit depths. The 24bit is not only going to have less noise, it is also going to have a more faithful version of the wave
Are you completely sure you know what you are saying here?

Guenon
KVRAF
1537 posts since 17 Jun, 2005
Maybe I'll answer that myself, heh.

The difference between the resultant waveform and the original signal... is the noise. Ask yourself: if you add noise to a waveform, how does that waveform differ from the original, on a sample-to-sample basis? Observed up close? That's right. You are talking about the noise itself, and there's no additional mystical quality to the difference.

The "simple chart that shows the difference in a visual way" is in the video I linked on the first page btw, and it indeed describes what's going on. The red line is the punchline, if you will: that's the difference between the original wave and the resultant wave, i.e. the noise.

Here's a link to the relevant moment, including the graph you linked (@9:01, if the embed doesn't seek correctly in your browser) :

https://youtu.be/cIQ9IXSUzuM?t=541

jochicago
KVRist
357 posts since 26 Feb, 2018
The difference between the resultant waveform and the original signal... is the noise.
You are saying it yourself. Your position is that the data lost in the quantization (the error) becomes noise. What I've read agrees with you.

From the Wikipedia here's a simple enough explanation:
[...] Quantization is a many-to-few mapping, it is an inherently non-linear and irreversible process. [...] It is impossible, in general, to recover the exact input value when given only the output value.
[...]
The difference between an input value and its quantized value (such as round-off error) is referred to as quantization error.
AFAIK here, in the video, in articles, the data lost (quantization error) is being called noise. Higher bit depth = less noise (less error, less data lost)

The higher number of bits used to represent a sample will result in a better approximation of the audio signal and a reduction in quantization error (noise)
https://www.analog.com/en/education/edu ... range.html

Stop for a moment on that. If the data lost during quantization IS the noise,
then the higher bit depths have less noise/error,
because you are losing less original data in the quantization...
because the detail is higher and thus you are ending up with a more accurate wave.

I don't know about the video. I can't argue against this YouTube guy if you feel he is telling a different story.

I can only speak to simple math and what the articles talk about. The process to digitize a wave is simple enough in basic terms. We are sampling a naturally curvy thing into dots (quantizing). That process demands data loss (see Wikipedia). In the process of creating the dots, and then converting those back into waves, you are definitely approximating and rounding things off, so the final wave cannot be the same as the original wave. That's the quantization error part.

I call the whole thing lesser fidelity. The overarching theme is that during quantization you are losing a portion of the original fine detail, even if arguably insignificant. The higher the bit depth, the less of it you'll lose.

The argument then is: at 16 bits you are losing so little detail as to not be audible. However, the research example I presented earlier argues otherwise. Even untrained people can potentially hear it.

At 24 bits you are sampling with 256x more precision. The data loss there will be significantly less, and have much less of an impact in the quality of your music.

Guenon
KVRAF
1537 posts since 17 Jun, 2005
Jochicago, you are on a respectable path, but at the same time you are letting your intuition get in the way of learning how it actually works. Everything in how you describe digital audio (going from "a curvy wave" to a digitized one, yes, and how the noise is then also "less fidelity" and "less faithful" instead of simply being... noise) is your intuition talking. That's fine, and fluctuations happening in audio-rate signals can be rather unintuitive, and I think that is precisely the main reason for prolonged debates on matters like this. People genuinely believe their own interpretations of this stuff, without making thoroughly sure they understand what's going on.

But again for anyone interested, look for high quality sources instead of trusting some random forum person (including me, even though I've taught this stuff at the university, but hah, I'm still just a random user sporting a C64 game icon on a forum) -- and if in doubt yet really motivated, contact someone in person who does professional research and development on digital audio matters, and pay attention to what they say.
jochicago wrote:
Mon Feb 11, 2019 4:30 pm
The argument then is: at 16 bits you are losing so little detail as to not be audible. However, the research example I presented earlier argues otherwise. Even untrained people can potentially hear it.
Two things bother me in misrepresentations like that, even when unintentional such as ^ this. Still, firstly, you are choosing to read the paper so that it reinforces your intuition. When telling me and others about it, you call noise "losing detail", which is fine, if you understand that it's no different from calling other situations that add noise "losing detail" as well. Secondly, the Queen Mary AES paper in question is a nice read, but it literally doesn't argue that at all. Elaborating on the implications, Mr. Reiss wrote: "Several important practical aspects of high resolution audio perception could neither be confirmed nor denied. Most studies focused on the sample rate, so the ability to discriminate high bit depth, e.g., 24 bit versus 16 bit, remains an open question."

Also keep in mind that the ability to tell a difference between two things doesn't translate to anything more than... being able to tell a difference. For example, if a digital signal has a low enough bit depth for you to be able to make out a slight hiss, that's exactly what it is; as if you hear a tape hiss, and of course you can say "I recognize that as the low bit depth file." Being able to tell a difference doesn't still in any way validate your other intuition based beliefs on the subject.

As I've said, I know how things like this shape up on forums, so I'm trying hard not to get sucked in too much. I think the path is clearly outlined for those who want to make sure they know what they are talking about.
jochicago wrote:
Mon Feb 11, 2019 4:30 pm
I don't know about the video. I can't argue against this Youtube guy If you feel he is telling a different story.
If you want to put it like that, I do feel he's telling a different story, hehe, and if you really want to be serious about this and test your beliefs, arguing (against) his points succinctly would be a good place to start. As in, contact him when you can present actual cold, studied facts on what is actually scientifically wrong in that neat little package of digital audio lecturing I linked. He's Chris Montgomery, btw, the developer of the (Ogg) Vorbis codec and so on.
Last edited by Guenon on Mon Feb 11, 2019 7:05 pm, edited 2 times in total.

sirmonkey
KVRist
102 posts since 26 Sep, 2014
What I appreciate in this thread: Despite strong disagreement, at least the participants seem to be actually reading, and thinking about, each other's comments and logic.
My 2 or 3 cents:
Cent #1. So, 24 bit will give a finer resolution, and a mathematically closer representation of a recorded sound. Right? Is that agreed upon?
Cent #1.5. The difference between the original sound and the 16 bit or 24 bit version has been defined as "noise". EDIT: Wouldn't most gear produce more noise than is produced by a 16 bit depth?
Cent #2. 16 bit seems like a lot of resolution. But due to improved computing technology, we have 24 bit depth (and higher sample rates). I'm pretty sure that 16 bit is enough resolution that we can not hear a difference between that and 24 bit.
Cent #3. Blind test idea: Have subjects listen to 3 recordings: a vinyl record, a 16 bit recording of the record (which would include sounds from dust & scratches, etc.), and then a 24 bit recording. Of course, make sure the volume is the same, and don't let the subject see what is being played. I bet that very, very few could reliably and repeatedly tell the difference.


jochicago
KVRist
357 posts since 26 Feb, 2018
Guenon wrote:
Mon Feb 11, 2019 5:14 pm
you are just letting your intuition get in the way of learning how it actually works .
Because you feel strongly that you understand this information, I'd like to ask for clarification:

- Is your position that the waveforms in music at 16 bits are 100% identical to the same waveforms at 24 bits, with 0 deviation of any kind?

And if so,
- what's your view on quantization error?
- How do you reconcile that with technical articles that define quantization as a lossy process, with higher bit depths resulting in less loss?

firstly, you are choosing to read the paper so that it reinforces your intuition.
2 things.
First, in that instance I realized after the fact that I was referencing the wrong example for the situation. The original research was for 44.1/16, not bit depth alone, so bringing that up at the time I did was a bad example as it didn't isolate bit depth. However, I did quote 2 other articles on bit depth, and presented charts. So it is also disingenuous for you to call me out on a mistake in referencing one of my 3 sources, while casually ignoring the other 2. If you feel those articles are wrong (or I have misunderstood them in your estimation), please feel free to refute them.

Second, you got me all wrong there. I don't start from an intuition and seek reinforcement. I'm a true renaissance man. I don't want to "be right", I seek to work with the right information. I have 0 emotional interest in validating my preconceived notions. I'm wrong about as often as everyone else is. The only cure for that is more open-minded research (and dialogue).

I posted the link to the research and articles that I found valuable in my study of this topic. I hope that helps whoever is reading as it has helped me.

Also, I'd like to add that I have yet to find a technical article that says there is no data loss in the waveforms at 16 bits (or any other bit depth). I have specifically been looking for something like that to help explain this idea many have that 16 bits has no data loss. Up until now every article I've read that touches on the matter has been consistent: quantization is lossy, the higher the bit depth the lower the loss, and it remains like that even at high bit depths. There isn't a bit depth at which you don't lose some data, it just keeps getting more and more insignificant.

Ploki
KVRian
539 posts since 17 Dec, 2009
jochicago wrote:
Tue Feb 12, 2019 1:40 am

- Is your position that the waveforms in music at 16 bits are 100% identical to the same waveforms at 24 bits, with 0 deviation of any kind?
[...]

Quantisation error is directly responsible for the noise floor. The noise floor that appears when you convert something to 16bit (if you don't add any noise yourself) is a direct result of quantisation error.
Upon reconstruction, the resulting analog waveform is identical to the 24bit one, except that due to quantisation error, 16bit has a higher noise floor. The picture about quantisation that you posted proves exactly that.

I will repeat myself here, but bear with me.
You have two signals. One is 16bit, and one is 24bit with added noise at -96dB.
The resulting binary sample values for a random sample will be:
24bits: 0101 1101 0010 1001 1111 1111 (the bottom bits are blanked with noise!)
16bits: 0101 1101 0010 1001 ____ ____
edit:
the values for those two samples are: 6105599 for 24BIT, and 23849 for 16BIT. As you can see, it's not really intuitive to think of binary samples in decimal values!

Where is the added resolution of 24bits?
(You can try that yourself with a bit meter and some test oscillators in your daw)
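The same layout can be checked in a couple of lines of Python (the sample values here are hypothetical, just to show where the extra bits live):

```python
# a hypothetical 16-bit sample, and the same sample stored in a 24-bit word
s16 = 0b0101_1101_0010_1001        # 23849
noise = 0b1111_1111                # low-level noise filling the 8 extra LSBs
s24 = (s16 << 8) | noise           # 6105599

# the top 16 bits of the 24-bit word are the 16-bit sample, unchanged
assert s24 >> 8 == s16

# the "extra resolution" is entirely in the bottom 8 bits, i.e. a deviation
# smaller than one 16-bit step - below the 16-bit noise floor
print(s24 - (s16 << 8))            # 255 (out of 256 sub-steps per 16-bit step)
```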

The added resolution of 24 bits just allows you to write samples with more precision and therefore achieve a lower noise floor, but when the signal is reconstructed, both 16bit and 24bit samples after FILTERING is applied *snap back to their original analogue value*.

The distance of that snap back to the original value is the quantisation error, and is what causes quantisation noise!
This is where the intuition part comes in.
Yes, a sample is written with higher accuracy.
No, it doesn't affect anything but the noise floor after reconstruction. The only result of the higher accuracy is a lower noise floor.
sirmonkey wrote:
Mon Feb 11, 2019 5:47 pm
So, 24 bit will give a finer resolution, and a mathematically closer representation of a recorded sound. Right? Is that agreed upon?
Cent #1.5. The difference between the original sound and the 16 bit or 24 bit version has been defined as "noise". EDIT: Wouldn't most gear produce more noise than is produced by a 16 bit depth?
Cent #2. 16 bit seems like a lot of resolution. But due to improved computing technology, we have 24 bit depth (and higher sample rates). I'm pretty sure that 16 bit is enough resolution that we can not hear a difference between that and 24 bit. Cent #3. Blind test idea: Have subjects listen to 3 recordings: a vinyl record, a 16 bit recording of the record (which would include sounds from dust & scratches, etc.), and then a 24 bit recording. Of course, make sure the volume is the same, and don't let the subject see what is being played. I bet that very, very few could reliably and repeatedly tell the difference.
1) yes but it only results in lower noise floor, not better accuracy of the same dynamic range already available at lower bit depths.
1.5) best gear: 21bits, around -123dB, thermally limited by IC designs
2) when you do destructive edits or you record at lower levels, 16bit can be very problematic. for target product with higher dynamic range, 24bit is ideal and doesn't take much more space. 16bit is more than ok for most applications and better than any analog medium
3) that would not work, because vinyl records have different masters because of the physical limitations of the medium, and they have better or worse RIAA curves (EQs) applied to them before pressing, and inverted upon playback.
This inversion is not digitally exact, so there's bound to be some phase distortion.

There's very little gear that can play beyond -110dB, which is the SNR of dithered 16bit...
you only need a high-quality ADC with a bit meter to see how little to nothing a 24bit final format brings you.

Guenon
KVRAF
1537 posts since 17 Jun, 2005
Just a quick note
jochicago wrote:
Tue Feb 12, 2019 1:40 am
- Is your position that the waveforms in music at 16 bits are 100% identical to the same waveforms at 24 bits, with 0 deviation of any kind?
Kindly, this is still your intuition talking. Lower bit depth equals higher noise floor. This is exactly the unintuitive part that is throwing you off. If there is (extremely quiet, but still) more noise in a signal, how could the waveform be 100% identical? The noise is in the resultant waveform, which is different as a result. But then again, it really is just noise, so the waveform is preserved just as nicely as in any system adding very quiet noise. There's no magic involved that makes the result somehow "less true to the original instrument" in any other way.

In other words, if you have any device in a signal chain that adds (extremely quiet, but still) noise in the signal, the waveform will naturally not be 100% identical, as the values from sample to sample and cycle to cycle are slightly jumping larger and smaller, compared to the original shape of the waveform. For example, if you are recording to tape, tape hiss does this. Again, there is no magic involved. It is what it is.

About your other sources: you are merely interpreting them from a viewpoint that misunderstands their implications, just like the one I did mention.

Burillo
KVRAF
3179 posts since 15 Nov, 2006 from Hell
jochicago wrote:
Tue Feb 12, 2019 1:40 am
- Is your position that the waveforms in music at 16 bits are 100% identical to the same waveforms at 24 bits, with 0 deviation of any kind?

yes, 16-bit and 24-bit sound are bit-for-bit identical up to 16 bits of resolution. the differences only start manifesting whenever you go beyond the 16-bit range.

i have said that already, and i can repeat it however many times is necessary: there is absolutely no difference between 16-bit and 24-bit up to and including the entire 16-bit range. the differences only manifest themselves within the extended range. there is no "more detailed 16-bit" sound. that's not how digital audio works.

how about, instead of using your intuition to understand digital audio, you use empirical evidence? how about, for example, you download Stillwell's Bitter, and see for yourself? whenever you play back a file in your DAW, it gets automatically converted into your DAW's native resolution - so, a 16-bit file will show up as 16-bit in Bitter, while a 24-bit file will show up as 24-bit, even though your DAW likely processes sound in 64-bit float. (in fact, most mastered music will show up as 13-bit files...)

you can then bitcrush the 24-bit file, and look at Bitter - you'll see exactly what you expect. you can then reverse the phase of bitcrushed signal, and see the difference between original and bitcrushed signal (tiny amounts of noise).

when you see for yourself that this is indeed the case, i invite you to reflect on what that means. in particular, try asking yourself this: how is it even possible to know how many bits of resolution a track uses, if 24-bit sound is "more detailed" within the same range 16-bit sound is? surely if that was the case, once converted to 64-bit, there wouldn't have been any difference between the two, as far as sample values go - those would have been "scaled" to fit 64-bit range, would they not?
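if you don't have Bitter handy, the idea behind such a bit meter is simple enough to sketch in Python/NumPy (the `effective_bits` helper is my own rough stand-in, not Bitter's actual algorithm): find the lowest bit that ever toggles. a 16-bit signal sitting in a 24-bit container never touches the bottom 8 bits, and that's exactly how a meter can tell.

```python
import numpy as np

def effective_bits(samples, container_bits=24):
    # find the lowest bit position that ever toggles across the signal;
    # everything below it is provably unused "resolution"
    used = int(np.bitwise_or.reduce(np.abs(samples)))
    if used == 0:
        return 0
    lowest_set = (used & -used).bit_length() - 1
    return container_bits - lowest_set

rng = np.random.default_rng(1)
sig16 = rng.integers(-32768, 32768, size=4096)  # a random 16-bit signal
as24 = sig16.astype(np.int64) << 8              # same audio in a 24-bit container
print(effective_bits(as24))                     # 16: the extra bits are empty
```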
From Russia with love

jochicago
KVRist
357 posts since 26 Feb, 2018
Guenon wrote:
Tue Feb 12, 2019 2:44 am
It really is just noise, so the waveform is preserved just as nicely as in any system adding very quiet noise. There's no magic involved that makes the result somehow "less true to the original instrument" in any other way.
Thank you for clarifying your thoughts. I'm now convinced you and I are talking about the same thing, just from different angles. You are saying the waveform is "the same" albeit hampered by a tiny bit of noise.

I'm saying that the noise is the manifestation of the nicks and scratches that you put on that waveform when you sampled it at 16 bits, and you would have less of that noise in a 24 bit system (but still some manner of faint noise). I believe you will agree with that.

To you that means that it is the same waveform, just slightly noisier. To me that means that it is the same waveform, just slightly less accurate because it is constrained to the 16 bits space. Same thing, case closed.

Burillo wrote:
Tue Feb 12, 2019 2:50 am
once converted to 64-bit, there wouldn't have been any difference between the two, as far as sample values go - those would have been "scaled" to fit 64-bit range, would they not?
I think this is another point that brings confusion. When we convert up to 64 bits for mixing we are basically just putting the same data in a larger container. It's like putting a cup of water in a gallon bucket. We still only have the equivalent of a cup of water.

If your original file starts at 16 bits, then you bring it into the DAW and it converts to 64-bit mixing, you don't have a 64-bit wave file, you have a cup of 16-bit data in a 64-bit bucket. Whatever noise was there at 16 bits carries over, along with whatever lack of detail was already baked into the file.

However, there is a difference past the initial conversion. The data at first is the same as the 16-bit file, but once you start applying effects like EQ, compression, or harmonics, those calculations are being applied with the freedom of detail of the larger "bucket" and start taking advantage of the additional room to register data. Calculations are not stuck inside 16 bits; they are free to end up with long numeric tails and swim openly in that bucket. Ultimately the quality and accuracy of the effects will sound better if you apply them inside the 64-bit bucket. That's why DAWs work internally in 64-bit: to allow for a very high level of internal precision when editing the files, until the audio is ready to be rendered for human hearing (because we sure as anything cannot hear the level of detail of 64 bits).
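A quick way to see why that internal headroom matters is to simulate a destructive edit in Python/NumPy (the gain values and test tone below are just illustrative): drop a signal by 30 dB, bring it back up, and compare doing the round trip entirely in 64-bit floats versus forcing a 16-bit rounding step in between.

```python
import numpy as np

x = 0.5 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
down, up = 10 ** (-30 / 20), 10 ** (30 / 20)   # -30 dB, then +30 dB

# round trip entirely in 64-bit float
f = (x * down) * up

# same round trip, but rounded to a 16-bit grid while attenuated
q = (np.round(x * down * 32768) / 32768) * up

err_float = 20 * np.log10(np.max(np.abs(f - x)) + 1e-300)
err_int = 20 * np.log10(np.max(np.abs(q - x)))
print(f"float64 round trip error: {err_float:.0f} dBFS")
print(f"16-bit round trip error:  {err_int:.0f} dBFS")
```

The float path comes back essentially untouched, while the 16-bit path carries its quantization noise up with the gain, landing around -66 dBFS.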

From the beginning of the chain, if you can record at 24 bits you'll end up with a recording that has more detail (less error or noise) than if you had used a 16-bit system instead. When you bring a 24-bit recording into the DAW you now have a wave file that is more faithful to the original sound, with less noise. Any effects you apply thereafter are going to get a chance to operate on a more faithful version of the waveform, having to deal with less starting noise.

At the end, when you export at any bit depth you are making a choice about how much detail to preserve / how much quantization error to endure, which is what I call the "nicks and scratches" that we put on a high-res wave when it is being stuffed into a limited container like 16 bits (or any other bit depth; they all have a limit, they all introduce their own amount of noise and error).

So, one more run for simplicity sake:
- Record at 24 bits, you have slightly less noise and slightly more precise waveforms.
- Edit at 64 bits (in your DAW) and you'll have essentially infinite space to make any adjustments without sacrificing quality (not introducing any more nicks and scratches while mixing)
- When it is time to export, you are going to nick and scratch the waveforms again. It's your choice by how much.
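That export step is also where dithering (mentioned earlier in the thread) comes in: adding a tiny amount of noise before the final rounding decorrelates the quantization error from the signal. Here is a sketch of a 16-bit export with TPDF dither (the function name and the simple mix signal are my own, for illustration):

```python
import numpy as np

def export_16bit(x, dither=True, seed=0):
    # requantize a float mix (values in [-1, 1]) to 16-bit integers,
    # optionally adding TPDF dither of +/- 1 LSB before rounding
    rng = np.random.default_rng(seed)
    if dither:
        # triangular dither: sum of two uniform variables, +/- one 16-bit step
        x = x + (rng.random(x.shape) - rng.random(x.shape)) / 32768
    return np.clip(np.round(x * 32767), -32768, 32767).astype(np.int16)

mix = 0.25 * np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
out = export_16bit(mix)
print(out.dtype, int(np.max(np.abs(out))))
```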