Why doesn't EQing a sound change the timbre?

Post

A 'static' EQ is perhaps more analogous to changing the room tone rather than the instrument tone. Your room always responds to 'frequency input' in exactly the same way, whereas the spectrum of your instrument can radically change depending on what you're doing to it, and of course your instrument's spectrum also changes over time.

Get your EQ bands moving, on the other hand, and you can alter timbre radically. The filter in your synth is essentially an EQ at the end of the day; moreover, it's (usually) a polyphonic EQ. Set up your synth with a band-reject filter plus full keytracking and you've got, in essence, an EQ that always cuts the same part of the spectrum in proportion to the note being played (a minimal sketch of the idea follows below).
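
To make the keytracking point concrete, here's a minimal sketch in plain Python. Centring the notch on the 3rd harmonic is an illustrative assumption on my part, not something from the post above; the point is only that with 100% keytracking the notch centre moves with the played note, so the same relative part of each note's spectrum is removed.

def midi_to_hz(note):
    """MIDI note number to frequency in Hz (A4 = note 69 = 440 Hz)."""
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def notch_center_hz(note, harmonic=3.0):
    """Centre the band-reject on a fixed harmonic of the played note."""
    return midi_to_hz(note) * harmonic

for note in (48, 60, 72):  # C3, C4, C5
    print(f"MIDI {note}: fundamental {midi_to_hz(note):6.1f} Hz, "
          f"notch at {notch_center_hz(note):7.1f} Hz")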

Edit: VarikusBrainz - you wait 2 hours for a bus then 2 come along at once :lol:

Post

BertKoor wrote:Yes, everything in between the source (the stream of bits in your DAW) and the target (your ears) has an influence. This is a given fact you just have to deal with.

Say we are together in a room and you play the piano. I then hear the piano nearly the same as you do, but not exactly the same, because I am not at your seat but a few feet away. In my listening position I get different reflections from the walls, different standing waves. Does that matter? Probably not.

Say I walk out of the room and close the door. I can still hear the same piano playing. The notes are exactly the same but the timbre has changed: there are significantly fewer high frequencies coming through that closed door. This is very similar to applying EQ. You can use EQ to simulate the sound as if you'd walked out of the room, by cutting the high (and some mid) frequencies to get that muffled sound of the neighbour playing the piano.

Because I have knowledge of how a piano sounds close by, and I have experience with walking around the house, my brain might do some compensation unconsciously.

Another example: a telephone line does not transport the full audio spectrum, but is limited to a narrow band (roughly 300 Hz to 3.4 kHz, carried at an 8 kHz sample rate). You can simulate that with EQ, and the timbre changes. But not the music itself.

The same applies if I listen to music. To some extent it doesn't matter whether I use a good hi-fi set, the dinky speakers on my laptop, or a telephone line: I can still enjoy the music even if the playback medium cannot reproduce certain frequency bands. Again, my brain might do some compensation unconsciously.
I agree with everything you have written. But my question was not about "how to recognize" things in sounds. For that, I know humans are awesome at pattern recognition, able to distinguish your voice even from a heavily "manipulated" source, or a violin from a guitar within a church full of reverb.

But you are not doing that when listening to music (from a listener's perspective, not a producer's :P). You "just" get an auditory perception and feel/experience emotions/things when listening to a "song". This perception depends on both absolute qualities (pitch and rhythm, which stay the same whatever environment I use) and relative, per-environment ones (timbre, together with loudness).

This means that a song is not a concrete thing (referring to a single individual, not to everybody, of course), since it will vary with the place and tools I use to play it back (thanks to variations in colour). You can't make a singular colour!

When you make a track, you are making a dynamic creature, not a static one. I'm realizing this... but I'm still not sure if I'm correct. That's why your opinions count!!! :)
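
As an aside, BertKoor's closed-door and telephone examples above are easy to approximate in code. A minimal sketch, assuming NumPy/SciPy are available; the exact cutoff frequencies are illustrative assumptions:

# Band-limit the same toy signal two ways: a "closed door" low-pass and a
# "telephone" band-pass.
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
t = np.arange(fs) / fs
# A piano-ish toy tone: fundamental at 220 Hz plus a few decaying harmonics.
x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 9))

# Closed door: steep low-pass, cutting the highs (and some mids).
b, a = butter(4, 1000 / (fs / 2), btype="lowpass")
behind_door = lfilter(b, a, x)

# Telephone: roughly a 300 Hz - 3.4 kHz passband.
b, a = butter(4, [300 / (fs / 2), 3400 / (fs / 2)], btype="bandpass")
on_phone = lfilter(b, a, x)

Same notes, same music; only the spectral envelope, and hence the timbre, differs between x, behind_door and on_phone.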

Post

Or maybe it is as Christian Schüler said:
Christian Schüler wrote:Why doesn't white-balancing a picture change its colour?

EQ'ing a sound is very much like white-balancing a picture. Although the color balance has shifted a bit, the sky is still blue and the grass is still green.

Both EQ for sound and white balance for pictures usually introduce very gentle and gradual changes to the spectral content. (A +12 dB EQ can be seen as "gradual" when compared with the fact that, for example, a musical instrument has some 60 or 90 dB of contrast between spectral bands.)

Factor in the adaptation of human hearing. The ear has an auto "white-balance" built in where it adapts to the global spectral envelope. Otherwise you wouldn't be able to recognize a voice, say, when muffled through a door. This will further lessen the perceived impact of an EQ.
So we are able to grasp and extrapolate the "qualia/concept/call it whatever you want" of a timbre/colour within our mind (i.e. the perception of the "nature" of a sound) across different "representations" of the signal (i.e. the shaped sound we get in different environments).
Christian Schüler wrote: If, however, the white balance were so severe as to basically remove a colour (comparable to an EQ completely removing a frequency band), then the nature of the source material would be changed. A quantity, when changed over orders of magnitude, acquires a new quality.
If the brain has this "white balance" built in and it's able to retrieve the nature of a sound across different spectra, this means that I'm really describing the nature of a synthesized sound (when designing it) only once I really know the "response" of my room/speakers; this makes sense!!!

The doubt along this path of thinking is: if so, why do we prefer to listen to music on some speakers/headphones/rooms rather than others, if the colour we extrapolate with our built-in "white balance" is the same? And why do I like to boost the bass on my car's EQ when listening to a track?
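
A quick check on the +12 dB versus 60-90 dB comparison quoted above. Assuming amplitude decibels (the 20·log10 convention, which is my assumption), it shows why a gentle EQ reads as a tonal shift rather than a new timbre:

def db_to_amplitude_ratio(db):
    """Convert decibels to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20.0)

print(f"+12 dB EQ boost -> x{db_to_amplitude_ratio(12):.1f} in amplitude")
print(f" 60 dB contrast -> x{db_to_amplitude_ratio(60):,.0f}")
print(f" 90 dB contrast -> x{db_to_amplitude_ratio(90):,.0f}")
# ~4x from the EQ versus 1,000x to ~31,623x within the instrument's own
# spectrum: the EQ's change is tiny next to the signal's internal contrast.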

Post

Hey guys...
... so no one wants to participate in this any further? (aka bump) :P

I still have lots of doubts about "how/what I listen to/from music" (yeah, it "sounds" very bizarre).

Edit: putting the whole question in other terms.

When I read a book, the written letters can be shaped, sized, coloured and spaced in infinite ways, and read under different room lighting, but I'll still catch the "meaning" of the letters (with more or less effort).

When I listen to a song/music, I'm not in the abstracting layer where a guitar (played with the infinite colours added by environments) is "balanced" by the brain and caught (like the letters above): I just "enjoy" the sound as the brain "converts" it from the actual air pressure, in that moment. I.e. a pair of speakers/headphones with huge bass and/or a different dynamic response gives me "different letters" (even if similar).

I can switch my brain over to abstracting what the original sound might be, but what I do to "enjoy music" is take in the "real-time" sound within my brain.

Or am I crazy?

Post

Nowhk wrote:Or am I crazy?
Not likely, it pops up regularly here. Last Tuesday JCJR wrote a pretty good post on the subject. In case you missed it:
viewtopic.php?f=102&t=485714&p=6806574#p6806574

Back on subject: if you brutalise audio with EQ... as long as your brain still recognises what it should be, then you're hearing it like that: something which has been processed. But if that connection cannot be made, then you'll hear it as something with its own distinct timbre.

Some listeners automatically decompose what they hear into the separate instruments and might even recognise how it was produced (e.g. a piano with way too much compression), while others let it all just wash over their heads and get engulfed in an impenetrable Wall Of Sound.

We all hear differently, even from moment to moment. Funny thing, the brain...

Post

BertKoor wrote:Some listeners automatically decompose what they hear into the separate instruments and might even recognise how it was produced (e.g. a piano with way too much compression)
I was inclined to think that this is the way I "process" sound within my brain.
But if that's the case, why do I appreciate the sound from one pair of headphones/speakers more than from others? If I know both environments, I should be able to extrapolate the same "thing", the "distinct timbre/dynamics".

But I still prefer listening to music on pair A rather than pair B. Thus this means that "I actually hear two different things" (one with more "punch" from the kick because of the more reactive dynamics, heavier bass, and so on).

Post

Nowhk wrote:why do I appreciate the sound from one pair of headphones/speakers more than from others?
Because you appreciate better quality, but you need to learn to recognise what quality is first.

It's a bit like eating factory-made bread all the time: you're accustomed to it. Only when you go and eat bread made by a good bakery do you notice the difference and realise what crap comes out of the factory.

Story time: there was this nice song playing on the radio all the time, and I really liked it. So I bought the album (it was Skunk Anansie) and I played that album all the time. Then one day I heard it in my car on the radio again, and heard there was a difference: the dynamics were all squashed by the broadcasting processor. Unbearable... I never noticed that before I bought the album, and I also did not notice it when I played the album for the first time.

Post

Sorry for the delay, I was on holiday :)
BertKoor wrote:
Nowhk wrote:why do I appreciate the sound from one pair of headphones/speakers more than from others?
Because you appreciate better quality, but you need to learn to recognise what quality is first.

It's a bit like eating factory-made bread all the time: you're accustomed to it. Only when you go and eat bread made by a good bakery do you notice the difference and realise what crap comes out of the factory.
That's true, of course.

What I'm saying/discovering (and realizing :ud: :o ) is that "every listening" of a song/production will be different (due to the environment, the position where you are, and so on). And I'm talking about "perception", not "feelings/emotions".

By definition: sound is defined by ANSI/ASA S1.1-2013 as "(a) Oscillation in pressure, stress, particle displacement, particle velocity, etc., propagated in a medium with internal forces (e.g., elastic or viscous), or the superposition of such propagated oscillation. (b) Auditory sensation evoked by the oscillation described in (a)."

If the air pressure always changes a bit in whatever environment you play it in (period), it's also logical that the auditory sensation (perception) will change accordingly. This also means that some of the components making up the "current" perception are changed: "timbre", for example, will change (a bit), and so will volume and such. The only constant I can think of right now is pitch perception, which you get unvaried (see the small check below).
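
To illustrate that last point, here is a minimal sketch, assuming NumPy/SciPy; the "dull speaker" filter is an illustrative assumption. Playback colouration rescales the partials but doesn't move them, so the strongest partial, and hence the perceived pitch, stays put:

import numpy as np
from scipy.signal import butter, lfilter

fs = 44100
t = np.arange(fs) / fs
# A tone at 220 Hz with a few decaying harmonics.
x = sum(np.sin(2 * np.pi * 220 * k * t) / k for k in range(1, 6))

# Colour the playback like a dull speaker: gentle low-pass at 2 kHz.
b, a = butter(2, 2000 / (fs / 2), btype="lowpass")
y = lfilter(b, a, x)

for name, sig in (("original", x), ("coloured", y)):
    spectrum = np.abs(np.fft.rfft(sig))
    peak_hz = np.argmax(spectrum) * fs / len(sig)
    print(f"{name}: strongest partial at {peak_hz:.0f} Hz")  # both: 220 Hz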

Beyond that, our brain is awesome at abstracting and extrapolating things (when you analyse them, thanks to your experience with sound and instruments), even if the "perception" is different: the compression on a piano, which kind of oscillator technique (FM/AM/granular) was used, or which kind of guitar. You go deep and recognise things as patterns; we humans are beasts at this.

But what actually makes me "breathless" when listening to music is the "sound" I'm hearing at that moment (i.e. the perception), which is different on every playback, sweeping away every concept of concreteness!!! That's what makes me confused...

The "reasoning" about musical productions/execution occurs in different stage, the "analysis" stage, which just don't excite me as "listening" :)

Post

One thing to watch out for is your ears being too tired to detect what is actually there.
I once did some fairly drastic EQing (and M/S work, cutting the middle of the field down) on a track; hearing it a few days later showed clearly how drastic it was, and it was not a useful remastering at all.

Post

Hi there,

Not really sure what you are talking about, jancivil.

Anyway, I'm still in doubt about "what you can preserve" and "what you just can't" in music (if this phrase makes sense to you all).

For example, I think I can affirm that melody and harmony are perceived the same on every playback system/environment. But maybe not even those: just the blood pressure in our ears can make a difference, or the decay ("waterfall") of the room will affect some partials, masking/hiding some frequencies.

Since someone has bumped it up, I'll continue the discussion :D
BertKoor wrote:You can simulate that with EQ, and the timbre changes. But not the music itself.
Isn't this contradictory? Or do you not consider timbre part of a sound? If the timbre changes, the sound changes as well, and thus the music "itself" changes...

Post

Nowhk wrote: Not really sure what you are talking about, jancivil.
I think that you still don't get what people are trying to tell you in this thread, and that's a big part of your issue. Are you sure music is the right thing for you?

Post

Russell Grand wrote:Are you sure music is the right thing for you?
Yes I am. Are you sure you get what my doubt is?

Post

Nowhk wrote:Isn't this contradictory? Or do you not consider timbre part of a sound? If the timbre changes, the sound changes as well, and thus the music "itself" changes...
No. E.g. Eleanor Rigby or anything else played by the Beatles, by the BBC Philharmonic Orchestra and by a ukulele group will sound very different but it will still be the same music. And it will still be easily recognisable as the same music. Music transcends instrumentation, arrangement and trivia like timbre with no trouble.

Steve

Post

slipstick wrote:
Nowhk wrote:Isn't this contradictory? Or do you not consider timbre part of a sound? If the timbre changes, the sound changes as well, and thus the music "itself" changes...
No. E.g. Eleanor Rigby or anything else played by the Beatles, by the BBC Philharmonic Orchestra and by a ukulele group will sound very different but it will still be the same music. And it will still be easily recognisable as the same music. Music transcends instrumentation, arrangement and trivia like timbre with no trouble.

Steve
I see your point, but I'm not really into this kind of "religious" thing :D

If that were really the truth, spending lots of money on hardware and pro audio gear would be stupid and futile. Even a pair of earbuds would transcend to the same music then...

A €20 bedroom master would be the same as one made at Abbey Road Studios.
Also, why use a Steinway piano instead of this?

[image: a toy piano]
Sorry, I don't agree with this.
But that's just my point of view.

Post

Nowhk wrote:
slipstick wrote:
Nowhk wrote:Isn't this contradictory? Or do you not consider timbre part of a sound? If the timbre changes, the sound changes as well, and thus the music "itself" changes...
No. E.g. Eleanor Rigby or anything else played by the Beatles, by the BBC Philharmonic Orchestra and by a ukulele group will sound very different but it will still be the same music. And it will still be easily recognisable as the same music. Music transcends instrumentation, arrangement and trivia like timbre with no trouble.

Steve
I see your point, but I'm not really into this kind of "religious" thing :D

If that were really the truth, spending lots of money on hardware and pro audio gear would be stupid and futile. Even a pair of earbuds would transcend to the same music then...

A €20 bedroom master would be the same as one made at Abbey Road Studios.
Also, why use a Steinway piano instead of this?

[image: a toy piano]
Sorry, I don't agree with this.
But that's just my point of view.
Whether it's recorded on a kid's toy onto a hissy cassette or played back from the Abbey Road master tapes, it's still Eleanor Rigby. A great song and a great recording/performance of that song are very different things.

Feel free to use a kid's toy rather than a Steinway piano if you think it'll result in a better performance/recording of your already great song. If that's the medium that's really going to make your song shine and show it off at its best, go for it.

Perhaps a useful analogy would be learning English. If you learn English from a guy with a deep voice, does that mean you're unable to understand English when spoken by a woman with a high voice? It sounds different, it's a 'performance' you haven't heard before, but it's still English.
