sircuit wrote: It can be listened to, running audio through it, but half of the forum has tinnitus apparently.

Bah, listening is so passé...
We've got tools. Measurement tools.
bbtr wrote: Just took a peek at the manual: for linear analysis Plugin Doctor uses Delta (first sample 1, all others 0) and Random (noise). For THD it uses a sine wave; for IMD, two sine waves.

That sounds pretty much like what I had thought the test signals would be to give results like the ones posted – thank you. I was going to look into Plugin Doctor myself but haven't gotten around to it yet. Those are valid approaches for the things it is designed to test – but not for INTENSITY, or specifically the main algorithm of INTENSITY. If using ONLY the SATURATE section of INTENSITY, then the THD analysis would probably give valid results for that section (and yes, the saturate has THD of a very nice flava, and potentially looooots of it, too).
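As a concrete illustration of the sine-based THD measurement described above – not Plugin Doctor's actual implementation, and a toy `tanh` drive standing in for any real saturator – here is a minimal pure-Python sketch that estimates THD by projecting the output onto the fundamental and its harmonics (all names are illustrative):

```python
import math

def thd(signal, fs, f0, n_harmonics=5):
    """Estimate THD: ratio of harmonic energy to the fundamental."""
    N = len(signal)
    def amp(f):
        # single-bin DFT magnitude at frequency f (valid here because the
        # signal spans an integer number of cycles, so there is no leakage)
        re = sum(s * math.cos(2 * math.pi * f * n / fs) for n, s in enumerate(signal))
        im = sum(s * math.sin(2 * math.pi * f * n / fs) for n, s in enumerate(signal))
        return 2.0 * math.sqrt(re * re + im * im) / N
    fundamental = amp(f0)
    harmonics = math.sqrt(sum(amp(f0 * k) ** 2 for k in range(2, 2 + n_harmonics)))
    return harmonics / fundamental

fs, f0 = 48000, 1000.0
N = 4800  # exactly 100 cycles of a 1 kHz sine at 48 kHz
clean = [math.sin(2 * math.pi * f0 * n / fs) for n in range(N)]
driven = [math.tanh(2.0 * s) for s in clean]  # toy saturator, odd harmonics only
print(f"THD of tanh drive: {thd(driven, fs, f0):.3f}")
```

A pure sine measures as (numerically) zero THD, while the saturated version shows substantial odd-harmonic content – which is why this test is meaningful for a saturator stage but says nothing about a non-distortion process like INTENSITY's main algorithm.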
mcbpete wrote: Would you say that the best results of the plugin are when Intensity is applied to:
a) Individual tracks
b) Sub-groups (e.g. a grouped collection of drum tracks)
c) Final 2-bus/mixbus
Also, would this be applied before the final mastering or as part of the final mastering?
Cheers

From my experience so far, the first thing is that the plugin is very unpredictable. I mean, you turn it on and start dialing Intensity (until you hear "clarity", "details", something), but you have no idea whether, before you reach what you expect to hear, it will mush transients, bring out unwanted reverb tails (harshness), distort the kick, or introduce other artefacts.
noiseboyuk wrote: It seems like a pretty handy plugin to me in that it's a great time-and-effort-saving way to make stuff sound better (but not always, depending exactly on what the stuff is).

I can't really see it as "a great time-and-effort saver" as long as I have no clue how much I have to blindly tweak the input (and subsequently adjust the output level), the intensity, the dry/wet, and the bias, only to find out that the sound is not improving.
noiseboyuk wrote: Where I'm much more dubious is that the marketing around this leads you to believe that something amazing and revolutionary is going on when it really isn't, and the occasional suggestion that you can make stuff sound louder without affecting the LUFS, which I think is completely wrong.

TDR put an awesome feature in all their plugins called "equal loudness bypass": they permanently analyze the input and output signals in the background. Every time you adjust a setting, an LED goes red to signal that analysis is being performed and that the bypass will not reflect the loudness. Within seconds, however, the LED settles to green, and as long as it stays green you can bypass knowing that the input and output loudness match. If you change parameters, the LED goes red for a few seconds, and so on.
To that end, I'd definitely be in favour of real volume-compensation monitoring. Obviously it has to occur at the end of the signal chain, and it really shouldn't be too hard: just measure average loudness bypassed and non-bypassed, and compensate the non-bypassed signal accordingly. This should be understood as a monitoring feature for quick A/B-ing. I don't know how Ozone does it, but it really does work there – do whatever they do!
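The compensation idea above can be sketched in a few lines. This uses plain RMS as a stand-in for a proper LUFS measurement, and the function names are hypothetical – neither TDR's nor iZotope's actual implementation is public:

```python
import math

def rms_db(signal):
    """RMS level in dB (a crude stand-in for a real LUFS measurement)."""
    mean_square = sum(s * s for s in signal) / len(signal)
    return 10.0 * math.log10(mean_square + 1e-20)  # tiny offset avoids log(0)

def match_gain(processed, bypassed):
    """Linear gain that brings `processed` back to the loudness of `bypassed`."""
    return 10.0 ** ((rms_db(bypassed) - rms_db(processed)) / 20.0)

# toy example: the "plugin" simply boosts by 6 dB
dry = [math.sin(0.01 * n) for n in range(10000)]
wet = [2.0 * s for s in dry]
g = match_gain(wet, dry)
compensated = [g * s for s in wet]
print(f"compensation gain: {g:.3f}")  # ≈ 0.5, i.e. -6 dB
```

With the loudness matched like this, an A/B comparison tells you about tonal and dynamic changes rather than the louder-sounds-better effect.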
zynaptiq wrote: ...which uses technologies also used in some types of facial recognition – assumes that there ARE broad strokes as well as details in the input.

INTENSITY uses facial recognition? What does the plugin 'see'? Does it recognize patterns in the waveform or the spectral graph?
mcbpete wrote: Would you say that the best results of the plugin are when Intensity is applied to:
a) Individual tracks
b) Sub-groups (e.g. a grouped collection of drum tracks)
c) Final 2-bus/mixbus
Also, would this be applied before the final mastering or as part of the final mastering?
Cheers

It can work for any or all of those.
kiezum wrote:
zynaptiq wrote: ...which uses technologies also used in some types of facial recognition – assumes that there ARE broad strokes as well as details in the input.
INTENSITY uses facial recognition? What does the plugin 'see'? Does it recognize patterns in the waveform or the spectral graph?

What it "sees" are "big structures" and "small structures" in a special type of time-frequency domain representation; similar to a spectral graph in that it has the dimensions time, frequency, and level, but not based on an FFT or anything similar.
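Zynaptiq hasn't published the actual transform, but the "amplify the small structures" idea can be illustrated with a deliberately crude 1-D analogy to unsharp masking: smooth the signal to get the "big structures", treat the residual as the "small structures", and boost the residual. The moving-average smoother and all names here are assumptions for illustration only, not the plugin's method:

```python
def moving_average(x, radius):
    """Crude smoother: keeps the 'big structures' (analogy only, not Zynaptiq's transform)."""
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        out.append(sum(x[lo:hi]) / (hi - lo))
    return out

def intensify_sketch(x, amount=1.5, radius=4):
    """Unsharp-mask analogy: re-add the residual 'small structures' with extra gain."""
    big = moving_average(x, radius)
    return [b + amount * (s - b) for s, b in zip(x, big)]

x = [0, 0, 0, 1, 1, 1, 5, 1, 1, 1, 0, 0, 0]  # a 'transient' sitting on a plateau
y = intensify_sketch(x, amount=2.0)
```

The spike (the fine detail) comes out larger than in the input while the plateau barely moves – the same separation effect sharpening gives an image, here applied along one axis instead of two.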
Why not use the technologies in voice recognition?
sircuit wrote: I can't really see it "a great time-and-effort-saving" as long as I have no clue how much I have to blindly tweak the input (and subsequently adjust the output level) and the intensity and the dry/wet and the bias only to find out that the sound is not improving.

Sure, if it's not improving then no. But I've already heard enough examples to conclude that this will work very well in many scenarios. I think it will inherently be less interesting to those who like to get forensic with mixes and masters and have the skillset to go with it, and more appealing to those who either need stuff done in a hurry or lack specialist knowledge/experience.
noiseboyuk wrote: ...and more appealing to those who either need stuff done in a hurry or lack specialist knowledge / experience.

That describes me very well...
zynaptiq wrote: We amplify the "small structures" for the INTENSITY effect – it's similar to some image sharpeners.

Isn't sharpening an image a matter of enhancing the edges, thus making the image "sharp"? For a hair to be visible, the effect darkens the hair while simultaneously lightening the surrounding area. A sharp image has more separation visually.
© KVR Audio, Inc. 2000-2024