Convolution confuses the h-ll out of me

VST, AU, AAX, CLAP, etc. Plugin Virtual Effects Discussion

Post

I just don't get it. Convolution is fine for reverbs, but not for compression, distortion and modulation effects? But wasn't the Focusrite Liquid Channel built on convolution? And Nebula?

/C
Last edited by DrGonzo on Thu Feb 23, 2012 5:14 pm, edited 1 time in total.
CLUB VICE for ARTURIA PIGMENTS
HARDWARE SAMPLER FANATIC - Akai S1100/S950/Z8 - Casio FZ20m - Emu Emax I - Ensoniq ASR10/EPS

Post

The Focusrite Liquid Channel uses (I believe) dynamic convolution, and Nebula uses Volterra kernels.

Post

It's not fine for reverbs either as far as I'm concerned.

Here's my favorite commentary about convolution used for reverb, by Michael Carnes (Lexicon reverb designer):
http://www.gearslutz.com/board/4075701-post132.html
http://www.gearslutz.com/board/4075956-post140.html
"Music is spiritual. The music business is not." - Claudio Monteverdi

Post

Imagine convolution as a sort of "snapshot" of the system. Simple convolution uses only one "snapshot" that does not change with respect to the incoming signal's properties -- completely different from what, say, a compressor does.
Liquid Mix (dynamic convolution) technology adds another dimension to that: different incoming levels get different "snapshots".
Nebula is something even more advanced.
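The "snapshot" idea can be sketched in a few lines. This is a toy illustration only, not how Liquid Mix actually works internally: `convolve_dynamic` and its `irs_by_level` table are made-up names, and real dynamic convolution products interpolate between measured responses far more carefully.

```python
import numpy as np

def convolve_static(signal, ir):
    # Plain convolution: one fixed "snapshot" (impulse response),
    # applied identically regardless of the input's level.
    return np.convolve(signal, ir)

def convolve_dynamic(signal, irs_by_level):
    # Toy "dynamic convolution": pick a different snapshot per sample
    # based on its instantaneous level, then overlap-add the scaled
    # copies. irs_by_level is a list of (threshold, ir) pairs sorted
    # by ascending threshold; all IRs must share one length.
    ir_len = len(irs_by_level[0][1])
    out = np.zeros(len(signal) + ir_len - 1)
    for n, x in enumerate(signal):
        ir = next((ir for thr, ir in irs_by_level if abs(x) <= thr),
                  irs_by_level[-1][1])
        out[n:n + ir_len] += x * ir
    return out
```

With a single `(threshold, ir)` entry, the "dynamic" version degenerates to plain convolution, which is exactly the point being made above: simple convolution is one snapshot.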

Post

Got it. Thanks for clearing it up for me.

/C

Post

Shy wrote:It's not fine for reverbs either as far as I'm concerned.

Here's my favorite commentary about convolution used for reverb, by Michael Carnes (Lexicon reverb designer):
http://www.gearslutz.com/board/4075701-post132.html
http://www.gearslutz.com/board/4075956-post140.html
A bit of a conflict of interest issue with that commentary, I'd say.

I think convo for reverbs can run in parallel with algo, and I do this sometimes. I tend to find it a bit dry and unrealistic on its own.
"You don’t expect much beyond a gaping, misspelled void when you stare into the cold dark place that is Internet comments."

---Salon on internet trolls attacking Cleveland kidnapping victim Amanda Berry

Post

A.M. Gold wrote:
Shy wrote:It's not fine for reverbs either as far as I'm concerned.

Here's my favorite commentary about convolution used for reverb, by Michael Carnes (Lexicon reverb designer):
http://www.gearslutz.com/board/4075701-post132.html
http://www.gearslutz.com/board/4075956-post140.html
A bit of a conflict of interest issue with that commentary, I'd say.
No, Carnes is 100% right. There is no argument. No conflict of interest. Nothing to see here. Move on.

:D

Barry Blesser has written some arguments against convolution reverbs that are pretty convincing to me. Of course, it is worth noting that Barry Blesser created the algorithmic reverb used in the EMT 250 (which was recently recreated by Universal Audio), and also recently patented a new allpass filter design which is now licensed by Lexicon. However, just because he may have a conflict of interest doesn't mean he is wrong.

Sean Costello

P.S. I, of course, have no conflict of interest, and reside in a realm of pure objectivity and logic.

Post

Shy wrote:It's not fine for reverbs either as far as I'm concerned.

Here's my favorite commentary about convolution used for reverb, by Michael Carnes (Lexicon reverb designer):
http://www.gearslutz.com/board/4075701-post132.html
http://www.gearslutz.com/board/4075956-post140.html
Not true. Try the "Theater of Life" library for Nebula. You can place your instruments accurately, and it's *dynamic*.

http://www.roomhunters.net/joomla/index ... &Itemid=56

Post

there isn't a conflict of interest regarding the basic fact he's stating.

the fact is that an impulse response measured from a single point at a single intensity captures that single point and single intensity only.

an algorithm may produce results that are very good at multiple points and intensities, and you might measure multiple points and intensities from a real space and record them as impulses as well.

never did he say anything except this one fact. he's just stepping in to call "bullshit!" when someone says something that is obviously not wholly true.

if you were to say that an impulse captures a complete space, that simply isn't correct. it captures only one source and one listener position in a fixed configuration within that space.

convolution isn't difficult to understand for those confused. here is an explanation:

step one: take an impulse. this is a series of zeros with a single one in them: 0, 0, 0, 1, 0, 0, 0.
step two: play this impulse through your system and record the result. it could be a filter of any sort with a linear response. now you have recorded the effect applied by that filter.

step three: repeat the action of the filter on each impulse you want to apply the filter to. how do we do this?

it's simple. say you have a wave, something like 9, 12, 4, 51, 5. this is the same as 1, 1, 1, 1, 1, only with a different amplitude for each impulse. so we have 1,0,0,0,0 then 0,1,0,0,0 and so on. we can pick out each one of these impulses.

step three explained: replace each impulse (sample) in the signal with the recorded impulse response from the filter scaled to the same amplitude as the original impulse.

so we get (IR * 9), (IR * 12), (IR * 4), etc.

now we could also have measured what we get if our impulse is 1, 5, 10, 25, 50. rather than using the response for 9 (which we never measured), we could use the response for 10 scaled slightly lower: (IR10 * 0.9) = an approximation of IR9.

i'm not sure what the other method includes, but i can think of many different ways you could measure variations on impulse content (frequency bands for example, or phase) and put those back in place of basic impulses from your signal after it has been filtered to bring out that data... but that all gets way too complicated.

the other methods, they might be something simple instead, i'm not sure.

that's convolution explained though!
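the three steps above can be sketched directly. a minimal sketch: `convolve_by_impulses` is a made-up name, and real convolvers use FFT-based methods for speed rather than this direct overlap-add, but the result is the same.

```python
import numpy as np

def convolve_by_impulses(signal, ir):
    # step three from above: replace each sample (a scaled unit
    # impulse) with a copy of the recorded impulse response scaled
    # to that amplitude, then sum the overlapping copies.
    out = np.zeros(len(signal) + len(ir) - 1)
    for n, amplitude in enumerate(signal):
        out[n:n + len(ir)] += amplitude * ir
    return out

sig = np.array([9.0, 12.0, 4.0, 51.0, 5.0])  # the example wave from the post
ir = np.array([1.0, 0.5, 0.25])              # a made-up "recorded" response

# this matches the textbook definition of convolution:
assert np.allclose(convolve_by_impulses(sig, ir), np.convolve(sig, ir))
```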
Free plug-ins for Windows, MacOS and Linux. Xhip Synthesizer v8.0 and Xhip Effects Bundle v6.7.
The coder's credo: We believe our work is neither clever nor difficult; it is done because we thought it would be easy.
Work less; get more done.

Post

the difficult thing beyond those simple methods is this: how do you make the impulses of different amplitudes and phases interact as they do in the system they were measured from?

an example of a non-linear system with interaction: we should all have used an envelope follower at some point. we should all know that if you play two notes rather than one, you get a unique interaction of those two notes. the result is not the same as if you had played each note individually into the follower, then mixed the outputs together.

an impulse response measurement is going to be a simple recording of "one note". the real problem is how can we reproduce the effect we get when you play two notes at once?

one way would be to measure every possible combination of impulses. the size of the data would become impossibly large. it would also introduce enormous latency and processing cost.

also: an explanation of non-linear. a better term to use is a system with hysteresis.

http://en.wikipedia.org/wiki/Hysteresis
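the two-notes point can be demonstrated directly: convolution obeys superposition (processing a mix equals mixing the processed parts), while a follower does not. a minimal sketch; `envelope_follower` here is a made-up one-pole peak follower, not any particular product.

```python
import numpy as np

def envelope_follower(x, decay=0.5):
    # toy peak follower: tracks the absolute peak with exponential
    # decay. nonlinear, so superposition fails.
    env = np.zeros_like(x)
    state = 0.0
    for n, v in enumerate(x):
        state = max(abs(v), state * decay)
        env[n] = state
    return env

a = np.array([0.0, 1.0, 0.0, 0.0])   # "note" one
b = np.array([0.0, 0.0, 0.0, 1.0])   # "note" two
ir = np.array([1.0, 0.6, 0.3])       # a made-up impulse response

# linear: convolving the mix equals mixing the convolved parts.
assert np.allclose(np.convolve(a + b, ir),
                   np.convolve(a, ir) + np.convolve(b, ir))

# nonlinear: the follower's response to the mix is NOT the sum of
# its responses to each note played individually.
assert not np.allclose(envelope_follower(a + b),
                       envelope_follower(a) + envelope_follower(b))
```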

Post

A.M. Gold wrote:
Shy wrote:It's not fine for reverbs either as far as I'm concerned.

Here's my favorite commentary about convolution used for reverb, by Michael Carnes (Lexicon reverb designer):
http://www.gearslutz.com/board/4075701-post132.html
http://www.gearslutz.com/board/4075956-post140.html
A bit of a conflict of interest issue with that commentary, I'd say.

I think convo for reverbs can run in parallel with algo and I do this sometimes. I tend to find it a bit dry and unrealistic on its own.
Weirdly enough, I tend to feel the same. Thinking back over all my years of making music, I'm not even sure I've ever used a convolution reverb in an actual production. I always tend to find favourite algos (EOS has been my favourite for the last two years) and stick with them. I bought NI Reflektor and haven't used it for more than ten minutes or so.

Weird.

Post

aciddose wrote:there isn't a conflict of interest regarding the basic fact he's stating.

the fact that a convolution measured from a single point at a single intensity measures that single point and single intensity only.

an algorithm may produce results that are very good at multiple points and intensities, and you might measure multiple points and intensities from a real space and record them as impulses as well.

aciddose, he said a different thing
As I said before, a real orchestra is spread out in space. So is the audience. The impulse response from the second clarinet to seat 7C is different than the impulse response from the first clarinet to the same seat. A convolver effectively squishes the entire orchestra into a couple of points, quite accurately mimicking descent into a black hole.
which is misleading; you could use a simple convolver and choose a different impulse response for each clarinet, so you mimic the exact position that way. Many IR libraries feature different positions. I remember even free libraries from noisevault.com.

About what you are saying
aciddose wrote: an algorithm may produce results that are very good at multiple points and intensities, and you might measure multiple points and intensities from a real space and record them as impulses as well.
Nebula, which is partly based on nonlinear convolution, can do that, exactly like the Liquid Channel does, and I think a couple of convolvers can too, for example Reverberate from LiquidSonics.

And again
gearslutz wrote: And as Barry Blesser has pointed out in a paper (don't know if he published it), the impulse response of a space differs, depending on signal strength.
is still misleading; Nebula, Liquid Mix, Sintefex and many others can do that. I pointed to a library for Nebula which features:

1) dynamic harmonic distortion
2) dynamic convolution
3) position of the virtual speakers

and that's better than what maybe even a classic algorithmic reverb does. And Nebula is considered to be based on "convolution".
Again
gearslutz wrote: None of the convolvers that I'm aware of take this point under consideration.
and that's misleading. By 2009, Reverberate, Nebula, Liquid Channel, Liquid Mix and Sintefex had all been released.

Post

There's no convolver ("dynamic" or not) that provides different responses for instruments that are playing together, nor for their different and/or varying intensities.

Even when applied to a single, "monophonic" instrument, the impulse response of "dynamic convolvers" is hardly "dynamic" compared to a real or algorithmic reverb. It also depends on varying and "interpolating" between different static impulse responses, so you never get an "accurate" response, just a weird-sounding "dynamic snapshot" that isn't in any way more "accurate" than an algorithmic reverb, and I'd say it's even worse than a single static impulse response.

To get any kind of "randomness" in the reverberation (instead of the exact same impulse response used continuously), which exists naturally in any reverb and isn't a ridiculous static snapshot, you also have to vary between impulse responses, and you can't do that in any way that avoids totally weird results. This may work satisfyingly with some very short, millisecond-length impulse responses; it doesn't work with reverb. You can argue with that if you want, but I'd say the results just speak for themselves.

Calling Michael's comments misleading doesn't give you any credibility at all. He obviously never intended to mislead, nor said anything misleading, and coming mainly from the classical music field, he was talking about real people playing real instruments, so recording a single instrument at a time is completely out of the question in that context.

Post

Zaphod (giancarlo) wrote:acidose, he said a different thing
well, when i read it the result i get is exactly the same. :shrug:

Post

Shy wrote:
Calling Michael's comments misleading doesn't give you any credibility at all. He obviously never intended to mislead nor said anything misleading, and coming mainly from the classical music field, he was talking about real people playing real instruments, so playing a single instrument each time in that context is completely out of the question.
I'm talking about an action. That couple of posts is misleading, IMHO, and I explained why I think so. I'm not talking about the person who posted them, or about you. I had even written something stronger, but since English is not my primary language I toned it down a bit, to be sure about the message. Let me try to explain it better: someone could do something wrong, intentionally or not; that doesn't tell you anything about him. It could be a marketing approach, or it could be a mistake. That is the spirit of a discussion, though. "Misleading" in my poor English means "it leads to confusion".
When you talk about me using the word "credibility", you are moving things to a different level. So I'm out.
