Feedback Delay Network (FDN) Reverb

DSP, Plugin and Host development discussion.

Post

Music Engineer wrote: Tue Mar 19, 2019 4:58 pm
I don't yet understand this matrix multiplication. So basically the first row of the matrix tells which delay lines it gets its input from?
yes, the first row is populated by the coeffs by which all the delaylines feed into the input of the first delayline. conceptually, you would have a matrix of coefficients, say a(i,j) that determines how much of the output of the j-th delayline feeds into the input of the i-th delayline (or maybe the other way around, depending on your conventions). the catch here is that matrix-vector multiplication is in general an O(N^2) process where N is the dimensionality of the vector (which in this case maps to the number of delaylines). that's why it's popular to use very specific matrices, for which this matrix-vector product can be computed more efficiently - for example in O(N*log(N)) in the case of a hadamard matrix. instead of straightforward, naive matrix multiplication, you do a fast hadamard transform. wikipedia even has python code for it:

https://en.wikipedia.org/wiki/Fast_Wals ... _transform

you would need to post-multiply the output by 1/sqrt(N), though - in this case, you get a unitary (i.e. energy preserving) transform. your FDN output signal would not decay away with this. if you feed in a unit impulse, it should produce some sort of white noise. ...a little more and it even blows up, a little less and you get a nice exponential decay. so, in reality, you would use a factor s/sqrt(N), where s is some scaling factor < 1 that determines the decay time. and as a next step, you would throw in damping filters that make the decay time frequency dependent

disclaimer: this is a very simplified explanation - you may actually want a different scaling factor for each delayline, depending on its length, and maybe a different filter, too - not sure anymore - it's long ago that i implemented an fdn
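To make the above concrete, here is a minimal sketch: an in-place fast Walsh-Hadamard transform as on the linked Wikipedia page, used as the O(N log N) feedback mix, with the s/sqrt(N) scaling applied afterwards. The function names and the value s = 0.85 are my own, for illustration only.

```python
import numpy as np

def fwht(x):
    """In-place fast Walsh-Hadamard transform; len(x) must be a power of two."""
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b  # 2-point butterfly
        h *= 2
    return x

def fdn_feedback(delay_outputs, s=0.85):
    """One feedback step: Hadamard mix scaled by s/sqrt(N), with s < 1."""
    y = fwht(np.array(delay_outputs, dtype=float))
    return y * (s / np.sqrt(len(y)))
```

With s = 1 the mix is exactly energy preserving; s < 1 gives the exponential decay, and per-line damping filters would then come on top of this.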
Thanks. I got it to work now. The hardest part was figuring out how to actually construct the feedback matrix if you want to switch the number of delay lines quickly. There's still a lot of tweaking to do, but the echoes are certainly more dense. Yeah, now that you think about it, using the 1/sqrt(N) makes a lot of sense.

Post

2DaT wrote: Wed Mar 20, 2019 5:43 am
mystran wrote: Wed Mar 20, 2019 5:03 am I'd just like to observe that this is also probably one of the easiest to compute matrices that doesn't sound completely terrible (unlike for example the super-cheap O(n) householder reflections).
It's not a "matrix" in a strict sense though. I like to call these operators "matrix-like" objects, and it makes a good abstraction. Basically, anything that can left-multiply a vector is a "matrix-like" object. FFTs, Householder reflections, Givens rotations, LU inverses, permutations, PDE operators, you name it.
Semantics.

Let me rephrase: ... this [the Hadamard matrix] is also a matrix that admits one of the easiest-to-compute multiplication algorithms [the Fast Walsh-Hadamard transform] among those matrices that don't sound completely terrible (unlike for example the Householder reflection matrices that admit an O(n) multiplication algorithm).

Happy now?

Post

HammieTime wrote: Wed Mar 20, 2019 7:45 am Thanks. I got it to work now. The hardest part was figuring out how to actually construct the feedback matrix if you want to switch the number of delay lines quickly. There's still a lot of tweaking to do, but the echoes are certainly more dense. Yeah, now that you think about it, using the 1/sqrt(N) makes a lot of sense.
You really want to avoid actually constructing a feedback matrix at all. Instead you want to take the signal vector and perform some computationally efficient (unitary) linear transformation on it. Every linear transformation is equivalent to a matrix multiplication by some suitable matrix (and you can stick an arbitrary unitary matrix into an FDN and get something that works), but general matrix multiplication is rather inefficient computationally, so you really want to avoid that in practice.
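The claim that an arbitrary unitary matrix "works" (i.e. preserves energy, so the decay is controlled entirely by the gain you apply on top) is easy to check numerically. This sketch deliberately uses the naive O(N^2) product that the post advises against in practice:

```python
import numpy as np

rng = np.random.default_rng(0)

# a random orthogonal matrix, obtained via QR decomposition
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))

x = rng.standard_normal(8)  # vector of delay-line outputs
y = Q @ x                   # naive O(N^2) feedback mix

# the mix preserves energy; a separate gain s < 1 then sets the decay
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
```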

Post

mystran wrote: Wed Mar 20, 2019 7:48 am Semantics.

Let me rephrase: ... this [the Hadamard matrix] is also a matrix that admits one of the easiest-to-compute multiplication algorithms [the Fast Walsh-Hadamard transform] among those matrices that don't sound completely terrible (unlike for example the Householder reflection matrices that admit an O(n) multiplication algorithm).

Happy now?
I don't get why the Householder reflection is terrible. IIRC, it has an even simpler matrix-vector algorithm.

Post

2DaT wrote: Wed Mar 20, 2019 9:24 am I don't get why the Householder reflection is terrible. IIRC, it has an even simpler matrix-vector algorithm.
At least the type often suggested, which can be computed simply as a sum of local and global feedback paths, ends up with the problem that as the number of delay lines increases, the local feedback gets more and more dominant and it sounds more or less like a bunch of parallel comb filters. Even if you throw in a permutation, it doesn't mix (or "diffuse") the signals very well unless the number of delays is very small.
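A sketch of the cheap O(N) Householder feedback being discussed, assuming the common choice v = ones/sqrt(N), so the mix is x minus (2/N) times the global sum. The growing local (diagonal) weight 1 - 2/N is exactly why it tends toward parallel comb filters for large N:

```python
import numpy as np

def householder_feedback(x):
    """Apply the reflection I - (2/N) * ones * ones^T in O(N): each line
    keeps itself (local path, weight 1 - 2/N) minus a share of the global sum."""
    x = np.asarray(x, dtype=float)
    return x - (2.0 / len(x)) * x.sum()
```

For N = 4 the self-feedback weight is only 0.5, but for N = 64 it is 1 - 2/64 ≈ 0.97, so each line mostly feeds back into itself.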

Post

mystran wrote: Wed Mar 20, 2019 7:57 am
HammieTime wrote: Wed Mar 20, 2019 7:45 am Thanks. I got it to work now. The hardest part was figuring out how to actually construct the feedback matrix if you want to switch the number of delay lines quickly. There's still a lot of tweaking to do, but the echoes are certainly more dense. Yeah, now that you think about it, using the 1/sqrt(N) makes a lot of sense.
You really want to avoid actually constructing a feedback matrix at all. Instead you want to take the signal vector and perform some computationally efficient (unitary) linear transformation on it. Every linear transformation is equivalent to a matrix multiplication by some suitable matrix (and you can stick an arbitrary unitary matrix into an FDN and get something that works), but general matrix multiplication is rather inefficient computationally, so you really want to avoid that in practice.
Hello. Yeah, I understand that it's quite heavy. I did some quick cleaning up and worked out how each line's output can be expressed in terms of the previous one, with the following results: y1=x1+x2+x3+x4, thus y2=y1-2*x3-2*x4, and following the same idea for lines 3 and 4 yields: y3=y2+2*x2-2*x3 and y4=y3-2*x2+2*x4.

Is this what you mean by this? I'm a bit lost on the mathematical terminology.
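For reference, the N = 4 case can be written as a two-stage butterfly that shares partial sums (8 additions/subtractions instead of 16 multiply-adds). Note that fully expanding the recurrences for y3 and y4 in the post above gives coefficients of ±3, which cannot occur in a row of a Hadamard matrix, so there appear to be sign slips; the version below (one of several valid row orderings) is consistent:

```python
def mix4(x1, x2, x3, x4):
    """4x4 Hadamard mix computed as two butterfly stages with shared partial sums."""
    a, b = x1 + x2, x1 - x2   # stage 1
    c, d = x3 + x4, x3 - x4
    # stage 2: rows (++++), (++--), (+-+-), (+--+)
    return a + c, a - c, b + d, b - d
```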

Post

Does anybody use conference matrices?

I use them extensively; they provide good scattering and avoid feeding a delay line back into itself.

https://wikimedia.org/api/rest_v1/media ... 1392f9ed4d

and they exist for most even orders (as opposed to Hadamard matrices, which only exist when the order is 1, 2 or a multiple of 4).
See you here and there... Youtube, Google Play, SoundCloud...
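For anyone who wants to experiment, small symmetric conference matrices can be generated with the Paley construction (this helper is my own sketch, not from the thread). It works for a prime q with q mod 4 = 1, giving an order N = q+1 matrix with zero diagonal satisfying C Cᵀ = (N-1) I, so C/sqrt(N-1) is an energy-preserving feedback matrix:

```python
import numpy as np

def paley_conference(q):
    """Symmetric conference matrix of order q+1 (q prime, q % 4 == 1)."""
    residues = {(x * x) % q for x in range(1, q)}   # quadratic residues mod q
    chi = lambda a: 0 if a % q == 0 else (1 if a % q in residues else -1)
    n = q + 1
    C = np.ones((n, n), dtype=int)                  # border of +1s
    C[0, 0] = 0
    for i in range(q):
        for j in range(q):
            C[i + 1, j + 1] = chi(j - i)            # Jacobsthal core
    return C

C = paley_conference(5)  # order-6 conference matrix with zero diagonal
```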

Post

I've used H8 here to great success:
https://homepages.math.uic.edu/~leon/mc ... _codes.pdf
Each column is a delay line.
I apply it to each channel then leak a little between the channels for a stereo effect.

Post

Smashed Transistors wrote: Thu Mar 21, 2019 10:51 pm Does anybody use conference matrices ?
I use them extensively, they provide good scattering and avoid to feedback a delay line to itself.
hmm...interesting:
https://en.wikipedia.org/wiki/Conference_matrix
never heard of them before. do you think it's desirable to avoid self-feedback? i would think the denser the matrix the better, and i generally try to avoid zeros in the matrix - diagonal or not. but my experience is limited (i did not try very much). also - is there a fast (< O(N^2)) algorithm for the matrix-vector product in this case?
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

I have implemented some on the Axoloti and I think they are really worth a try.

I don't know if there is a fast O(N log N) method. But it is possible to design conference matrices that include some optimizable patterns.
D10.PNG
Such matrices are not easy to design by hand above order 6, so I wrote a brute-force Java program to find this one.

https://soundcloud.com/thierry-rocheboi ... atrix-test

Post

Apparently if you have a skew conference matrix C, then C+I=H is a "skew" (i.e. except for the diagonal) Hadamard matrix.

This suggests an algorithm where you construct a suitable Hadamard matrix, hopefully using O(N log N) operations, then zero out the diagonal (and renormalize). No idea if that works in practice, but it seems workable at first glance.

edit: note that while the standard fast Hadamard transform uses [[1,1],[1,-1]], you can also use the same scheme with other rotations/reflections. In particular, [[1,-1],[1,1]] or [[1,1],[-1,1]] will get you the "skew" property.
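The C + I = H relation is easy to sanity-check on a small case; the order-4 skew conference matrix below is my own example, not from the thread. If C is skew (Cᵀ = -C) with C Cᵀ = (N-1) I, then C² = -(N-1) I, so H = C + I satisfies H Hᵀ = (I + C)(I - C) = I - C² = N·I:

```python
import numpy as np

# a skew conference matrix of order 4: zero diagonal, C^T = -C, C C^T = 3I
C = np.array([[ 0,  1,  1,  1],
              [-1,  0,  1, -1],
              [-1, -1,  0,  1],
              [-1,  1, -1,  0]])

H = C + np.eye(4, dtype=int)  # skew Hadamard: all-ones diagonal, H H^T = 4I
```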

Post

yes - that sounds like a workable approach. i can't be bothered to work out the details right now, though. i'm still a bit sceptical whether this *additional* effort (compared to a straight hadamard matrix) is worth spending on actually making the feedback matrix *less* dense. but maybe i'm too focused on density? dunno.

Post

Smashed Transistors wrote: Sat Mar 23, 2019 11:44 am I have implemented some on the Axoloti and I think they are really worth a try.

I don't know if there is a fast O(N log N) method. But it is possible to design conference matrices that include some optimizable patterns.

D10.PNG
Such matrices are not easy to design by hand above order 6, so I wrote a brute-force Java program to find this one.

https://soundcloud.com/thierry-rocheboi ... atrix-test
Could you please help me with an 8x8 conference matrix? I've been googling for one and couldn't find it.
giq

Post

Music Engineer wrote: Sat Mar 23, 2019 10:55 am hmm...interesting:
https://en.wikipedia.org/wiki/Conference_matrix
never heard of them before. do you think it's desirable to avoid self-feedback? i would think the denser the matrix the better, and i generally try to avoid zeros in the matrix - diagonal or not. but my experience is limited (i did not try very much).
I could not find any perceptual advantage to having zeros on the diagonal; however, I have not tried this particular matrix just yet. If we take a standard 8x8 matrix filled with +1/-1, the contribution of self-feedback would only be 1/8th anyway.

Richard
Synapse Audio Software - www.synapse-audio.com

Post

Richard_Synapse wrote: Sat May 29, 2021 10:56 am
Music Engineer wrote: Sat Mar 23, 2019 10:55 am hmm...interesting:
https://en.wikipedia.org/wiki/Conference_matrix
never heard of them before. do you think it's desirable to avoid self-feedback? i would think the denser the matrix the better, and i generally try to avoid zeros in the matrix - diagonal or not. but my experience is limited (i did not try very much).
I could not find any perceptual advantage to having zeros on the diagonal; however, I have not tried this particular matrix just yet. If we take a standard 8x8 matrix filled with +1/-1, the contribution of self-feedback would only be 1/8th anyway.
The self-contribution for an 8x8 lossless Hadamard is 1/sqrt(8) ≈ -9 dB, and there's a theoretical case to be made that avoiding direct self-feedback should reduce short-term correlation, as the shortest feedback path now has to involve at least 2 delays and 2 rounds of "diffusion" (hence twice the decay), but whether or not this is perceptually significant is obviously another question.
