## Feedback Delay Network (FDN) Reverb

DSP, Plug-in and Host development discussion.
HammieTime
KVRer
13 posts since 19 Mar, 2019
Music Engineer wrote:
Tue Mar 19, 2019 8:58 am
I don't yet understand this matrix multiplication. So basically the first row of the matrix tells which lines the first delay line gets its input from?
yes, the first row holds the coefficients by which all the delaylines feed into the input of the first delayline. conceptually, you have a matrix of coefficients, say a(i,j), that determines how much of the output of the j-th delayline feeds into the input of the i-th delayline (or maybe the other way around, depending on your conventions). the catch is that matrix-vector multiplication is in general an O(N^2) process, where N is the dimensionality of the vector (which here maps to the number of delaylines). that's why it's popular to use very specific matrices for which the matrix-vector product can be computed more efficiently - for example in O(N*log(N)) in the case of a hadamard matrix. instead of a straightforward, naive matrix multiplication, you do a fast hadamard transform. wikipedia even has python code for it:

https://en.wikipedia.org/wiki/Fast_Wals ... _transform

you would need to scale the output by 1/sqrt(N), though - with that, you get a unitary (i.e. energy preserving) transform, and your FDN output signal would never decay away. if you feed in a unit impulse, it should produce some sort of white noise. a little more and it even blows up, a little less and you get a nice exponential decay. so in practice, you would use a factor s/sqrt(N), where s is some scaling factor < 1 that determines the decay time. as a next step, you would throw in damping filters that make the decay time frequency dependent
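in code, the core of it looks something like this (a quick radix-2 sketch of my own, not the wikipedia version - the names are made up, and N must be a power of two):

```python
import math

def fwht(x):
    # in-place fast walsh-hadamard transform, O(N*log(N)) butterflies;
    # len(x) must be a power of two
    n = len(x)
    h = 1
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                a, b = x[j], x[j + h]
                x[j], x[j + h] = a + b, a - b
        h *= 2
    return x

def feedback_mix(taps, s=0.999):
    # scale by s/sqrt(N): s = 1 keeps the loop lossless (unitary),
    # s < 1 gives an exponential decay
    g = s / math.sqrt(len(taps))
    return [g * v for v in fwht(list(taps))]
```

with s = 1 the output vector has exactly the same energy as the input vector, which is the unitarity mentioned above.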

disclaimer: this is a very simplified explanation - you may actually want a different scaling factor for each delayline, depending on its length, and maybe a different filter too - not sure anymore, it's been a long time since i implemented an fdn
Thanks, I got it to work now. The hardest part was figuring out how to construct the feedback matrix if you want to switch the number of delay lines quickly. There's still a lot of tweaking to do, but the echoes are certainly more dense. Yeah, now that I think about it, using 1/sqrt(N) makes a lot of sense.

mystran
KVRAF
5248 posts since 12 Feb, 2006 from Helsinki, Finland
2DaT wrote:
Tue Mar 19, 2019 9:43 pm
mystran wrote:
Tue Mar 19, 2019 9:03 pm
I'd just like to observe that this is also probably one of the easiest to compute matrices that doesn't sound completely terrible (unlike for example the super-cheap O(n) householder reflections).
It's not a "matrix" in a strict sense though. I like to call these operators "matrix-like" objects, and it makes a good abstraction. Basically, anything that can left-multiply a vector is a "matrix-like" object. FFTs, Householder reflections, Givens rotations, LU inverses, permutations, PDE operators, you name it.
Semantics.

Let me rephrase: ... this [the Hadamard matrix] is also a matrix that admits one of the easiest-to-compute multiplication algorithms [the fast Walsh-Hadamard transform] among those matrices that don't sound completely terrible (unlike, for example, the Householder reflection matrices, which admit an O(n) multiplication algorithm).

Happy now?
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

mystran
KVRAF
5248 posts since 12 Feb, 2006 from Helsinki, Finland
HammieTime wrote:
Tue Mar 19, 2019 11:45 pm
Thanks, I got it to work now. The hardest part was figuring out how to construct the feedback matrix if you want to switch the number of delay lines quickly. There's still a lot of tweaking to do, but the echoes are certainly more dense. Yeah, now that I think about it, using 1/sqrt(N) makes a lot of sense.
You really want to avoid actually constructing a feedback matrix at all. Instead you want to take the signal vector and perform some computationally efficient (unitary) linear transformation on it. Every linear transformation is equivalent to a matrix multiplication by some suitable matrix (and you can stick an arbitrary unitary matrix into an FDN and get something that works), but general matrix multiplication is rather inefficient computationally, so you really want to avoid that in practice.
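As a sketch (all names hypothetical, with an identity mix standing in for whatever fast unitary transform you actually use), the loop looks something like:

```python
class MatrixFreeFDN:
    """Toy FDN sketch: the feedback "matrix" is just a function applied
    to the vector of delay-line taps, never a stored NxN array."""

    def __init__(self, lengths, mix, gain):
        self.lines = [[0.0] * n for n in lengths]  # circular buffers
        self.pos = [0] * len(lengths)
        self.mix = mix    # any cheap unitary transform on the tap vector
        self.gain = gain  # overall feedback gain < 1 for a decaying tail

    def tick(self, x):
        taps = [line[p] for line, p in zip(self.lines, self.pos)]
        mixed = self.mix(taps)
        for k, line in enumerate(self.lines):
            p = self.pos[k]
            line[p] = x + self.gain * mixed[k]
            self.pos[k] = (p + 1) % len(line)
        return sum(taps)
```

The point is that `mix` is a routine (fast Hadamard, Householder, whatever), so the per-sample cost is whatever that routine costs, not O(N^2).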

2DaT
KVRist
359 posts since 21 Jun, 2013
mystran wrote:
Tue Mar 19, 2019 11:48 pm
Semantics.

Let me rephrase: ... this [the Hadamard matrix] is also a matrix that admits one of the easiest-to-compute multiplication algorithms [the fast Walsh-Hadamard transform] among those matrices that don't sound completely terrible (unlike, for example, the Householder reflection matrices, which admit an O(n) multiplication algorithm).

Happy now?
I don't get why Householder reflection is terrible. IIRC, it has an even simpler algorithm (Matrix-vector).

mystran
KVRAF
5248 posts since 12 Feb, 2006 from Helsinki, Finland
2DaT wrote:
Wed Mar 20, 2019 1:24 am
I don't get why Householder reflection is terrible. IIRC, it has an even simpler algorithm (Matrix-vector).
At least the type often suggested that can be computed simply as a sum of local and global feedback paths ends up with the problem that as the number of delay lines increases, the local feedback gets more and more dominant and it sounds more or less like a bunch of parallel comb filters. Even if you throw in a permutation, it doesn't mix (or "diffuse") the signals very well, unless the number of delays is very small.
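For reference, that O(n) form is just (a sketch in my own notation):

```python
def householder_feedback(taps):
    """Householder reflection H = I - (2/N) * ones * ones^T in O(N):
    each line keeps its own signal (local path) minus a shared,
    scaled sum of all the lines (global path)."""
    g = 2.0 * sum(taps) / len(taps)
    return [v - g for v in taps]
```

It is unitary, but note the cross-coupling weight is only 2/N while the diagonal weight tends to 1 as N grows, which is exactly the "bunch of parallel comb filters" problem described above.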

HammieTime
KVRer
13 posts since 19 Mar, 2019
mystran wrote:
Tue Mar 19, 2019 11:57 pm
HammieTime wrote:
Tue Mar 19, 2019 11:45 pm
Thanks, I got it to work now. The hardest part was figuring out how to construct the feedback matrix if you want to switch the number of delay lines quickly. There's still a lot of tweaking to do, but the echoes are certainly more dense. Yeah, now that I think about it, using 1/sqrt(N) makes a lot of sense.
You really want to avoid actually constructing a feedback matrix at all. Instead you want to take the signal vector and perform some computationally efficient (unitary) linear transformation on it. Every linear transformation is equivalent to a matrix multiplication by some suitable matrix (and you can stick an arbitrary unitary matrix into an FDN and get something that works), but general matrix multiplication is rather inefficient computationally, so you really want to avoid that in practice.
Hello. Yeah, I understand that it's quite heavy. I did some quick cleaning up and worked out how each line's output can be expressed in terms of the previous one, and got the following: y1 = x1 + x2 + x3 + x4, then y2 = y1 - 2*x2 - 2*x4, and following the same idea for lines 3 and 4: y3 = y2 + 2*x2 - 2*x3 and y4 = y3 - 2*x2 + 2*x4.

Is this what you mean by this? I'm a bit lost on the mathematical terminology.
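Edit: expressed as butterflies instead of recurrences, I think the same 4-line mix would be (my own unnormalized sketch):

```python
def hadamard4(x1, x2, x3, x4):
    # 4-point fast hadamard butterfly: 8 additions total,
    # versus 12 additions (plus the multiplies) for the naive 4x4 product
    a, b = x1 + x2, x1 - x2
    c, d = x3 + x4, x3 - x4
    return a + c, b + d, a - c, b - d
```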

Smashed Transistors
KVRist
141 posts since 10 Oct, 2014
Does anybody use conference matrices?

I use them extensively; they provide good scattering and avoid feeding a delay line back into itself.

https://wikimedia.org/api/rest_v1/media ... 1392f9ed4d

and they exist for many even orders (whereas Hadamard matrices, beyond orders 1 and 2, can only exist at multiples of 4).
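For example the symmetric order-6 one from the Paley construction, as a python sketch (not my actual implementation):

```python
import math

# symmetric conference matrix of order 6 (Paley construction over GF(5)):
# zero diagonal, +-1 elsewhere, and C * C^T = 5 * I, so C/sqrt(5) is unitary
C6 = [
    [ 0,  1,  1,  1,  1,  1],
    [ 1,  0,  1, -1, -1,  1],
    [ 1,  1,  0,  1, -1, -1],
    [ 1, -1,  1,  0,  1, -1],
    [ 1, -1, -1,  1,  0,  1],
    [ 1,  1, -1, -1,  1,  0],
]

def conference_feedback(taps):
    # energy preserving, and the zero diagonal means
    # no delay line feeds back into itself
    g = 1.0 / math.sqrt(5.0)
    return [g * sum(c * v for c, v in zip(row, taps)) for row in C6]
```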

quikquak
KVRian
522 posts since 6 Aug, 2005 from England
I've used H8 here to great success:
https://homepages.math.uic.edu/~leon/mc ... _codes.pdf
Each column is a delay line.
I apply it to each channel then leak a little between the channels for a stereo effect.
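The leak between channels is just a small rotation of the two wet signals (a sketch; the angle is made up), so no energy appears or disappears:

```python
import math

def stereo_leak(left, right, angle=0.1):
    # cross-feed the two per-channel reverbs with a small rotation;
    # a rotation is orthogonal, so the total energy is preserved
    c, s = math.cos(angle), math.sin(angle)
    return c * left + s * right, c * right - s * left
```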

Music Engineer
KVRAF
3801 posts since 8 Mar, 2004 from Berlin, Germany
Smashed Transistors wrote:
Thu Mar 21, 2019 2:51 pm
Does anybody use conference matrices ?
I use them extensively, they provide good scattering and avoid to feedback a delay line to itself.
hmm...interesting:
https://en.wikipedia.org/wiki/Conference_matrix
never heard of them before. do you think it's desirable to avoid self-feedback? i would think the denser the matrix, the better, and i'd generally try to avoid zeros in the matrix - diagonal or not. but my experience is limited (i haven't tried very much). also - is there a fast (< O(N^2)) algorithm for the matrix-vector product in this case?

Smashed Transistors
KVRist
141 posts since 10 Oct, 2014
I have implemented some on the Axoloti and I think they are really worth a try.

I don't know if there is a fast O(N log N) method. But it is possible to design conference matrices that include some optimizable patterns.
D10.PNG
Such matrices are not easy to design by hand above order 6, so I wrote a brute-force Java program to find this one.
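The core of such a search is just a predicate like this (a python sketch of the idea; the real program is in Java):

```python
def is_conference(M):
    # defining properties of a conference matrix: zero diagonal,
    # +-1 everywhere else, and M * M^T = (n - 1) * I
    n = len(M)
    if any(M[i][i] != 0 for i in range(n)):
        return False
    if any(M[i][j] not in (-1, 1)
           for i in range(n) for j in range(n) if i != j):
        return False
    return all(sum(M[i][k] * M[j][k] for k in range(n))
               == ((n - 1) if i == j else 0)
               for i in range(n) for j in range(n))
```

Then enumerate candidate sign patterns (restricting to circulant or otherwise structured cores keeps the search space manageable) and keep the ones that pass.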

https://soundcloud.com/thierry-rocheboi ... atrix-test

mystran
KVRAF
5248 posts since 12 Feb, 2006 from Helsinki, Finland
Apparently, if you have a skew conference matrix C, then C + I = H is a "skew" Hadamard matrix (i.e. skew-symmetric except for the diagonal).

This suggests an algorithm where you construct a suitable Hadamard matrix, hopefully using O(N log N) operations, then zero out the diagonal (and renormalize). No idea if that works in practice, but it seems workable at first glance.
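As a sanity check of that relationship at order 4 (a hypothetical sketch, done as explicit matrices rather than a fast transform):

```python
import math

# skew conference matrix of order 4: zero diagonal, C^T = -C, C * C^T = 3I
C4 = [
    [ 0,  1,  1,  1],
    [-1,  0, -1,  1],
    [-1,  1,  0, -1],
    [-1, -1,  1,  0],
]

def apply_scaled(M, x, g):
    return [g * sum(m * v for m, v in zip(row, x)) for row in M]

def skew_hadamard_mix(x):
    # H = C4 + I is a Hadamard matrix, so H/2 is unitary (with self-feedback)
    H = [[c + (1 if i == j else 0) for j, c in enumerate(row)]
         for i, row in enumerate(C4)]
    return apply_scaled(H, x, 0.5)

def zero_diag_mix(x):
    # drop the diagonal again and renormalize: C4/sqrt(3) is also unitary
    return apply_scaled(C4, x, 1.0 / math.sqrt(3.0))
```

Both mixes preserve energy; only the second has the zero diagonal, i.e. no self-feedback.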

edit: note that while the standard fast Hadamard transform uses the kernel [[1,1],[1,-1]], you can also use the same scheme with other rotations/reflections. In particular, [[1,-1],[1,1]] or [[1,1],[-1,1]] will get you the "skew" property.

Music Engineer
KVRAF
3801 posts since 8 Mar, 2004 from Berlin, Germany
yes - that sounds like a workable approach. i can't be bothered to work out the details right now, though. i'm still a bit sceptical whether this *additional* effort (compared to a straight hadamard matrix) is worth spending just to make the feedback matrix *less* dense. but maybe i'm too focused on density? dunno.