Splitting Bands with Linkwitz-Riley

DSP, Plugin and Host development discussion.

Post

Z1202 wrote:The question would be, is there any benefit of this higher-order SVF (I believe also referred to as the canonical controllable form of a state-space system) compared to simply cascading multiple 2nd-order ones? Especially if the 2nd-order coefficients are already known and there is no need to even factor the polynomials. Particularly, IIRC some people reported precision/noise issues being worse with higher-order SVFs, not unlike the direct forms.

Edit: ok, if they are modulated, you can save a division, since there is only one feedback path. But besides that?

Edit2: actually instead of dividing twice in the case of representing a 4th order filter as a product of two 2nd order ones, you could reciprocate the product of the denominators and multiply back by each of them in turn ;)
There may be some benefit even for LR4, but the main difference is for LR8. You only need 2xSVF4pole instead of 5xSVF2pole, so you save one SVF2Pole worth of computation. The internal computation of the solution has more terms for the SVF4Pole and appears to be more parallel in nature, but I've not done any rigorous profiling to see if this would make it more efficient.

I was mainly interested in the SVF4Pole for an analog filter prototype I was considering for a Eurorack module. I thought it would be cool to have a 4-pole SVF that you could use either as a regular resonant filter or as a voltage-controlled x-over :)
The Glue, The Drop - www.cytomic.com

Post

Z1202 wrote: Edit2: actually instead of dividing twice in the case of representing a 4th order filter as a product of two 2nd order ones, you could reciprocate the product of the denominators and multiply back by each of them in turn ;)
You can always solve an arbitrary order linear system with a single division using Cramer's rule, by computing the reciprocal of the determinant only once and you can always precompute this if the coefficients don't change. Usually you don't want to use this approach, because just about anything is more stable numerically (and LU is potentially faster as well, especially in the constant coefficient case where you can store LDU with reciprocal diagonal), but for some systems it works well enough.
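The single-division idea is easy to see in the 2x2 case (a minimal sketch of my own, not from the thread): compute the reciprocal of the determinant once, and every unknown then costs only multiplies and adds. For constant coefficients, inv_det can be precomputed and reused.

```python
def solve2_cramer(a11, a12, a21, a22, b1, b2):
    """Solve [[a11, a12], [a21, a22]] @ [x1, x2] = [b1, b2]
    via Cramer's rule with a single division."""
    inv_det = 1.0 / (a11 * a22 - a12 * a21)   # the only division
    x1 = (b1 * a22 - b2 * a12) * inv_det      # det with b replacing column 1
    x2 = (a11 * b2 - a21 * b1) * inv_det      # det with b replacing column 2
    return x1, x2

# example system: 3 x1 + x2 = 5, x1 + 2 x2 = 5  ->  x1 = 1, x2 = 2
x1, x2 = solve2_cramer(3.0, 1.0, 1.0, 2.0, 5.0, 5.0)
```

As noted above, LU is usually preferable for stability; this only illustrates the single-division point.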

Post

mystran wrote:
Z1202 wrote: Edit2: actually instead of dividing twice in the case of representing a 4th order filter as a product of two 2nd order ones, you could reciprocate the product of the denominators and multiply back by each of them in turn ;)
You can always solve an arbitrary order linear system with a single division using Cramer's rule, by computing the reciprocal of the determinant only once and you can always precompute this if the coefficients don't change. Usually you don't want to use this approach, because just about anything is more stable numerically (and LU is potentially faster as well, especially in the constant coefficient case where you can store LDU with reciprocal diagonal), but for some systems it works well enough.
Yes, I'm just usually trying to avoid Cramer's rule. Even before numeric stability concerns, there's its tendency to generate bulky expressions. So usually I'm trying to solve the system by hand, using the specific knowledge about the system's structure, rather than attempting to solve a flat linear system of equations. Not sure if this always results in the most efficient solution, would be interesting to compare.

Post

As for the generic pole and zero positions, I have been giving it some thought. So far quickly off the top of my head, without verification:

Given a filter of the form (P and Q are real polynomials of s)
H(s) = P(s)/Q(s) * P(-s)/Q(s) = P(s)P(-s) / Q^2(s)
(which includes squared Butterworth lowpasses or high-shelves arising out of Max M.'s naive lowpasses)
we can construct an allpass
AP(s) = Q(-s) / Q(s) = Q(s)Q(-s) / Q^2(s)
having the same phase response as H(s), since P(s)P(-s) and Q(s)Q(-s) have zero phase response.
The other crossover band is thus
G(s) = AP(s) - H(s)

Note that Q(s) doesn't have to be a polynomial, but can be a rational function with poles on the imaginary axis. In this case Q(-s) has the same poles and they cancel in AP(s) (thinking elliptic or Chebyshev type II filters).

In case of Butterworth L-R: P(s)=1 and |Q(jw)|^2 = 1+w^(2N)

Edit: probably it's easier for an elliptic/Chebyshev2 filter to put the zeros into P(s), since P(s)=P(-s) in this case.
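A quick numeric check of the construction above (my own sketch, not from the thread), using the 2nd-order Butterworth polynomial for Q with P(s)=1, i.e. an LR4 lowpass:

```python
import numpy as np

w = np.logspace(-2, 2, 500)
s = 1j * w
Q = lambda x: x**2 + np.sqrt(2) * x + 1   # 2nd-order Butterworth polynomial

H  = 1.0 / Q(s)**2          # H(s) = P(s)P(-s)/Q^2(s) with P = 1
AP = Q(-s) / Q(s)           # allpass with the same phase response as H
G  = AP - H                 # the other crossover band
```

At w = 1 both |H| and |G| come out at 1/2 (-6 dB), and AP/H equals |Q(jw)|^2, a positive real number, confirming the identical phase responses.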
Last edited by Z1202 on Thu Feb 22, 2018 4:43 pm, edited 1 time in total.

Post

So here is an attempt at a Chebyshev Type 1 L-R crossover (8th order):

In terms of the previous post
Q(x)=(x^2+0.4675272037899106*x+1.22669566949122)*(x^2+1.128710516167972*x+0.519588888304672)
P=Q(0)
cheb1.png
cheb1ph.png
The 180° phase difference in the left part comes from the fact that HP=AP-LP is actually "negative" in that range, since LP>=1. The LP is normalized to unit DC gain; the HP ripples could actually have been 3dB lower if LP had been normalized to have alternating ripples around unity gain.
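Since the attached plots aren't visible here, the posted factors can be checked numerically (my own sketch, following the H = P^2/Q^2, AP = Q(-s)/Q(s), HP = AP - LP construction from the earlier post):

```python
import numpy as np

# the two quadratic factors of Q(x) from the post
Q = lambda x: ((x**2 + 0.4675272037899106 * x + 1.22669566949122)
               * (x**2 + 1.128710516167972 * x + 0.519588888304672))
P = Q(0)                    # constant numerator => unit DC gain

w = np.logspace(-2, 2, 1000)
s = 1j * w
LP = P**2 / Q(s)**2         # H(s) = P^2 / Q^2(s)
AP = Q(-s) / Q(s)           # unit magnitude, same phase as LP
HP = AP - LP
```

The checks confirm |AP| = 1 everywhere, that LP and HP behave as a lowpass/highpass pair, and that AP and LP share the same phase (their ratio is real and positive on the imaginary axis).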

Post

andy-cytomic wrote:There may be some benefit even for LR4, but the main difference is for LR8. You only need 2xSVF4pole instead of 5xSVF2pole, so you save one SVF2Pole worth of computation.
Ah, I think I know what you mean now. But it was bugging me that the same should be doable with 2-pole SVFs. And actually it is.

So we want to obtain an 8-pole LP signal and a corresponding 4-pole AP signal, where ideally we would use two 2-pole SVFs to generate the LP4 and AP4 signals and two more to turn LP4 into LP8. So let's concentrate on the first two SVFs, where we connect the LP output of the first SVF to the input of the second SVF. Let the denominators of their transfer functions be s^2+2R1s+1 and s^2+2R2s+1 respectively. The LP4 signal is then 1/((s^2+2R1s+1)(s^2+2R2s+1)). Using this denominator, let's write the numerators for all the other signals we can pick from the modal outputs of these two SVFs:
2nd SVF LP: 1
2nd SVF BP: s
2nd SVF HP: s^2
1st SVF LP: (s^2+2R2s+1)
1st SVF BP: (s^2+2R2s+1)s
1st SVF HP: (s^2+2R2s+1)s^2

It's easy to see that these polynomials span the space of polynomials of degree up to 4. Indeed, the first three are trivial basis vectors. Now we can subtract a linear combination of them from 1st SVF BP to turn it into s^3, making the fourth trivial basis vector. And then we subtract a linear combination of these 4 trivial basis vectors from 1st SVF HP, making the fifth trivial basis vector.

Thus, we can express any 4th-order transfer function's numerator via the 2nd SVF's outputs and the 1st SVF's BP and HP outputs. (In practice it could also make sense to use the input signals to both of the SVFs, as this gives additional options to simplify the math. Whether it actually simplifies anything remains to be seen.)

Now I wonder if anybody spots a mistake.
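The spanning argument has a two-line numeric sanity check (my own sketch; the R2 value is an arbitrary example, and R1 only enters through the shared denominator so it doesn't appear in the numerators):

```python
import numpy as np

R2 = 0.9239  # hypothetical example damping; any R2 > 0 works here

# numerator polynomials, coefficients ordered [s^4, s^3, s^2, s, 1]
basis = np.array([
    [0, 0, 0, 0,      1],        # 2nd SVF LP: 1
    [0, 0, 0, 1,      0],        # 2nd SVF BP: s
    [0, 0, 1, 0,      0],        # 2nd SVF HP: s^2
    [0, 0, 1, 2 * R2, 1],        # 1st SVF LP: s^2 + 2 R2 s + 1
    [0, 1, 2 * R2, 1, 0],        # 1st SVF BP: (s^2 + 2 R2 s + 1) s
    [1, 2 * R2, 1, 0, 0],        # 1st SVF HP: (s^2 + 2 R2 s + 1) s^2
])
rank = np.linalg.matrix_rank(basis)
```

Rank 5 equals the dimension of the space of polynomials of degree up to 4; with six vectors there is one redundancy, since the 1st SVF LP is a combination of the first three rows.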

Post

Z1202 wrote:
andy-cytomic wrote:There may be some benefit even for LR4, but the main difference is for LR8. You only need 2xSVF4pole instead of 5xSVF2pole, so you save one SVF2Pole worth of computation.
Ah, I think I know what you mean now. But it was bugging me, that the same should be doable with 2-pole SVF. And actually it is.

So we want to obtain an 8-pole LP signal and a corresponding 4-pole AP signal, where ideally we would use two 2-pole SVFs to generate an LP4 and an AP4 signal and two others to turn LP4 into LP8. So let's concentrate on these first two SVFs, where we connect the LP output of the first SVF to the input of the second SVF. Let the denominators of their transfer functions be s^2+2R1s+1 and s^2+2R2s+1 respectively. So, the LP4 signal is 1/(s^2+2R1s+1)(s^2+2R2s+1). Using this denominator, let's write the numerators for all other signals we can pick from the modal outputs of these two SVFs:
2nd SVF LP: 1
2nd SVF BP: s
2nd SVF HP: s^2
1st SVF LP: (s^2+2R2s+1)
1st SVF BP: (s^2+2R2s+1)s
1st SVF HP: (s^2+2R2s+1)s^2

It's easy to see that these polynomials form a basis in the space of 4th order polynomials. Indeed, the first three are trivial basis vectors. Now we can subtract their linear combination from 1st SVF BP to turn it into s^3, making the fourth trivial basis vector. And then we subtract the linear combination of these 4 trivial basis vectors from 1st SVF HP, making the fifth trivial basis vector.

Thus, we can express any 4-th order transfer function's numerator via 2nd SVFs outputs and 1st SVFs BP and HP outputs. (In practice it could also make sense to use the input signals to both of the SVFs, as this gives additional options to simplify the math. Whether it will actually simplify it, needs to be seen).

Now I wonder if anybody spots a mistake.
That was bugging me too, but I've given it a quick look over and it all looks good, and the plots all work great, nice work! I don't see any mistakes. I haven't implemented it yet to check but I know from experience it will be fine, especially with 2 pole sections cascaded, there isn't much to go wrong.

I have approached this sort of thing previously by solving directly for the coefficients of s in the numerator, and this has worked here just fine as well :) (In fact I did just this to form the AP2 response for the SVF4pole!)

The "missing" output when R1 != R2 is the allpass formed the way Mystran previously described. It looks like this:

(1 - 2 R1 s + s^2) (1 - 2 R2 s + s^2) / ((1 + 2 R1 s + s^2) (1 + 2 R2 s + s^2))

so, if we take what Vadim proposed and set up a numerator as the sum of all the outputs we have like this:
m5 1 +
m4 s +
m3 s^2 +
m2 (s^2 + 2 R2 s + 1) +
m1 (s^2 + 2 R2 s + 1) s +
m0 (s^2 + 2 R2 s + 1) s^2

Then solving the simultaneous equations, matching each coefficient of the powers of s to the ap4 response we want, gives us the values of m0, m1, m2, m3, m4, m5. We have an overspecified system, which looks most neatly solved by setting m2 to zero (lp2); then we get:

{m0 -> 1, m1 -> -2 (R1 + 2 R2), m2 -> 0, m3 -> 1 + 8 R2 (R1 + R2), m4 -> 2 R2, m5 -> 1}

So the signal chain is:

---LR8---
SVF2pole1(in,R1 -> hp2,bp2,lp2)
SVF2pole2(lp2,R2 -> lp2hp2,lp2bp2,lp4)
SVF2pole3(lp4,R1 -> lp6)
SVF2pole4(lp6,R2 -> lp8)
ap4 = m0 hp2 + m1 bp2 + m3 lp2hp2 + m4 lp2bp2 + m5 lp4
hp8 = ap4 - lp8

I would guess this method will be directly applicable to the SVF4 as well for higher orders, so it will come down to profiling to really tell which is fastest. The trouble with the SVF2pole is still that you need to cascade 4 in series, which is inherently not a parallel computation.

Knowing Mystran he probably already did all this and was keeping quiet (hehe), but thanks Vadim for following along what I've been saying and suggesting a better solution, so we can all now benefit from constructing an LR8 from 4 x SVF2poles :)

Note that in the above I've been using 2 R1 and 2 R2 like Vadim suggested, so if you want damping values directly then divide these by 2 and you'll get:
{m0 -> 1, m1 -> -R1 - 2 R2, m2 -> 0, m3 -> 1 + 2 R2 (R1 + R2), m4 -> R2, m5 -> 1}

This sort of thing just wouldn't work in analog, since the tolerances have to be so precise to make it happen, which I think is why Rane make a single SVF4pole. If anyone is using this stuff, make sure to calculate your R1 and R2 (or higher-order equivalents) as exactly as possible, directly from the biquad equations at high precision: if they don't multiply out to give sqrt(2) then they won't cancel and you won't get a very good high pass output from your crossover!
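To make the cancellation requirement concrete, here is a numeric check of the whole chain in the Laplace domain (my own sketch, not Andy's code). The R1, R2 values are an assumption: the standard 4th-order Butterworth section dampings, for which 2R1 * 2R2 = sqrt(2) exactly:

```python
import numpy as np

# 4th-order Butterworth dampings: 2R = 2 sin(pi/8) and 2 sin(3 pi/8)
R1 = np.sin(np.pi / 8)        # 2*R1 = 0.76537...
R2 = np.sin(3 * np.pi / 8)    # 2*R2 = 1.84776..., and 2R1 * 2R2 = sqrt(2)

w = np.logspace(-2, 2, 1000)
s = 1j * w
D1 = s**2 + 2 * R1 * s + 1
D2 = s**2 + 2 * R2 * s + 1

# modal outputs of the cascade, as transfer functions from the input
hp2, bp2 = s**2 / D1, s / D1
lp2hp2, lp2bp2 = s**2 / (D1 * D2), s / (D1 * D2)
lp4 = 1 / (D1 * D2)
lp8 = lp4 / (D1 * D2)          # SVF2pole3 and SVF2pole4 in series

# mixing coefficients from the post (m2 = 0)
m0, m1, m3, m4, m5 = 1, -2 * (R1 + 2 * R2), 1 + 8 * R2 * (R1 + R2), 2 * R2, 1
ap4 = m0 * hp2 + m1 * bp2 + m3 * lp2hp2 + m4 * lp2bp2 + m5 * lp4
hp8 = ap4 - lp8
```

The mixed ap4 matches the analytic allpass (1 - 2 R1 s + s^2)(1 - 2 R2 s + s^2)/(D1 D2) exactly, |ap4| = 1 across the band, and both lp8 and hp8 sit at -6 dB at w = 1.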

I've got a product coming up which needs some decent crossovers, so this is great timing. Thanks to both Mystran and Vadim for your excellent contributions, and I hope others will find my posts useful on how to put it all together to make a final working crossover.

(edit: updated lp4 to lp6 in SVF2pole4(lp4,R2 -> lp8) above)
Last edited by andy-cytomic on Mon Mar 05, 2018 1:36 am, edited 1 time in total.
The Glue, The Drop - www.cytomic.com

Post

Z1202 wrote:So here is an attempt at a Chebyshev Type 1 L-R crossover (8th order):

In terms of the previous post
Q(x)=(x^2+0.4675272037899106*x+1.22669566949122)*(x^2+1.128710516167972*x+0.519588888304672)
P=Q(0)
...
The 180 phase difference in the left part is coming from the fact that the HP=AP-LP is actually "negative" in that range, since LP>=1. The LP is normalized to unit DC gain, the HP ripples actually could have been 3dB lower if LP was normalized to have alternating ripples around unity gain.
Thanks for sharing! It looks like an elliptic filter will be the best option here, since that way the ripple will be equal on both sides in the stop band (and pass band), which no doubt already popped out at you looking at the graph :)
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote:Thanks for sharing! It looks like a elliptical filter will be the best option here since that way the ripple will be on both sides equally in the stop band (and pass band), which no doubt would probably have already popped out at you looking at the graph :)
Yes, the ripples were bothering me a bit too. Particularly the fact that you need really small ripples in the passband to get reasonably small ripples in the stopband, which turns Chebyshev I almost into Butterworth. So I started thinking about a mixture between elliptic and Butterworth, using denominators of the form 1 + (x^N * R(x))^2 (where R(x) is an elliptic rational function). :D Dunno if such filters have been attempted before and if not, why. Either they could have a problem that is not obvious at first sight, or maybe it's simply that no analytic solution formula exists in this case and you need to find the poles numerically, which nowadays shouldn't be much of a problem.

The other concern is the lack of symmetry between LP and HP. I guess by properly tuning the selectivity of the elliptic filters one could balance the ripple amplitudes, but I'm still not sure if this gives fully symmetric shapes.
Last edited by Z1202 on Fri Feb 23, 2018 8:55 am, edited 2 times in total.

Post

andy-cytomic wrote: m5 1 +
m4 s +
m3 s^2 +
m2 (s^2 + 2 R2 s + 1) +
m1 (s^2 + 2 R2 s + 1) s +
m0 (s^2 + 2 R2 s + 1) s^2
Thanks for working out the coefficients! Pretty sure many people will find this useful.

One small note: the fourth polynomial is a linear combination of the first three (corresponding to the fact that the LP output of the first SVF is the input of the second one), so in principle you can immediately let m2=0 (in the sense of discarding this polynomial from the basis list altogether in the very beginning). Unless you're trying to see if using it can give a simpler expression, possibly saving an addition or a multiplication somewhere (which I doubt is the case for the allpass we're building here), but in this case one should also check if the polynomial (s^2+2R1s+1)(s^2+2R2s+1), corresponding to the unprocessed signal, could help making things simpler as well.
Last edited by Z1202 on Fri Feb 23, 2018 8:51 am, edited 1 time in total.

Post

andy-cytomic wrote:I would guess this method will be directly applicable to the SVF4 as well for higher orders, so it will come down to profiling to really tell which is fastest. The trouble with the SVF2pole is still you need to cascade 4 in series, which is inherently not a parallel computation.
The maximum parallelism would be probably attained by using diagonalized state-space form, which should work here reasonably well, since the poles are pretty distinct from each other. I was considering this as an easy solution for the multimode AP4/LP4 outputs, but as I wasn't concerned with parallelism that much at that point I discarded the idea in favor of the serial SVF chain. However, if parallelisation is a strong concern, I would expect that this should work pretty well.

Edit: or one could even use an 8-pole diagonalized state-space to produce AP4 and LP8, for full parallelization of everything :D
Edit2: oops, for the squared denominator it's not diagonalizable, as we have coinciding poles. Thus we have to use Jordan normal form instead, which in terms of parallelization would be the same as chaining AP4/LP4 state-space diagonalized 4-pole multimode (which should diagonalize just fine) with another LP4.

A note for general informational purposes (for whoever would be interested): the diagonalized state-space form tends to get ill-conditioned when the poles get close to each other. As the poles become equal to each other the ill-conditioning becomes infinitely strong. Jordan normal form, which is a generalized diagonalization, addresses this infinitely strong ill-conditioning by "suddenly jumping" to a "different topology" when the poles are equal. However it doesn't address the finite ill-conditioning case of closely positioned poles.
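For the distinct-pole case, the parallel decomposition can be found without explicitly diagonalizing anything: write LP4 = 1/(D1 D2) as a sum of two parallel 2nd-order sections and solve for their numerators by matching coefficients (my own sketch; the Butterworth-style R values are example assumptions):

```python
import numpy as np

# distinct dampings -> well-conditioned; as R1 -> R2 the system turns singular,
# which is exactly the ill-conditioning described above
R1, R2 = np.sin(np.pi / 8), np.sin(3 * np.pi / 8)

# 1/(D1 D2) = (a1 s + b1)/D1 + (a2 s + b2)/D2  with  Di = s^2 + 2 Ri s + 1.
# Matching coefficients of s^3..s^0 in (a1 s + b1) D2 + (a2 s + b2) D1 = 1:
M = np.array([
    [1,        0,        1,        0       ],   # s^3
    [2 * R2,   1,        2 * R1,   1       ],   # s^2
    [1,        2 * R2,   1,        2 * R1  ],   # s^1
    [0,        1,        0,        1       ],   # s^0
])
a1, b1, a2, b2 = np.linalg.solve(M, [0.0, 0.0, 0.0, 1.0])

# verify on the imaginary axis: parallel sum equals the serial product
s = 1j * np.logspace(-2, 2, 500)
D1, D2 = s**2 + 2 * R1 * s + 1, s**2 + 2 * R2 * s + 1
serial = 1 / (D1 * D2)
parallel = (a1 * s + b1) / D1 + (a2 * s + b2) / D2
```

Each (ai s + bi)/Di term is realizable as one 2-pole (e.g. an SVF mixing its BP and LP outputs), so the two sections run fully in parallel.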

Post

Z1202 wrote:The maximum parallelism would be probably attained by using diagonalized state-space form, which should work here reasonably well, since the poles are pretty distinct from each other. I was considering this as an easy solution for the multimode AP4/LP4 outputs, but as I wasn't concerned with parallelism that much at that point I discarded the idea in favor of the serial SVF chain. However, if parallelisation is a strong concern, I would expect that this should work pretty well.

Edit: or one could even use an 8-pole diagonalized state-space to produce AP4 and LP8, for full parallelization of everything :D
Edit2: oops, for the squared denominator it's not diagonalizable, as we have coinciding poles. Thus we have to use Jordan normal form instead, which in terms of parallelization would be the same as chaining AP4/LP4 state-space diagonalized 4-pole multimode (which should diagonalize just fine) with another LP4.

A note for general informational purposes (for whoever would be interested): the diagonalized state-space form tends to get ill-conditioned when the poles get close to each other. As the poles become equal to each other the ill-conditioning becomes infinitely strong. Jordan normal form, which is a generalized diagonalization, addresses this infinitely strong ill-conditioning by "suddenly jumping" to a "different topology" when the poles are equal. However it doesn't address the finite ill-conditioning case of closely positioned poles.
I'm more concerned with preserving the analog structure, so that modulation sounds better and it is numerically stable enough to run all f32, while doing this in as parallel a way as possible purely for efficiency of computation. I have a feeling that multiplying out the diagonalised state-space form will not lead to a more efficient implementation if it results in larger terms for each transformed section. When I have some more time I'll do some number crunching and see how the diagonalised state-space form performs in a musical context.
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote:I'm more concerned with preserving the analog structure so modulation sounds better as well as it being numerically more stable so it can run all f32, but doing this in as parallel a way as possible only for efficiency of computation.
Given that the (real) diagonalized form is just a number of parallel SVFs (although you could use any other 2-poles of course), I'm not sure which modulation problems you are concerned about. Also, if I'm not mistaken, given that the cutoff is controlled by pre-integrator gains, different topologies should sound identical even in case of modulated cutoff.

As for numeric issues, I do have the same concern too, but I actually wouldn't expect this to be an issue here, since the poles are pretty distinct.

Post

Z1202 wrote:Given that the (real) diagonalized form is just a number of parallel SVFs (although you could use any other 2-poles of course), I'm not sure which modulation problems you are concerned about. Also, if I'm not mistaken, given that the cutoff is controlled by pre-integrator gains, different topologies should sound identical even in case of modulated cutoff.

As for numeric issues, I do have the same concern too, but I actually wouldn't expect this to be an issue here, since the poles are pretty distinct.
I have no experience with diagonalised state-space representations, which is why I raised general concerns. If the coefficient calculations aren't too expensive, then given that the crossover will mostly remain in a fixed position, being able to compute the entire crossover with parallel second-order SVFs would certainly result in low CPU usage, so I'll definitely check it out.

I'm rarely able to consider such options as I'm almost always modelling non-linear circuits where you can't change the structure at all! Thanks for your continued input on this topic.
The Glue, The Drop - www.cytomic.com

Post

I have to apologize for somewhat misleading terminology. Actually, the standard version of the real diagonalized form uses so-called (again, if I'm not mistaken) coupled resonators. For me, however, one of its most important aspects is the parallelization of all 2nd-order terms. In the same way, the complex diagonalized form is a parallelization of 1st-order terms. The real and complex forms are related by a coordinate transformation. And there you can actually do different coordinate transformations of the 2-poles, where two complex 1-poles are combined into any 2nd-order topology, including an SVF rather than a coupled resonator.

I guess you don't even need to explicitly diagonalize the matrix (especially for low-order systems), because the 2nd-order terms will be the same as in the serial implementation (after all, we need the same denominators when they add). So you can just take a bunch of parallel 2-poles and solve for the mixing coefficients (I guess for SVFs you'll need the LP and BP outputs, plus the unprocessed input signal).
