## SVF filter saturation in feedback path / diode

DSP, Plug-in and Host development discussion.
xhy3
KVRist
48 posts since 19 Jan, 2014
mystran wrote:
Thu Jan 23, 2020 9:25 pm
xhy3 wrote:
Sun Jan 19, 2020 12:57 pm
Too bad that the SEM-1a didn't self-oscillate (especially when used in a synth), but if I understand that right, they were able to self-oscillate and stay stable when forced with a high resonance value (a negative one)?
Let me try to explain this properly.

The transfer function is simply (a2*s^2 + a1*s + a0)/(s^2 + s/Q + 1), where a2, a1, a0 are the gains you take from the HP, BP and LP outputs. The denominator is what we are interested in.

We can rewrite R = 1/Q, where R is the BP feedback gain (my definition might be slightly different from Vadim's, but I feel like the simple R = 1/Q is the easiest to work with). As R goes to zero, Q goes to infinity, and for a linear filter self-oscillation happens when R = 0 -> Q = inf.

To get "forced self-oscillation" with any filter (which is unstable for linear filters), you want Q to be slightly "higher than infinity" which sounds funny, but sort of makes sense if you think in terms of extended real-line or extended complex-plane wrapping around through the point at infinity. As it turns out, in terms of R this happens simply when we cross zero and R becomes slightly negative. We can't do this by using a SEM-style variable resistor divider to ground, but it can be done even in analog with an extra inverting amplifier.
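
A quick numeric sanity check of this (a sketch in Python; the normalized denominator s^2 + R*s + 1 is taken from the transfer function above):

```python
import numpy as np

# Poles of the normalized SVF denominator s^2 + R*s + 1 (so R = 1/Q).
# The real part of the complex pole pair is -R/2: positive R decays,
# R = 0 gives pure oscillation, and a slightly negative R gives a growing
# oscillation (the "forced" regime, until some non-linearity limits it).
for R in (0.1, 0.0, -0.01):
    poles = np.roots([1.0, R, 1.0])
    print(f"R = {R:+.2f}  max Re(pole) = {poles.real.max():+.5f}")
```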

Now, in the SEM-style SVF the resonance pot (which is a variable divider to ground) controls the feedback R when the BP signal amplitude is low. When the amplitude grows, the (separate) diode path (which we will assume is independent of the actual resonance setting) will pass through some additional damping to increase R and therefore decrease Q.

With the simple pot in the original, R is basically limited to about [0, 3]: zero feedback when the resonance pot is turned to ground, and a factor of 3 because the main resonance path (with the pot turned to connect BP directly) has only a 33k resistor while the inverting HP op-amp has 100k of local feedback (hence a gain of 100k/33k).

It is obviously trivial to vary R over a larger range in software, so if we take it to a slightly negative value, we get forced self-oscillation. If we assume the only actual non-linearity is the additional damping from the diode path, then the large-signal BIBO stability limit for self-oscillation is where the additional damping from the fully conducting diodes barely cancels out the main feedback with negative R.

In other words, with the diodes fully conducting, the combined damping must still be positive, but if the damping is negative at low amplitudes (when the diodes don't conduct), then the filter will regenerate until the amplitude is high enough that the diodes will start conducting enough to damp out additional regeneration.

Hopefully this clears up any confusion.
Thanks mystran for the clarification. I see now how it works.

Yes, it seems that the filter is unstable near the cutoff min and max. But it's not a big deal. Maybe it helps if I use 64-bit floating point. I will run some more tests.

Urs
u-he
24366 posts since 8 Aug, 2002 from Berlin
I've been revisiting the topic (vacation, yay!) and implemented the pivot method based on the diagram in Vadim's book (page 213, figure 6.52: An SVF with antisaturator).
xhy3 wrote:
Fri Jan 17, 2020 1:43 am
I think I'll stay with the s1 solution to calculate the pivot damping.
Hehehe, I'm confused here. Are you saying you use tanh( yBP' ) ≈ yBP' * tanh(s1)/s1 ?

Because that would almost certainly be wrong, since yBP' (note the apostrophe) sits *behind* the arctanh()/sinh() term. You need to use the full term that sits inside the tanh() expression.

Here are my equations leading to BP which needs implicit solving:

```
Low := BP*g + s2;
High := vIn - 2*((R - 1)*BP + arctanh(BP)) - Low;
Band := High*g - BP + s1;
```

therefore

```
Band := (vIn - 2*(R - 1)*BP - 2*arctanh(BP) - g*BP - s2)*g - BP + s1
```
isolate(Band, arctanh(BP))

```
arctanh(BP) = -(BP - s1)/(2*g) + vIn/2 - (R - 1)*BP - g*BP/2 - s2/2
```
Now, taking tanh on each side:

```
BP = -tanh((BP - s1)/(2*g) - vIn/2 + (R - 1)*BP + g*BP/2 + s2/2)
```
Using mystran's pivotal method, you replace tanh with a pivotSlope:

```
BP = -pivotSlope * ((BP - s1)/(2*g) - vIn/2 + (R - 1)*BP + g*BP/2 + s2/2);
```
Solving for BP to make it explicit:

```
BP = -pivotSlope*(g*s2 - g*vIn - s1)/(2*R*g*pivotSlope + g*g*pivotSlope - 2*g*pivotSlope + 2*g + pivotSlope)
```
and updating pivotSlope for the next sample by taking

```
tanhTerm = (BP - s1)/(2*g) - vIn/2 + (R - 1)*BP + g*BP/2 + s2/2;
pivotSlope = tanhTerm == 0 ? 1.f : tanh(tanhTerm)/tanhTerm;
```
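
Putting those pieces together, a minimal per-sample sketch (Python used here for illustration; the variable names follow the equations above, and the trapezoidal integrator state update is my assumption about the form being used):

```python
import math

def svf_pivot_step(vin, g, R, state):
    """One sample of the SVF-with-antisaturator using the pivotal
    linearisation: tanh is replaced by the slope measured at the previous
    sample's operating point, which makes BP explicit.  `state` holds the
    two integrator states s1, s2 and the carried pivot slope."""
    s1, s2, p = state["s1"], state["s2"], state["slope"]
    # Explicit BP with the linearised tanh (slope p), as derived above:
    BP = -p * (g * s2 - g * vin - s1) / (
        2 * R * g * p + g * g * p - 2 * g * p + 2 * g + p)
    HP = (BP - s1) / g          # from Band = 0:  BP = g*HP + s1
    LP = g * BP + s2
    # Re-evaluate the pivot slope for the next sample:
    t = (BP - s1) / (2 * g) - vin / 2 + (R - 1) * BP + g * BP / 2 + s2 / 2
    state["slope"] = 1.0 if t == 0.0 else math.tanh(t) / t
    # Trapezoidal integrator state update (assumed TPT form):
    state["s1"] = 2 * BP - s1
    state["s2"] = 2 * LP - s2
    return HP, BP, LP
```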
Note that self-oscillation already gets really loud at values of R < -0.01. But it works. I'd also experiment with additional tanh saturators in the op-amps; there are a few more of those in the original SEM circuitry than first meets the eye. Those would bring the loudness of the self-oscillation down a bit further.

Cheers,

- U

andy-cytomic
KVRAF
2271 posts since 3 Dec, 2008
mystran wrote:
Fri Jan 17, 2020 4:19 am

The band-pass feedback results in damping, and in theory the filter self-oscillates when the BP feedback gain is zero. However, with the diodes always conducting slightly, the filter will never quite reach true self-oscillation, unless you actually force it with inverted feedback (which is not possible with a SEM-1a style setup with a resonance pot).

In practice, if you build the circuit, you might still see some self-oscillation at very high frequencies, which can probably be explained by non-ideal components (eg. excess phase from the op-amps or something).

...
There is an easy way to get SVFs to self-oscillate, which was used in the Oscar: place a large-valued resistor in the positive feedback loop of the band-pass stage:
[attachment: OscarVCFA0802 pos feed.gif]
In this schematic they are using an OTA for damping gain, but you could use a resonance pot instead like the SEM-1a.
The Glue, The Drop - www.cytomic.com

Urs
u-he
24366 posts since 8 Aug, 2002 from Berlin
The Jupiter 6 filter (2 x SVF based on their IR3109 OTA chips) also has a positive feedback path from BP to the filter input (in the first stage only; the second stage is slightly different). That looks like quite a simple way to offset the damping.

[edit: removed false assumption about diode pair after looking at the schematics again]

mystran
KVRAF
5607 posts since 12 Feb, 2006 from Helsinki, Finland
andy-cytomic wrote:
Tue Feb 18, 2020 9:37 pm
There is an easy way to get SVFs to self-oscillate, which was used in the Oscar: place a large-valued resistor in the positive feedback loop of the band-pass stage:
That seems like a neat trick, but I wonder... does it also slightly alter the zeroes of the high-pass output? I mean, that probably won't matter too much for a simple synth filter, but one might have to take it into account when using such SVF sections to build more complicated transfer functions.

andy-cytomic
KVRAF
2271 posts since 3 Dec, 2008
mystran wrote:
Wed Feb 19, 2020 12:47 am
That seems like a neat trick, but I wonder... does it also slightly alter the zeroes of the high-pass output? I mean that probably won't matter too much for a simple synth filter, but one might have to take it into account when using such SVF-sections to build more complicated transfer functions..
It's definitely an analog hack to the SVF for self-oscillation; it's much easier in emulation land to have a bipolar amp for the damping. In analog, the capacitor values are so far off from being matched that it's not much of a difference anyway.

Richard_Synapse
KVRian
953 posts since 20 Dec, 2010
Urs wrote:
I've been revisiting the topic (vacation, yay!) and implemented the pivot method based on the diagram in Vadim's book (page 213, figure 6.52: An SVF with antisaturator).
Great stuff Urs! You should do more vacation

Small correction to the third formula at the top: "Band" should read "BP", and "BP" should be removed (unless I'm missing something, but those seem to be the typical SVF equations).

Richard
Synapse Audio Software - www.synapse-audio.com

Urs
u-he
24366 posts since 8 Aug, 2002 from Berlin
Richard_Synapse wrote:
Tue Feb 25, 2020 6:10 am
You should do more vacation
I know...

I was actually contemplating reviving my blog with higher-order examples, but d'oh... vacation ends in 5 days and I'm stuck with a couple of unresolved issues.
Richard_Synapse wrote:
Small correction to the third formula at the top: "Band" should read "BP", and "BP" should be removed (unless I'm missing something, but those seem to be the typical SVF equations).
Hehe, sorry, this is stuff I do in Maple (as opposed to Maxima or Mathematica). Band/High/Low with a ":=" are symbolic variables for everything that's on the right. In this case everything on the right needs to amount to zero, i.e. it must not amount to the name on the left of the ":=". I'd normally write those equations like this:

```
EQ1 := BP*g + s2 - LP;
EQ2 := vIn - 2*((R - 1)*BP + arctanh(BP)) - LP - HP;
EQ3 := HP*g - BP + s1;
```
Here, all "N = expression" are written as "EQn := expression - N"

But then I need to perform extra steps to eliminate LP and HP first, if I want to isolate any term of BP, like this:

[attachment: Screenshot 2020-02-26 at 11.51.54.png]

Hence, while maybe confusing when seen from a C/C++ perspective, I thought it was easier to treat the first two equations as if their symbolic value represented a numerical value and let them "automagically" substitute into the Band equation.

andy-cytomic
KVRAF
2271 posts since 3 Dec, 2008
Urs wrote:
Sun Feb 09, 2020 9:41 pm
...Here are my equations leading to BP which needs implicit solving:

```
Low := BP*g + s2;
High := vIn - 2*((R - 1)*BP + arctanh(BP)) - Low;
Band := High*g - BP + s1;
```
(edit: it looks like Urs already spotted the mistakes in the above, which I quoted, and fixed them in his post since I posted this)

This is still looking a little hard to follow. For everyone's sake I think it's easiest to start from the most basic form of the equations (you can add back in the arctanh stuff later):

```
HP == IN - k*BP - LP
BP == g*HP + s1
LP == g*BP + s2
```
Then you can move the LHS of the equations across and assign them names:

```
High := 0 == IN - k*BP - LP - HP
Band := 0 == g*HP + s1 - BP
Low := 0 == g*BP + s2 - LP
```
Then it looks like, for Maple, if it's just an expression it defaults to being equal to zero, but perhaps Urs can shed more light on this since he's the one using Maple. I've left the 0 == bit in there to be clear.
Last edited by andy-cytomic on Tue Feb 25, 2020 9:39 pm, edited 1 time in total.

Urs
u-he
24366 posts since 8 Aug, 2002 from Berlin
Actually, I must have forgotten a lot about Maple since I last touched it; a quick try reveals it's totally possible to do it this way:

[attachment: Screenshot 2020-02-26 at 16.21.49.png]

andy-cytomic
KVRAF
2271 posts since 3 Dec, 2008
If people want to take things a step further, you can also formulate the solution as an explicit parametric equation of two variables and use 2D table interpolation to get the direct solution.

The solution to:

```
HP == IN - k*BP - ArcTanh(BP) - LP
BP == g*HP + s1
LP == g*BP + s2
```
for ArcTanh(BP) is:

```
ArcTanh(BP) == (s1 + g (IN - s2) - BP (1 + g (g + k)))/g
0 == Tanh((s1 + g (IN - s2) - BP (1 + g (g + k)))/g) - BP
```
This implicit equation can be renamed as:

```
0 == Tanh(a + b BP) - BP

where:
a == (s1 + g (IN - s2))/g
b == -(1 + g (g + k))/g
```
And hopefully it is now clear that the solution space is two-dimensional: an offset 'a' and a scaling 'b', both of which depend only on known information. The solution can be pre-computed and interpolated at runtime:

```
a = (s1 + g*(IN - s2))/g;
b = -(1 + g*(g + k))/g;
BP = TableInterp(a, b);
LP = g*BP + s2;
HP = IN - k*BP - ArcTanh(BP) - LP;
// update s1 and s2
```
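
As a reference point, the implicit equation that the table would pre-compute can also be solved directly. A sketch in Python (bisection is my choice here; it is valid because b < 0 makes Tanh(a + b*BP) - BP strictly decreasing in BP, so the root is unique):

```python
import math

def solve_bp(a, b, iters=60):
    """Root of tanh(a + b*BP) - BP == 0 by bisection on [-1, 1].
    For the negative b produced by typical g and k (b = -(1 + g*(g + k))/g)
    the function is strictly decreasing, so bisection always converges to
    the unique solution."""
    lo, hi = -1.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.tanh(a + b * mid) - mid > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice you would sample solve_bp over a grid of (a, b) offline and replace it with the 2D interpolation at runtime.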
Apart from the inversion of the non-linearity, this sort of approach to parameterise the solution space and pre-compute the solution and interpolate it in one step is called the DK-Method.
Last edited by andy-cytomic on Wed Feb 26, 2020 5:29 pm, edited 1 time in total.

andy-cytomic
KVRAF
2271 posts since 3 Dec, 2008
Mystran pointed out that you can turn the 2D-input, 1D-output table into a 2D-input, 2D-output table by having entries:

```
TableInterp(a, b) = {0 == Tanh(a + b*BP) - BP, ArcTanh(BP)}
```
This is then called a MIMO table (multiple input, multiple output). In circuit solving, the DK-Method usually results in a MIMO table, where all non-linear terms are interpolated like this.

This would make the final code:

```
a = (s1 + g*(IN - s2))/g;
b = -(1 + g*(g + k))/g;
{BP, ATBP} = TableInterp(a, b);
LP = g*BP + s2;
HP = IN - k*BP - ATBP - LP;
// update s1 and s2
```
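
A small sketch of the MIMO idea in Python (the bisection solver is a stand-in for the real pre-computed interpolation). One handy fact: at the solution BP == Tanh(a + b*BP), so ArcTanh(BP) == a + b*BP exactly, and the second output comes for free once BP is known:

```python
import math

def table_interp_mimo(a, b, iters=60):
    """Stand-in for the pre-computed 2D-in/2D-out lookup: returns
    (BP, ArcTanh(BP)).  Here BP is found by bisection (valid for b < 0);
    a real implementation would interpolate a stored grid instead."""
    lo, hi = -1.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if math.tanh(a + b * mid) - mid > 0.0:
            lo = mid
        else:
            hi = mid
    bp = 0.5 * (lo + hi)
    # Since bp == tanh(a + b*bp), ArcTanh(bp) == a + b*bp, with no need to
    # evaluate atanh itself (which is touchy near |bp| = 1).
    return bp, a + b * bp
```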

Urs
u-he
24366 posts since 8 Aug, 2002 from Berlin
I guess this practically only works as long as it's 2D tables, maybe 3D? I guess you wouldn't solve a 4-pole Moogy filter this way?

I guess I also accidentally re-invented the method in my non-linear one pole example

andy-cytomic
KVRAF
2271 posts since 3 Dec, 2008
Urs wrote:
Wed Feb 26, 2020 6:09 pm
I guess this practically only works as long as it's 2D tables, maybe 3D? I guess you wouldn't solve a 4-pole Moogy filter this way?

I guess I also accidentally re-invented the method in my non-linear one pole example

Yes, yes, I think you did! Hey, it's a pretty basic idea when applied directly to specific equation sets like the ones shown here and on your web page. The general solution for these non-linear systems requires a bit more work, which is what the DK-Method does, so it's nice to know the name so people can investigate further and see the general solution.

I've not looked specifically, but in the case of a Moog 4-pole non-linear low-pass filter my gut feeling is you would get a 2D table for a single input non-linearity, and add dimensions from there for each extra non-linearity you want to model. This quickly breaks down for any reasonably complicated circuit, but there are other methods that can be applied.

mystran
KVRAF
5607 posts since 12 Feb, 2006 from Helsinki, Finland
andy-cytomic wrote:
Wed Feb 26, 2020 5:21 pm
Mystran pointed out that you can make the 2D input 1D output table into a 2D input 2D output table by having entries:

```
TableInterp(a, b) = {0 == Tanh(a + b*BP) - BP, ArcTanh(BP)}
```
Actually, that's not quite what I had in mind; rather, I was suggesting you could tabulate only atanh(BP), because as soon as you substitute the result into the equation for HP, the system becomes linear and can be solved in the usual way.

On second thought though, rather than tabulating atanh(BP) directly, you could tabulate a fully converged linearisation. The potential advantage is that if the tabulated result is slightly inaccurate (i.e. we don't quite hit the linearisation point, but come fairly close to it), then this might allow for a reduced error.

In fact, this leads to the approach I've been suggesting (in PMs at least) in the past as well. While I still haven't tried this in practice yet, in theory you could always add extra dimensions for each of the non-linearities (which you probably should be doing with NR as well), factor the system (eg. using LU) until you only have the non-linear sub-block left (same as with Newton), but then rather than iterating until convergence, fetch either the direct result or the fully converged linearisation for the whole non-linear block from a pre-computed MIMO table.

Additionally, if the table lookup is used simply as a short-cut for NR, then it becomes straightforward to throw in an extra NR iteration or two to further refine the results if desired. This could allow for a lower-resolution table, while still avoiding the pathological cases with NR by always providing a good initial guess that (hopefully) guarantees quadratic convergence straight away.
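
A sketch of that refinement step in Python, written out for the single-tanh case discussed in this thread (the table guess is the starting point; the function name and arguments are illustrative):

```python
import math

def refine_bp(bp_guess, a, b, iters=2):
    """Newton-Raphson on f(BP) = tanh(a + b*BP) - BP, starting from a
    (possibly coarse) table lookup.  For b < 0 the derivative
    f'(BP) = b*(1 - t*t) - 1 is always <= -1, so the division is safe."""
    bp = bp_guess
    for _ in range(iters):
        t = math.tanh(a + b * bp)
        bp -= (t - bp) / (b * (1.0 - t * t) - 1.0)
    return bp
```

With a decent initial guess from the table, one or two iterations are typically enough to reach near machine precision, which is exactly the point of using the lookup as a short-cut.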

In any case, most of this is just random thoughts really.