KVR Forum » DSP and Plug-in Development
Cheap non-linear zero-delay filters
Page 14 of 16
Z1202
KVRist
Posted: Fri Jun 15, 2012 12:42 am
mystran wrote:
Random thought regarding "instantaneous instability": if one solves the system as homogeneous, then it doesn't matter if there are local instantaneous instabilities as long as all the denominators of the final solution are strictly positive (or strictly negative depending on which way you write the system).
I'm afraid I need to ask for a more detailed explanation. Why is it important to have a homogeneous system, what difference does it make? OTOH, the change of the denominator's sign basically corresponds to crossing the BLT's unstable pole, so I guess we're talking about the same thing here. Not sure if the sign can change once again further on.
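For readers following along, the one-pole in question can be sketched like this (Python; a minimal trapezoidal/TPT one-pole under assumed variable names, where the 1/(1+g) factor appears as the denominator of the zero-delay feedback solve):

```python
import math

def tpt_lowpass(x, fc, fs):
    """Trapezoidal (TPT/ZDF) one-pole low-pass, sketch only.
    The implicit equation y = g*(x - y) + s is solved explicitly;
    note the 1/(1 + g) denominator discussed above."""
    g = math.tan(math.pi * fc / fs)   # prewarped integrator gain
    s = 0.0                           # integrator state
    out = []
    for xn in x:
        v = (xn - s) * g / (1.0 + g)  # singular as 1 + g -> 0
        y = v + s
        s = y + v                     # trapezoidal state update
        out.append(y)
    return out
```

With g = tan(pi*fc/fs) positive this denominator is always safe; the thread's problem case is when feedback (or a negative cutoff) makes the effective denominator hit zero or change sign.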
Joined: 11 Apr 2002  Member: #2472
Z1202
KVRist
Posted: Fri Jun 15, 2012 1:54 am
@mystran: I assume you have read the section about the instantaneous instability. What do you think about the "unstable BLT" idea? I still haven't found time to try this out myself.
Joined: 11 Apr 2002  Member: #2472
mystran
KVRAF
Posted: Fri Jun 15, 2012 2:33 am
Z1202 wrote:
mystran wrote:
Random thought regarding "instantaneous instability": if one solves the system as homogeneous, then it doesn't matter if there are local instantaneous instabilities as long as all the denominators of the final solution are strictly positive (or strictly negative depending on which way you write the system).
I'm afraid I need to ask for a more detailed explanation. Why is it important to have a homogeneous system, what difference does it make? OTOH, the change of the denominator's sign basically corresponds to crossing the BLT's unstable pole, so I guess we're talking about the same thing here. Not sure if the sign can change once again further on.

Well, it was a bit unprocessed and I'm not sure if it's correct to use such a sentence, but basically let us first consider the difference between unstable (grows without bound but with finite rate; we'll ignore this one) and instantly unstable (solution is the point at infinity). The instantly unstable situation is the singularity where 1/(1+g) is zero in your example.

However, if you work on the extended complex plane (i.e. the plane plus the point at infinity) then as long as the complete system isn't singular you can still find a solution, and it's sufficient that the final solution is finite.

Alternatively, we can observe that, given a state-space matrix A, in most cases the numerical integration involves solving something like (I-A)*y=x (or some slight variant of the same) for y, which gives y=(I-A)^-1*x (although for reasons of efficiency you probably don't want to do a full inversion). Something like Gaussian elimination works just fine over a ring, and you'll only need division (at the very end) when you need to convert the solution to a real (or complex) valued state vector. You might need some pivoting but that's it (assuming there's a unique solution; if there isn't a unique solution then you need more magic).
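As a sketch of the "division only at the very end" idea (this is an illustration via Cramer's rule on a 2x2 system, not mystran's actual code; Gaussian elimination with the final divisions deferred behaves the same way):

```python
def solve2_ring(M, x):
    """Solve M @ y = x for a 2x2 system using only ring operations
    (multiply/add); returns (numerators, denominator) with the
    denominator being det(M).  No division happens here."""
    (a, b), (c, d) = M
    det = a * d - b * c            # shared denominator
    n0 = x[0] * d - b * x[1]       # Cramer-style numerators
    n1 = a * x[1] - x[0] * c
    return (n0, n1), det

# division is deferred to the conversion back to a real state vector:
(n0, n1), den = solve2_ring([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
y = (n0 / den, n1 / den)
```

The point above then reads: intermediate sub-denominators and their signs don't matter, only that the final `den` is nonzero (and of consistent sign) when you convert back to a real-valued state.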
----
<- my plugins | my music -> @Soundcloud
Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
Z1202
KVRist
Posted: Fri Jun 15, 2012 2:42 am
mystran wrote:
Well, it was a bit unprocessed and I'm not sure if it's correct to use such a sentence, but basically let us first consider the difference between unstable (grows without bound but with finite rate; we'll ignore this one) and instantly unstable (solution is the point at infinity). The instantly unstable situation is the singularity where 1/(1+g) is zero in your example.
No, the instantly unstable situation is where 1+g<=0. The solution for 1+g<0 doesn't make much sense. Try it for a 1-pole with negative cutoff.
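The point can be illustrated numerically (a sketch; g here is the trapezoidal one-pole integrator gain, negative for a negative cutoff): the instantaneous gain g/(1+g) passes through a singularity at g = -1, and beyond it comes out with a sign that no longer matches the analog behavior:

```python
def instantaneous_gain(g):
    """Zero-delay-feedback gain of a trapezoidal one-pole stage.
    Singular at g = -1; for g < -1 (i.e. 1 + g < 0) the sign flips,
    which is the 'doesn't make much sense' region discussed above."""
    return g / (1.0 + g)
```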

mystran wrote:
However, if you work on the extended complex plane (ie open disc + point at infinite) then as long as the complete system isn't singular you can still find a solution and it's sufficient that the final solution is finite.
But this solution may bear very little resemblance to what's happening in the analog case.

mystran wrote:
Alternatively, we can observe that given state-space A in most cases the numerical integration involves solving something like (I-A)*y=x (or some slight variant of the same) for y which gives (I-A)^-1*x=y (although for reasons of efficiency you probably don't want to do a full-inversion). Something like gaussian elimination works just fine on a ring, and you'll only need division (at the very end) when you need to convert the solution to a real (or complex) valued state vector. You might need some pivoting but that's it (assuming there's a unique solution; if there isn't any unique solution then you need more magic).
The same applies in the state space, except that it's more difficult to determine the instantaneously unstable cases. They are not simply defined by |I-A|=0 or |I-A|<=0; you need to apply some kind of stability analysis, e.g. using the analog 1-pole LPF method used in the book.
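A small numeric illustration of that last point (a made-up example, with the step size absorbed into A): with two "bad" eigenvalues the negative factors cancel, so det(I-A) stays positive even though the system is instantaneously unstable per eigenvalue:

```python
def det2(M):
    # determinant of a 2x2 matrix
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

# both eigenvalues of A are 3, i.e. well past the 1 - lambda <= 0 boundary
A = [[3.0, 0.0], [0.0, 3.0]]
I_minus_A = [[1.0 - A[0][0], -A[0][1]], [-A[1][0], 1.0 - A[1][1]]]

# the determinant test alone says nothing is wrong:
assert det2(I_minus_A) > 0.0       # (1-3)*(1-3) = 4
# but each eigenvalue factor (1 - lambda) is negative, so a
# per-eigenvalue stability check is needed, as argued above:
assert (1.0 - A[0][0]) < 0.0
```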
Joined: 11 Apr 2002  Member: #2472
mystran
KVRAF
Posted: Fri Jun 15, 2012 3:11 am
Z1202 wrote:
mystran wrote:
Well, it was a bit unprocessed and I'm not sure if it's correct to use such a sentence, but basically let us first consider the difference between unstable (grows without bound but with finite rate; we'll ignore this one) and instantly unstable (solution is the point at infinity). The instantly unstable situation is the singularity where 1/(1+g) is zero in your example.
No, the instantly unstable situation is where 1+g<=0. The solution for 1+g<0 doesn't make much sense. Try it for a 1-pole with negative cutoff.

I would still argue that a system where you get such a situation makes no more sense than the solution does. From the discussion in your book there is no reason to believe this issue would ever exist in practice, except when using idealizations where an analog system relies on non-ideal behavior.

Oh, and a zero determinant is specifically the exact case you have specified as "instantaneously unstable."
Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
Z1202
KVRist
Posted: Fri Jun 15, 2012 3:14 am
mystran wrote:
Z1202 wrote:
mystran wrote:
Well, it was a bit unprocessed and I'm not sure if it's correct to use such a sentence, but basically let us first consider the difference between unstable (grows without bound but with finite rate; we'll ignore this one) and instantly unstable (solution is the point at infinity). The instantly unstable situation is the singularity where 1/(1+g) is zero in your example.
No, the instantly unstable situation is where 1+g<=0. The solution for 1+g<0 doesn't make much sense. Try it for a 1-pole with negative cutoff.

I would still argue that a system where you get such a situation makes no more sense than the solution does. From the discussion in your book there is no reason to believe this issue would ever exist in practice, except when using idealizations where an analog system relies on non-ideal behavior.
Not sure what you mean by that, but you e.g. get this situation with positive feedback in the Moog filter (which is often encountered in Minimoog usage, corresponding to sending the synth's output to the external signal input). Also, a negative-cutoff 1-pole has a solution which makes total sense (although it is probably not very practically usable).

mystran wrote:
Oh, and a zero determinant is specifically the exact case you have specified as "instantaneously unstable."
I'm afraid you must point me to the specific passage. IIRC, I specified zero-determinant and everything beyond as instantaneously unstable.
Joined: 11 Apr 2002  Member: #2472
mystran
KVRAF
Posted: Fri Jun 15, 2012 4:19 am
Z1202 wrote:
mystran wrote:
Z1202 wrote:
mystran wrote:
Well, it was a bit unprocessed and I'm not sure if it's correct to use such a sentence, but basically let us first consider the difference between unstable (grows without bound but with finite rate; we'll ignore this one) and instantly unstable (solution is the point at infinity). The instantly unstable situation is the singularity where 1/(1+g) is zero in your example.
No, the instantly unstable situation is where 1+g<=0. The solution for 1+g<0 doesn't make much sense. Try it for a 1-pole with negative cutoff.

I would still argue that a system where you get such a situation makes no more sense than the solution does. From the discussion in your book there is no reason to believe this issue would ever exist in practice, except when using idealizations where an analog system relies on non-ideal behavior.
Not sure what you mean by that, but you e.g. get this situation with positive feedback in the Moog filter (which is often encountered in Minimoog usage, corresponding to sending the synth's output to the external signal input).

But this blow-up is an artifact of the linear solution, right? Such a linear solution is unbounded in continuous time too. If you need the linear solution (like the method proposed in this thread, or some forms of naive iteration) to solve/approximate the non-linear solution, then it won't work; you will need a better method.

Basically I suspect that for a bounded system you can find "sensible" solutions exactly as long as you can keep the system bounded without aliasing. I'm not going to even try to prove this.

The negative-frequency stuff: let 1/((s/w)+1) be the transfer function of an analog one-pole. Let w=-a; then we have 1/(1-(s/a)), or equivalently -1/((s/a)-1). This is all unstable stuff.
Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
Z1202
KVRist
Posted: Fri Jun 15, 2012 4:25 am
mystran wrote:
But this blow-up is an artifact of the linear solution, right? Such a linear solution is unbounded in continuous time too.
Yes, but in continuous time it explodes in finite (nonzero) time, whereas in discrete time it explodes in zero time. Of course the application is for nonlinear systems in the first place.
Joined: 11 Apr 2002  Member: #2472
mystran
KVRAF
Posted: Fri Jun 15, 2012 4:41 am
Z1202 wrote:
mystran wrote:
But this blow-up is an artifact of the linear solution, right? Such a linear solution is unbounded in continuous time too.
Yes, but in continuous time it explodes in finite (nonzero) time, whereas in discrete time it explodes in zero time. Of course the application is for nonlinear systems in the first place.

Does this matter? You don't need the exploding linear solution, you need the bounded non-linear solution. Alternatively, can't you make it take finite time by adding a bit of high-freq damping (say, above the audible range; even analog components like OPAs apparently need this kind of stuff for stability)?
Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
Z1202
KVRist
Posted: Fri Jun 15, 2012 5:35 am
mystran wrote:
Does this matter? You don't need the exploding linear solution, you need the bounded non-linear solution.
That's exactly my point. So, first you need to detect the instantaneously unstable cases. Second, trapezoidal integration (even after bounding by hand) might still produce garbage. That's what led me to the idea of the "unstable BLT"; I don't know if it works in practice.

mystran wrote:
Alternatively, can't you make it take finite time by adding a bit of high-freq damping (say above the audible range; even analog components like OPAs apparently need this kind of stuff for stability).
That might be a good idea, although I wonder if the phase response change introduced by this damping can affect the overall filter stability in a bad way (e.g. putting an LPF into the Moog filter's feedback path destabilizes the whole thing).
Joined: 11 Apr 2002  Member: #2472
urosh
KVRist
Posted: Fri Jun 15, 2012 6:44 am
mystran wrote:
... adding a bit of high-freq damping (say above the audible range; even analog components like OPAs apparently need this kind of stuff for stability)

Actually, just throwing some damping element into the loop usually makes things even worse.
[warning: irrelevant for the actual discussion but might give you guys some ideas] On the strictly EE playground (that is, the design of stable analog circuits) this is the typical approach: you first "unroll" the loop and then examine the loop gain. Search the loop magnitude response for the point (a frequency, in this case) where the gain is unity (in the great majority of cases the gain is above unity below this frequency and below unity above it); then check the phase of the loop response at this point. The phase lag has to be less than Pi (180 degrees) to keep the closed-loop circuit from oscillating; in the ideal/best-possible case it's Pi/2 (90 deg). However, you normally impose stricter conditions (usually in terms of the overshoot of the step response). That means you introduce a phase margin, that is, the difference between the phase response at the unity-gain frequency and 180 degrees, and you strive for that margin. So, compensation is about massaging the phase to fit what you need.
Now for stability. Usually you have a high-DC-gain, single-dominant-pole amp that you want to stabilize. In this case, if you throw a lag element (let's say a 1-pole LP) into the loop, you will make things worse. What you actually try to achieve is to replace one of the non-dominant poles with a pole higher up in frequency; that is, you try to shift the phase response at the unity-gain frequency up. How do you do this? Put into the loop an element that has a zero at the frequency of the pole you are trying to eliminate and a pole higher up in frequency. If you check some schematics you often see, for example, an inverting opamp configuration with a capacitor in parallel with the resistor that goes from the output to the inverting input. In this case you actually add loop gain at higher frequencies (the feedback network now works as a boosted high-shelf EQ). It helps because the feedback network now adds to the phase response in the critical range (the amplitude of the loop response is shaped by the dominant pole anyway). You will limit the frequency response of the closed-loop inverting amp, but regarding stability (where you examine the loop gain) you are not damping the response at all.
BTW, one more example: let's say you have a classic SVF and you add this compensation capacitor around the first stage (which sums the input, the LP out and the BP out). If you put in a large enough cap, the SVF will start to oscillate, because you have introduced a new pole low enough in frequency that the loop phase now exceeds 180 degrees at the critical point.
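The recipe above (unroll the loop, find the unity-gain frequency, read the phase there) can be sketched numerically; the two-pole loop gain below is a made-up example, not any specific circuit:

```python
import cmath, math

def loop_gain(f, a0=1e5, p1=10.0, p2=1e6):
    """Made-up open-loop gain: DC gain a0, dominant pole at p1 Hz,
    non-dominant pole at p2 Hz."""
    s = 2j * math.pi * f
    w1, w2 = 2.0 * math.pi * p1, 2.0 * math.pi * p2
    return a0 / ((1.0 + s / w1) * (1.0 + s / w2))

def phase_margin_deg(p2):
    """Find the unity-gain frequency by bisection (|loop_gain| is
    monotonically decreasing here), then return 180 degrees minus
    the phase lag at that frequency."""
    lo, hi = 1.0, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)   # geometric midpoint
        if abs(loop_gain(mid, p2=p2)) > 1.0:
            lo = mid
        else:
            hi = mid
    return 180.0 + math.degrees(cmath.phase(loop_gain(lo, p2=p2)))
```

Pulling the second pole down from 10 MHz to 100 kHz (the "lag element in the loop" case) eats most of the margin, which is the effect described above.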
Joined: 03 Oct 2002  Member: #3997
mystran
KVRAF
Posted: Fri Jun 15, 2012 10:15 am
urosh wrote:
mystran wrote:
... adding a bit of high-freq damping (say above the audible range; even analog components like OPAs apparently need this kind of stuff for stability)

Actually, just throwing some damping element into the loop usually makes things even worse.

Of course you can't throw them around randomly, and fixing one problem is likely to cause another (hopefully less problematic) issue elsewhere.

What I was referring to was that if you have a theoretically ideal OPA without a finite slew rate and you try to configure it as an inverting amplifier (or whatever), then at some sufficiently high frequency the transmission delay from the output pin to the input pin will become significant and the thing won't work anymore (it would probably go oscillating from one rail to the other). This seems similar to the instantaneous instability case; at sufficiently high frequencies it appears to me that the open-loop gain MUST go down or you can't really use it as an OPA (in the normal sense) anymore. [edit: does this make sense? am I making some mistake here?]

In your SVF case this ends up damping the (essentially negative) feedback around the two integrators, which is required to make a stable low-pass filter, so we reduce the stability of the larger circuit (the problem moves elsewhere). This (as far as I can tell) is just a "regular" instability though, and in some sense the inverse issue. The problem situation in the Moog-ladder case (presented as an example) is positive feedback, which should be safe to damp (i.e. the instantaneous instability problem case appears to be the case where we push the system too far into unstable operation on purpose).

The rest of what you posted seems to describe the cures for the "problem moves elsewhere" case.

edit: well, I guess it's necessary to consider phase margins even in the simple inverting amplifier configuration... but it still seems to me that the "instantaneous" part simply has to do with the step response.
Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
mystran
KVRAF
Posted: Fri Jun 15, 2012 1:24 pm
Here are some alternative approaches to solving the instant instability without modifying the system. I didn't really prototype any of them, so they are really just in the category of educated guesswork:

Approach 1. Suppose that the solution is a/b (with b > 0 for stable solutions) and the bounds (e.g. supply rails or whatever) are [-c,c]. Then one can cap the prediction to the bounds by using sign(a)*c (instead of a/b) if |a|>b*c. For an iterative solver you should be able to arrive at b>0 for the next iteration. Even for an explicit solver (e.g. a single iteration, or even none at all) you get something usable.

Approach 2. Can't remember where I picked this one up, but you can introduce some artificial damping when calculating the initial predictions. Reduce the damping on each iteration, such that in the limit (when the iteration hopefully converges) you have a solution with no additional damping, and it is bounded iff the true solution is bounded.

Approach 3. Make your solver work with projective algebra (I hope that's what it's called).

It's also possible I still don't properly understand the issue.
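Approach 1 above might look something like this (a sketch with the names a, b, c as in the description):

```python
import math

def bounded_solution(a, b, c):
    """Approach 1 sketch: the raw solution is a/b (b > 0 when all is
    well) and the output is known to live in [-c, c].  If a/b would
    escape the bounds (or b <= 0), return sign(a)*c instead."""
    if abs(a) > b * c:
        return math.copysign(c, a)
    return a / b
```

Note that the test |a| > b*c also catches b <= 0, i.e. the instantaneously unstable case, without ever evaluating the singular division.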
Joined: 11 Feb 2006  Member: #97939  Location: Helsinki, Finland
Z1202
KVRist
Posted: Fri Jun 15, 2012 11:03 pm
mystran wrote:
It's also possible I still don't properly understand the issue.
It's also possible that neither do I. However, what I was trying to point out is that the digital instantaneous instability may simply correspond to the fact that trapezoidal integration is not precise enough, while in the analog case the system is (linearly) unstable, but not instantaneously. Therefore it's not simply about keeping the digital system from instantaneous explosion (which is relatively straightforward if the system contains saturators), but also about trying to provide some reasonable approximation of the analog case. That's what I was trying to achieve with the mentioned "unstable BLT".
Joined: 11 Apr 2002  Member: #2472
urosh
KVRist
Posted: Sat Jun 16, 2012 5:19 am
mystran wrote:
What I was referring to was that if you have a theoretically ideal OPA without a finite slew rate and you try to configure it as an inverting amplifier (or whatever), then at some sufficiently high frequency the transmission delay from the output pin to the input pin will become significant and the thing won't work anymore (it would probably go oscillating from one rail to the other). This seems similar to the instantaneous instability case; at sufficiently high frequencies it appears to me that the open-loop gain MUST go down or you can't really use it as an OPA (in the normal sense) anymore. [edit: does this make sense? am I making some mistake here?]

No, you are right. There is never ever ever an instantaneous blowup, or instantaneous anything, in the analog domain.
However, Vadim was examining a different case (at least as I understand it). It's the situation where you apply positive feedback in a linear continuous-time system. Instantaneous instability is actually an expected consequence of the idealized continuous model: if you have a system described as y(t) = x(t) + a*y(t) and you increase a from 0, y(t) will blow up (actually become undefined) when a reaches 1. So it's not a behavior of the discrete model, it's in the nature of the linear continuous model. As I see it, it's not a problem of numerical solving; it's in the very nature of the continuous system being solved. The system blows up, so if the numeric solution is exact it has to blow up as well.
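The blow-up in the y(t) = x(t) + a*y(t) example above can be seen directly from the closed-form solution y = x/(1-a) (a sketch):

```python
def instant_feedback(x, a):
    """Closed-form solution of y = x + a*y, i.e. y = x/(1 - a).
    Singular (instantaneously unstable) at a = 1."""
    if a == 1.0:
        raise ZeroDivisionError("1 - a = 0: instantaneously unstable")
    return x / (1.0 - a)
```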
In particular, Vadim examined one special case where positive real poles pop up when you apply positive feedback, so he developed an s->z mapping that handles such a situation. Well, that's how I understood the issue (and quite possibly I didn't understand it).
Anyhow, the unstable BLT has some interesting properties. It handles positive real poles of the continuous model; there is no numerical method I'm aware of that handles this case. Also, it maps some part of the positive half-plane into the unit circle (so I think I can say it's L-stable in stiffness terminology). Also, a pole at s=0 is mapped into a stable (bounded) response. So it could come in handy in some cases. For instance, if I understand correctly, your proposal that started this thread is to treat the nonlinear system as a linear system with modulated coefficients and then solve the linearized version of the system. If the linearized system has some poles shifted into the right half-plane, you have to handle that situation somehow.
BTW, your first approach is the introduction of a nonlinear element to handle the pathological case (a hard clipper, because absolute bounds for the variable are introduced). The second approach sounds very very very interesting; however, if I understood everything correctly, the problem is in the continuous system, not in the approximation. Regarding projective algebra, I have no clue what that is.

BTW, here is one interesting interpretation of the unstable BLT. When you look at the mapping you see that the integrator (1/s) is mapped into a lossy discrete integrator (the gain at f=0 is not infinite). What's interesting is that "real" (electronic) integrators actually have the same property. For instance, take the basic opamp-based integrator. Usually you say that the transfer function is H(s) = 1/sRC. This is based on the assumption that the opamp has infinite gain (an assumption used a lot, and quite accurate for most situations). If the opamp has finite gain A, the transfer function is A/(1+(1+A)*sRC). As you can see, this integrator has similar properties to the unstable BLT integrator.
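That claim is easy to check numerically (a sketch; the gain and component values are arbitrary, and the inverting sign convention is ignored): with H(s) = A/(1+(1+A)sRC) the DC gain is A, finite, while well above the leakage corner the response matches the ideal 1/(sRC) closely:

```python
import math

def finite_gain_integrator(s, A=1000.0, R=1e4, C=1e-7):
    """Opamp integrator with finite open-loop gain A:
    H(s) = A / (1 + (1 + A)*s*R*C)."""
    return A / (1.0 + (1.0 + A) * s * R * C)

def ideal_integrator(s, R=1e4, C=1e-7):
    """The A -> infinity limit: H(s) = 1/(sRC)."""
    return 1.0 / (s * R * C)
```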

Sidenote: of course, there is never ever ever an instantaneous blowup in the analog domain. When you analyze the behavior of some circuit you think about voltages and currents. Both are actually works of fiction (voltage is a real thing in EM field theory, the difference of field potentials at two points, but it is defined as a property of a 3D field; carriers move in a conductor, but not in the way the concept of current would imply; in particular, no carriers ever flow through a capacitor, or you have one crappy capacitor). The thing is, the behavior of, let's say, a resistor is governed by the Maxwell equations for the EM field. Working with Maxwell would get you nowhere in electronics, but some smart guys long ago figured out that you can introduce some concepts that make things much, much simpler. So you can say that you have this current that flows through a piece of long conductor, and that a property of this piece called resistance determines the voltage between the ends of the conductor. And this works quite well actually, as long as you follow some constraints. One of them is that the concept holds as long as the wavelengths you are dealing with are large compared to the physical dimensions of the circuit. If they are not, you get a circuit with distributed parameters: you suddenly don't have nice conductors, so you have to introduce the concept of transmission lines. And sometimes (when doing really, really high-speed design) even that is not good enough, so you have to fire up one of the packages that actually analyze the circuit (in the form of a physical PCB model) by solving the EM field equations. The point is that you never ever have instantaneous behavior of anything; at the end you will bump against the speed of light at least, but in real circuits lag elements start to appear much before that.
Joined: 03 Oct 2002  Member: #3997