Cheap non-linear zero-delay filters

DSP, Plugin and Host development discussion.

Post

urosh wrote:
mystran wrote:Trapezoidal shares this property (A-stability), but additionally maps unstable poles to unstable poles, which is important when you want self-oscillation.
I have this system of two diff. equations (you could say it's a cousin of a filter) that bursts into chaotic behaviour when you try to solve it numerically, and the continuous system is most certainly not chaotic. I wouldn't mind chaotic behaviour, but with some parameter values it grinds to a halt (stabilises at some constant value).
BTW poles and frequency response have meaning for dominantly linear systems. If you have a system with strong nonlinearity, I guess you would have to envision some different way to characterise the desired behaviour of the discrete system.
I suspect your problem is that since TR is implicit, you can only solve it "exactly" for a strictly linear system or given an oracle (ie you can certainly evaluate TR for systems if you know the exact solution in advance). Naturally if you can find a convergent iterative solution, you can get arbitrarily close. In any case you have to consider the errors of any approximations and how they might affect the system in question.

As for implicit Euler, since it maps significant parts of the right half-plane of s to stable poles, it might not go unstable from errors that would push the poles into oscillation with the trapezoidal method.
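
A quick numeric sketch of that point (arbitrary numbers, just for illustration): take an unstable analog pole at s = +a. Once a*h gets large, backward (implicit) Euler can map it inside the unit circle, i.e. the discrete model quietly stops self-oscillating, while trapezoidal keeps it outside:

```python
# Illustrative only: where an unstable analog pole s = +a lands in the z-plane
# under backward Euler vs the trapezoidal rule, for a few values of a*h.
for ah in (0.1, 3.0, 10.0):                   # step size times the growth rate a
    z_be = 1.0 / (1.0 - ah)                   # backward Euler pole
    z_tr = (1.0 + ah / 2) / (1.0 - ah / 2)    # trapezoidal pole
    print(f"a*h = {ah:5.1f}   |z_BE| = {abs(z_be):6.3f}   |z_TR| = {abs(z_tr):6.3f}")
```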

Post

Richard_Synapse wrote:
mystran wrote:Considering the cost of calculating either is pretty much the same, I can't really see any property whatsoever that would favor Implicit Euler.
Since Trapezoidal is a combination of Forwards and Backwards Euler, I suppose it can blow up when trying to solve stiff systems.
Nah, it's sufficient for a method to be A-stable, since that guarantees that anything stable gets mapped to something stable.

My understanding is that for practical purposes any normal filter is stiff. Even for a simple one-pole we're forced to use a step-size (ie samplerate) such that the z-plane poles stay within the unit circle. Normally in DSP one would map the s-plane pole 1/(s+a) to the z-plane pole 1/(1-exp(-a)*z^-1) using impulse invariance or something similar, which eliminates the problem.

[edit: actually I need to think about this a bit more.. it appears to be more complicated than it first seems.. but I'm still fairly confident that pretty much any filter can be safely considered stiff.. anyway the "A-stability" is still sufficient to get rid of the problem].
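
For concreteness, here's a small sketch (illustrative numbers, not from anyone's code) of where a stable one-pole y' = -a*y ends up in the z-plane under different discretizations when the time constant is much shorter than the sample period:

```python
import math

# Illustrative sketch: where the pole of a stable one-pole y' = -a*y lands in
# the z-plane under different discretizations.  A stable analog pole (a > 0)
# should map inside the unit circle.
def z_poles(a, T):
    return {
        "forward Euler":     1.0 - a * T,                          # explicit, can leave the unit circle
        "backward Euler":    1.0 / (1.0 + a * T),                  # implicit, always inside for a > 0
        "trapezoidal":       (1.0 - a * T / 2) / (1.0 + a * T / 2),  # always inside for a > 0
        "impulse invariant": math.exp(-a * T),                     # exact pole mapping
    }

# "Stiff" setting: the time constant 1/a is much shorter than the sample period.
for name, p in z_poles(a=2.0e5, T=1.0 / 44100.0).items():
    print(f"{name:17s}  pole = {p:+.4f}   stable: {abs(p) < 1.0}")
```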

Post

Actually, the whole stiffness thing sucks: basically I guess you can't even claim that a filter is "stiff"; rather, it's the ODE that implements the filter that is stiff. That is, you can formulate a solution in the form of implicit Euler that is stable, but if you apply implicit Euler to the ODE directly it might blow up... or something. :P

Post

http://www.scholarpedia.org/article/Stiff_Systems seems like a pretty good discussion about the whole thing (and less confusing than the Wikipedia version).

Post

mystran wrote:Nah, it's sufficient for a method to be A-stable, since that guarantees that anything stable gets mapped to something stable.
We don't always know the exact stability properties. For complex nonlinear systems that might be tough to derive, but I'm no expert on this. FWIW there was a DAFX paper with a Moog ladder stability analysis recently (haven't read it though).

Richard
Synapse Audio Software - www.synapse-audio.com

Post

Richard_Synapse wrote:
mystran wrote:Nah, it's sufficient for a method to be A-stable, since that guarantees that anything stable gets mapped to something stable.
We don't always know the exact stability properties. For complex nonlinear systems that might be tough to derive, but I'm no expert on this. FWIW there was a DAFX paper with a Moog ladder stability analysis recently (haven't read it though).

Richard
Actually we often (I'd say usually) don't know them even for linear systems if we take time-varying aspects into account. The classical stability criterion is developed only for the LTI case, which assumes both linearity and time-invariance. There is a sufficient stability criterion for the time-varying linear case, but typically its condition doesn't seem to be fulfilled for music DSP filters (not sure, I only briefly tried to analyse these matters once).

Regards,
{Z}

Post

miedex wrote:@Vadim just curious whether you or Stephan might ever update the Reaktor Advanced Filters package in the Core User Library with the new filter design?
Frankly speaking, I'd rather people use my book/articles and the accompanying Reaktor Core tutorial and do this themselves. I don't exactly feel that providing all variations of these filters is my mission :)

Edit: actually, I'm not sure if the original question is a misunderstanding. The Advanced Filters package is a collection of TPT filters. I thought you were referring to making another version based on mystran's idea for solving for nonlinearities.

Post

Z1202 wrote:
Richard_Synapse wrote:
mystran wrote:Nah, it's sufficient for a method to be A-stable, since that guarantees that anything stable gets mapped to something stable.
We don't always know the exact stability properties. For complex nonlinear systems that might be tough to derive, but I'm no expert on this. FWIW there was a DAFX paper with a Moog ladder stability analysis recently (haven't read it though).

Richard
Actually we often (I'd say usually) don't know them even for linear systems if we take time-varying aspects into account. The classical stability criterion is developed only for the LTI case, which assumes both linearity and time-invariance. There is a sufficient stability criterion for the time-varying linear case, but typically its condition doesn't seem to be fulfilled for music DSP filters (not sure, I only briefly tried to analyse these matters once).
I believe it doesn't matter whether you consider non-linear or time-varying problems, because you can always treat a non-linear problem as a time-varying problem (that's essentially what my proposed method does). Similarly you can convert a time-varying problem into a ("time invariant") non-linear (multiple-inputs) problem just by implementing the control signals as part of the system.

edit: essentially this means that something either is or isn't LTI... and if it isn't LTI, it doesn't matter much how exactly it violates the LTI assumptions
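
To make the "non-linear as time-varying" idea concrete, here's a minimal sketch: a tanh one-pole where the output nonlinearity is frozen into a gain for each step. The specific filter, names and parameters are made up for illustration, and this isn't a transcription of the method discussed earlier in the thread:

```python
import math

def tanh_gain(y, eps=1e-12):
    # tanh(y)/y, with its limit value 1 at y == 0
    return math.tanh(y) / y if abs(y) > eps else 1.0

def nl_onepole(x, wc=2 * math.pi * 1000.0, fs=44100.0):
    # y' = wc * (tanh(x) - tanh(y)), with tanh(y) treated as g*y for each step
    G = wc / (2 * fs)              # trapezoidal half-step gain, wc*T/2
    y, x_prev, out = 0.0, 0.0, []
    for xn in x:
        g = tanh_gain(y)           # time-varying gain estimated from the previous state
        u = math.tanh(x_prev) + math.tanh(xn)
        y = (y * (1.0 - G * g) + G * u) / (1.0 + G * g)   # plain linear trapezoidal step
        out.append(y)
        x_prev = xn
    return out

# quick smoke test: a loud sine comes out visibly compressed
sig = [1.5 * math.sin(2 * math.pi * 100.0 * n / 44100.0) for n in range(512)]
print(max(abs(v) for v in nl_onepole(sig)))
```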

Post

After thinking about it for a day, my best proposal for a practical definition of stiffness is that a problem appears stiff if a stable system can become unstable when a step of the numerical solution overshoots the exact solution (and probably we also require that explicit methods actually do overshoot).

A system as simple as a single decaying exponential (ie the famous "test equation") fulfills the above. If an explicit method (eg explicit Euler) uses the derivative at y[n] to estimate y[n+1], we will overshoot the solution. If we overshoot too much (ie the step-size is too large compared to the time-constant of the decay), the system can become unstable because the derivative doesn't necessarily decay anymore.

On the other hand, an implicit solution using the derivative at y[n+1] is immune to the stability problem, because it will under-estimate the true solution. The more we under-estimate, the more the decay will "lag" (ie the time-constant becomes longer than it should be).
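
As a quick illustration of the overshoot vs lag behaviour on the test equation (arbitrary numbers, step deliberately too large):

```python
import math

a, h, y0, steps = 1.0, 2.5, 1.0, 6      # step deliberately larger than 2/a
fe = be = 1.0
for n in range(1, steps + 1):
    fe = fe + h * (-a * fe)             # forward Euler: y[n+1] = (1 - a*h) * y[n], overshoots
    be = be / (1.0 + a * h)             # backward Euler: y[n+1] = y[n] / (1 + a*h), lags
    exact = y0 * math.exp(-a * h * n)
    print(f"t={h * n:5.1f}  exact={exact:9.6f}  forward={fe:10.2f}  backward={be:9.6f}")
```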

If we use an implicit method as a corrector for an explicit predictor, we hope to reduce the overshoot at every iteration of the corrector, until we converge close enough to the true solution that the result is stable again.
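
A sketch of that predictor/corrector loop (the tolerance, iteration limit and the toy system y' = -tanh(y) are just picked for illustration):

```python
import math

def f(y):                                       # a simple stable nonlinear test system: y' = -tanh(y)
    return -math.tanh(y)

def tr_step(y, h, tol=1e-9, max_iter=20):
    fy = f(y)
    y_new = y + h * fy                          # predictor: forward Euler (may overshoot)
    for _ in range(max_iter):
        y_next = y + 0.5 * h * (fy + f(y_new))  # corrector: trapezoidal with the current guess
        if abs(y_next - y_new) < tol:           # close enough to the implicit solution
            return y_next
        y_new = y_next
    return y_new                                # gave up; the remaining error may still bite

y, h = 2.0, 0.5
for n in range(8):
    y = tr_step(y, h)
    print(f"t={h * (n + 1):4.1f}  y={y:.6f}")
```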

Anyway, if we want predictable self-oscillation, then we have the inverse problem: we don't want an unstable system to become stable (or alternatively we don't want the stable time-reversed system to become unstable). This would happen if we under-estimate too much. So ideally we would want to never overshoot (for stable systems), but never under-estimate either (for unstable systems).

It appears that the only (linear multi-step) method that always has this property is the trapezoidal method, which is A-stable in both forward and reverse time (edited; see later comments for discussion.. and yes this is a confusing definition and not very exact).

Anyway, this (assuming the above is strict enough) leads to something else that seems useful: if we can approximate (eg using a time-varying linear system) some non-linear system and integrate the approximate system in such a way that the resulting solution doesn't "overshoot" the original non-linear system, we should be immune to any stiffness problems. Alternatively we can overshoot the non-linear system, but remain stable as long as the approximation itself is stable and we don't overshoot the approximate system.

It's probably too strict to actually require that we never overshoot: if we can make sure that the overshoot is bounded (even with increasing step sizes) and this bound is strict enough to require stable systems to decay, then we can still use a method that overshoots... but I'm pretty sure there are no such linear multi-step methods. This is useful for iterative solvers though, because the iteration can be given up as soon as the remaining error is safe. [edit: this essentially converts the step-size requirement into an iteration-count requirement though, so there's probably very little to be gained CPU-wise]
Last edited by mystran on Thu Jun 14, 2012 12:05 am, edited 3 times in total.

Post

mystran wrote:...
Anyway, if we want predictable self-oscillation, then we have the inverse problem: we don't want an unstable system to become stable. This would happen if we under-estimate too much. So ideally we would want to never overshoot, but never under-estimate either.

It appears that the only (linear multi-step) solution that always has this property is the trapezoidal method: since we use the average of the end-points as the derivative, we must necessarily hit the exact solution exactly at the end of the step.
...
I'm not sure I completely follow your argument, but the trapezoidal method doesn't necessarily hit the exact solution. Particularly, if the step size is large enough, it can produce total nonsense, see the "instantaneous instability" discussion in my book. Also, if it produced the exact solution, there would have been no frequency axis warping in the BLT. Or do I completely miss your point?
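
A quick sketch of the frequency warping in question (arbitrary sample rate): the BLT maps an analog frequency wa to the digital frequency wd = 2*fs*atan(wa/(2*fs)), so high frequencies get squeezed towards Nyquist.

```python
import math

fs = 44100.0
for f_analog in (100.0, 1000.0, 5000.0, 10000.0, 20000.0):
    wa = 2 * math.pi * f_analog                 # analog angular frequency
    wd = 2 * fs * math.atan(wa / (2 * fs))      # where the BLT puts it digitally
    print(f"analog {f_analog:7.0f} Hz  ->  digital {wd / (2 * math.pi):7.1f} Hz")
```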

Post

Z1202 wrote:
mystran wrote:...
Anyway, if we want predictable self-oscillation, then we have the inverse problem: we don't want an unstable system to become stable. This would happen if we under-estimate too much. So ideally we would want to never overshoot, but never under-estimate either.

It appears that the only (linear multi-step) solution that always has this property is the trapezoidal method: since we use the average of the end-points as the derivative, we must necessarily hit the exact solution exactly at the end of the step.
...
I'm not sure I completely follow your argument, but the trapezoidal method doesn't necessarily hit the exact solution. Particularly, if the step size is large enough, it can produce total nonsense, see the "instantaneous instability" discussion in my book. Also, if it produced the exact solution, there would have been no frequency axis warping in the BLT. Or do I completely miss your point?
No, I think you're right. I'm having a "less than ideal" week and apparently it has an effect on my math as well. Sorry about that. I have to check what you mean by the "instantaneous instability" discussion. I'll give it more thought.

Post

Actually, thanks for pointing out that section; I think there might be something important there, though I'm not quite sure what it is yet. I'm starting to think it's not safe to equate the BLT with the trapezoidal method, actually (not unconditionally anyway).

edit: I think your instantaneous instability is somewhat ill-conditioned though. If you force the analog system to have finite feedback slew-rates (no matter how fast) and then take the BLT of the system, the frequency warping will push the finite slew-rates to frequencies which we can represent, and the problem goes away. Now consider that in real circuits you have finite slew-rates for everything, and as soon as you include these as part of your model you are safe.

edit2: I'd claim it's yet another case of "garbage in, garbage out"

Post

Z1202 wrote:
mystran wrote:...
Anyway, if we want predictable self-oscillation, then we have the inverse problem: we don't want an unstable system to become stable. This would happen if we under-estimate too much. So ideally we would want to never overshoot, but never under-estimate either.

It appears that the only (linear multi-step) solution that always has this property is the trapezoidal method: since we use the average of the end-points as the derivative, we must necessarily hit the exact solution exactly at the end of the step.
...
I'm not sure I completely follow your argument, but the trapezoidal method doesn't necessarily hit the exact solution.
Ok, so this isn't right, but what I probably had in mind is that trapezoidal will always under-estimate decay, and being symmetric in time it will always over-estimate growth (ie under-estimate the decay of a time-reversed system when the time-reversed system decays). In other words, it is the only method that is A-stable in both forward and reverse time (which doesn't imply it's time-reversible, because it's still not exact).
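
To spell out the "both directions" claim with a quick sketch (illustrative poles only): the trapezoidal step maps an s-plane pole to z = (1 + s*h/2)/(1 - s*h/2); stable poles land inside the unit circle, unstable ones outside, and negating s (time reversal) maps z to 1/z:

```python
h = 1.0

def tr_pole(s):
    # trapezoidal rule maps an s-plane pole to this z-plane pole
    return (1 + s * h / 2) / (1 - s * h / 2)

for s in (-0.1 + 5j, -10.0 + 0j, 0.1 + 5j, 10.0 + 0j):
    z = tr_pole(s)
    print(f"s = {s!s:>12}   |z| = {abs(z):.4f}   |tr_pole(-s)| = {abs(tr_pole(-s)):.4f} (= 1/|z|)")
```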

Post

Some random incoherent thoughts:
Z1202 wrote:Also, if it produced the exact solution, there would have been no frequency axis warping in the BLT.
If "producing right solution" means that f*(k) = f(hk) (where f is exact contionus solution of given system and f* is numerical solution) then in some situations even if above eqaution holds for all k frequecy responses would not be the same (if you take simple y = a*x + b*y you can certainly fit a and b to mach exactly time response of system described with y'=k(x-y) but time response eould be different).
/* unrelated: I have a hunch/educated guess that if we have a continuous system with a finite-order rational transfer function H(s), it is not possible to construct a finite-order discrete system that has exactly the same frequency response. Does anyone know of a proof for this claim, or is it not a valid claim at all? */

...
mystran wrote: On the other hand, an implicit solution using the derivative at y[n+1] is immune to the stability problem, because it will under-estimate the true solution. The more we under-estimate, the more the decay will "lag" (ie the time-constant becomes longer than it should be).
The time constant becomes smaller, that is, we introduce overdamping compared to the continuous system. That's why I proposed that in those situations where the discretised version "misbehaves", an L-stable solver (as opposed to a strictly A-stable one) might help to tame it down, despite the disadvantages of such a solver compared to TR, for example.

...

BTW, if "test equation" is y'=k*y (as used to describe concept of numeric solution stability) keep in mind that k is complex number to maintain generality for large class of LTIs ( I haven't followed your description throughout but I got impression that you asume monotonicity of response which is not the case in general).



And after reading all this "stiffness" stuff again, I figured out that it's one really vague topic. Basically, the part that I figured out, if I got it right: one example of a stiff system would be one with the response exp(-1000*t) + exp(-t). The fast-decaying part would throw the numerical solver off the track, because we would require a step much smaller than dictated by the slower part that dominates the response, and stiffness is the phenomenon that the solution requires a much smaller step than the local smoothness of the response would imply. But as I see it, the fast-decaying component represents a pole high above the dominant one. So either take a step small enough (that is, oversample in our lingo), or use a solver that "maps" all stable poles in the s domain into the unit circle (and that would be TR).
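
That example is easy to put numbers on (illustrative sketch): two decoupled modes, y' = -1000*y and y' = -y, summed at the output. With a step that is perfectly reasonable for the slow mode, forward Euler blows up on the fast (and by then negligible) mode, while trapezoidal is fine:

```python
import math

def simulate(h, steps=20):
    fast_fe = slow_fe = 1.0                       # forward Euler states
    fast_tr = slow_tr = 1.0                       # trapezoidal states
    for _ in range(steps):
        fast_fe *= 1 - 1000 * h                   # forward Euler, fast mode y' = -1000*y
        slow_fe *= 1 - h                          # forward Euler, slow mode y' = -y
        fast_tr *= (1 - 500 * h) / (1 + 500 * h)  # trapezoidal, fast mode
        slow_tr *= (1 - 0.5 * h) / (1 + 0.5 * h)  # trapezoidal, slow mode
    return fast_fe + slow_fe, fast_tr + slow_tr

h = 0.01    # fine for exp(-t), far too large for exp(-1000*t) under forward Euler
fe, tr = simulate(h)
print(f"exact      : {math.exp(-1000 * h * 20) + math.exp(-h * 20):.6f}")
print(f"forward    : {fe:.3e}   (blown up)")
print(f"trapezoidal: {tr:.6f}")
```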

My problem is that while we can come up with a solver that works for LTI, and works for nonlin-nonTI cases when we test it, we don't have a rigorous proof that it will work under all conditions in the nonlin-nonLTI case. And for the analysis of that situation something like this would be needed (above my head to be honest, but someone might find it useful).

BTW mystran, you are right that we only care whether something is LTI or not. Time-variant systems, as opposed to purely nonlinear ones, differ only in the number of independent inputs; the theory (at least the mind-numbing stuff that does exist) is generalised for an arbitrary number of independent inputs and system states. (In fact, I personally like to describe the nonlinearity in the ladder, for example, as "cutoff modulation by a local variable", similar to the notion of "gm modulation" you introduced. And kudos for the great, great idea to decouple that.)

Post

urosh wrote: /* unrelated: I have a hunch/educated guess that if we have a continuous system with a finite-order rational transfer function H(s), it is not possible to construct a finite-order discrete system that has exactly the same frequency response. Does anyone know of a proof for this claim, or is it not a valid claim at all? */
You can have exactly the same frequency response up to aliasing. That's the impulse invariance transform. It should be straightforward to prove that, since the sinc is infinite, you can't have exactly the same frequency response as the band-limited version of the finite-order rational transfer function H(s), except in degenerate special cases.
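
A small sketch of that (hypothetical one-pole H(s) = a/(s+a), arbitrary numbers): impulse invariance by construction samples the continuous impulse response exactly, h[n] = T*h_c(n*T), which is why the frequency responses agree up to the aliased copies of H(jw):

```python
import math

a, fs = 2 * math.pi * 1000.0, 48000.0
T = 1.0 / fs

# impulse-invariant one-pole: y[n] = exp(-a*T) * y[n-1] + T*a * x[n]
pole, gain = math.exp(-a * T), T * a
y, h_d = 0.0, []
for n in range(8):
    x = 1.0 if n == 0 else 0.0                  # unit impulse in
    y = pole * y + gain * x
    h_d.append(y)

for n, hn in enumerate(h_d):
    h_c = a * math.exp(-a * n * T)              # continuous impulse response of a/(s+a)
    print(f"n={n}  h[n]={hn:.6f}  T*h_c(nT)={T * h_c:.6f}")
```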
mystran wrote: On the other hand, an implicit solution using the derivative at y[n+1] is immune to the stability problem, because it will under-estimate the true solution. The more we under-estimate, the more the decay will "lag" (ie the time-constant becomes longer than it should be).
The time constant becomes smaller, that is, we introduce overdamping compared to the continuous system. That's why I proposed that in those situations where the discretised version "misbehaves", an L-stable solver (as opposed to a strictly A-stable one) might help to tame it down, despite the disadvantages of such a solver compared to TR, for example.
Yeah sure, it's possible that in some cases additional damping helps. Anyway, the difference between A- and L-stable is that the latter is also stable for |s|->inf. TR can't have this, because then the time-reversed problem wouldn't be A-stable (and since TR is symmetric, it wouldn't be A-stable in either direction, and then it couldn't be L-stable either, since L-stability implies A-stability).
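
In terms of the test equation y' = k*y, the per-step growth factors make the difference visible (quick illustrative sketch): backward Euler's growth factor goes to 0 as h*k -> -inf (L-stable), while trapezoidal's goes to -1, so very fast modes don't die, they ring:

```python
for hk in (-1.0, -10.0, -100.0, -1e4, -1e8):
    r_be = 1.0 / (1.0 - hk)                     # backward Euler growth per step: L-stable, -> 0
    r_tr = (1.0 + hk / 2) / (1.0 - hk / 2)      # trapezoidal growth per step: only A-stable, -> -1
    print(f"h*k = {hk:10.0e}   backward Euler: {r_be:+.6f}   trapezoidal: {r_tr:+.6f}")
```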
BTW, if "test equation" is y'=k*y (as used to describe concept of numeric solution stability) keep in mind that k is complex number to maintain generality for large class of LTIs ( I haven't followed your description throughout but I got impression that you asume monotonicity of response which is not the case in general).
Yes, in general k is complex, but it happens that it doesn't really matter much. In fact it appears that most of the time it's the opposite, and the most problematic cases are on the real axis. But that whole discussion is very incoherent and apparently contains some serious factual mistakes (there are probably more).

The whole stiffness thing sucks as a concept. Honestly.
So either take a step small enough (that is, oversample in our lingo), or use a solver that "maps" all stable poles in the s domain into the unit circle (and that would be TR).
Well, any A-stable method (ie anything that will damp the time-constants) works.
My problem is that while we can come up with a solver that works for LTI, and works for nonlin-nonTI cases when we test it, we don't have a rigorous proof that it will work under all conditions in the nonlin-nonLTI case. And for the analysis of that situation something like http://math.fau.edu/wang/pdf-file/journ ... matica.pdf would be needed (above my head to be honest, but someone might find it useful).
I'll give it a look.
