Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?

soundmodel
 KVRian
 644 posts since 28 May, 2010 from Finland
Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
I want to understand:
What mathematical features of filters (e.g. monotonicity of the coefficients) make them suited to real-time modulation?
How does one analyze filters for the required features?
But I also want to understand things such as "filter instability" and how such filters are implemented in real time (does one apply parameter changes per buffer, or what)?

There's another thread at dsp.SE:
https://dsp.stackexchange.com/q/49781/16003

vortico
 KVRist
 180 posts since 19 Jul, 2008
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
In general you throw away all theory of stability when the filter coefficients are time-dependent. There's very little research on this topic, because the largest application is the very small field of DSP audio synthesis. If you're only interested in applications, most people forget about digital filter design and look toward analog circuit modeling, taking into account the saturation of analog components. Usually this method gives you mathematical stability, so you only need to consider numerical stability.
If you want to develop a theory of time dependent filters, construct a toy filter model and try to analytically compute its impulse response from an impulse at a given t. If you can prove that it converges to 0 for all t, the filter is stable. Add parameters to your model and try to generalize the result to determine which parameters give stability.
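The toy-model approach above can be sketched numerically. A minimal, hypothetical example (not from the post): the one-pole recursion y[n] = a[n]·y[n-1] + x[n] with a time-varying coefficient a[n]. If |a[n]| stays below some bound less than 1 for all n, an impulse injected at any time t decays to zero:

```python
import math

def impulse_response(a_seq, t, n_total):
    """Response of y[n] = a[n]*y[n-1] + x[n] to a unit impulse injected at time t."""
    y = 0.0
    out = []
    for n in range(n_total):
        x = 1.0 if n == t else 0.0
        y = a_seq[n] * y + x
        out.append(y)
    return out

# Time-varying but uniformly contractive coefficient: |a[n]| <= 0.95 < 1.
N = 2000
a_seq = [0.5 + 0.45 * math.sin(0.01 * n) for n in range(N)]

# Inject impulses at several times t; the tail decays toward 0 in every case.
for t in (0, 500, 1500):
    print(t, abs(impulse_response(a_seq, t, N)[-1]))
```

If |a[n]| is allowed to exceed 1 for stretches, the same experiment can diverge, which is the point of the exercise: stability must hold for impulses injected at every t, not just t = 0.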
VCV Rack open-source virtual modular synthesizer

soundmodel
 KVRian
 644 posts since 28 May, 2010 from Finland
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
vortico wrote: In general you throw away all theory of stability when the filter coefficients are time-dependent. There's very little research on this topic, because the largest application is the very small field of DSP audio synthesis. If you're only interested in applications, most people forget about digital filter design and look toward analog circuit modeling, taking into account the saturation of analog components. Usually this method gives you mathematical stability, so you only need to consider numerical stability.

I'm confused about whether there's any difference between mathematical stability and numerical stability.
vortico wrote: If you want to develop a theory of time dependent filters, construct a toy filter model and try to analytically compute its impulse response from an impulse at a given t. If you can prove that it converges to 0 for all t, the filter is stable. Add parameters to your model and try to generalize the result to determine which parameters give stability.

You mean prove that the impulse response function F(t) converges when t → +∞?
However, why does this mean stability? What does filter stability mean?
I've thought that it means that the filter doesn't "spike" randomly between parameter changes, but rather all parameter changes make "smooth" changes to the filter response.
Even if the system is bounded, that doesn't tell us whether the system behaves "smoothly" between parameter changes.

matt42
 KVRian
 1057 posts since 9 Jan, 2006
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
Problems with parameter modulation in discrete filters usually arise from problems with the filter state. Take transposed form II filters, for example. They can be perfectly stable at any parameter setting (assuming the corresponding linear analog filter is stable) and behave very well numerically. However, modulating the parameters mangles the filter's states: essentially, the states are only valid for the previous parameter settings corresponding to the delay of each state. This is bad and will introduce artefacts and instability. The problem gets worse the higher the order of a single filter structure and the higher the order of the delays in the filter states.
On the other end of the spectrum you have simple structures, such as trapezoidal integrators used in TPT, ZDF, circuit modelling, etc. These structures might only have a single filter state which is much more robust and probably more closely models analog topology of circuits known to behave well when modulated.
Regarding classic dsp vs circuit modelling: classical tends to transform the system as a whole, whereas circuit modelling tends to transform individual components, down to the level of integrators, and wire them together. Both approaches are governed by the same underlying rules, though the modelling approach is more likely to approximate the analog modulated behaviour.
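As a concrete illustration of such a single-state structure, here is a minimal sketch (assuming a Zavalishin-style TPT one-pole lowpass; the code and names are hypothetical, not from this thread). The only state is the integrator state s; the cutoff enters solely through the gain g, which can be recomputed on every sample:

```python
import math

class TptOnePoleLP:
    """One-pole lowpass built on a trapezoidal (bilinear) integrator.

    The single piece of state is the integrator state s; the cutoff enters
    only through g and may be changed on every sample without touching s.
    """
    def __init__(self, fs):
        self.fs = fs
        self.s = 0.0  # integrator state

    def process(self, x, fc):
        g = math.tan(math.pi * fc / self.fs)  # prewarped cutoff gain
        G = g / (1.0 + g)                     # zero-delay feedback loop resolved
        v = G * (x - self.s)
        y = v + self.s                        # lowpass output
        self.s = y + v                        # trapezoidal state update
        return y

# DC settles to the input level even with an abrupt cutoff jump mid-stream.
lp = TptOnePoleLP(48000.0)
y = 0.0
for n in range(20000):
    y = lp.process(1.0, 200.0 if n < 10000 else 8000.0)
print(y)  # approximately 1.0
```

Because the state is a physically meaningful integrator value rather than a coefficient-dependent partial sum, it stays valid when g changes.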

Z1202
 KVRian
 958 posts since 12 Apr, 2002
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
matt42 wrote: Regarding classic dsp vs circuit modelling: classical tends to transform the system as a whole

I'd say classical tends to transform just the transfer function, which is hardly a "system as a whole", as it encodes only the LTI part of the behavior, and that's the root of the problem. Also, direct forms tend to exhibit numerical issues; no idea why they are a part of every DSP textbook, or at least why usually no alternatives are presented. AFAIK, even if we stay away from ZDF/TPT (although, why would we?), there are other discrete-time topologies featuring at least better numerical properties.

mystran
 KVRAF
 4983 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
Z1202 wrote: Also direct forms tend to exhibit numerical issues, no idea why they are a part of every DSP textbook, or at least why usually no alternatives are presented.

Probably because 99.9% of the DSP market just couldn't care less: they design the filter once, evaluate its performance in simulation, burn the coefficients into ROM (or directly into silicon) and are done with it. If you can save a transistor or two by using fewer multipliers or fewer bits, then all the better. So you have developers (and their bosses) asking for faster and cheaper filters, and the research community and textbook authors trying to deliver.
So you end up with more research on how to optimise filters in the presence of coefficient truncation to 4 bits, or implemented completely without multipliers, or whatever else you have, rather than research on structures that satisfy some hippies with golden ears when they subject their filters to more abuse than most people in the conventional DSP community can even imagine.
We're the 0.1% of the DSP market with specialised needs: modulating filters on the fly and maintaining insane SNR over the whole Nyquist bandwidth while at the same time intentionally distorting the signals to death, even if it means throwing some extra CPU at the problem. You just don't see these kinds of requirements very much outside musicdsp... and even in our field some people just run away as soon as you tell them it's going to take a few (or many) extra clock cycles to do it properly.
So like.. I wouldn't blame the textbook authors too much... especially considering this stuff only got popular fairly recently even within this specialised field (and even then mostly in the actual industry, as it seems like even the academics specialising in musicdsp still mostly haven't got around to catching up).
If you'd like Signaldust to return, please ask Katinka Tuisku to resign.

vortico
 KVRist
 180 posts since 19 Jul, 2008
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
soundmodel wrote: I'm confused about whether there's any difference between mathematical stability and numerical stability.

Yes. Mathematical stability involves the stability of the exact function itself. A system is stable if its solution f(t) for all initial conditions does not diverge. Numerical stability involves the stability of the solution as the initial condition is varied by a small variation (e.g. an infinitesimal). Does the variation blow up? This is useful to consider because floating-point quantization introduces small errors upon every calculation. Can the system recover from these small errors, or does the solution lie on an unstable equilibrium?
soundmodel wrote: You mean prove that the impulse response function F(t) converges when t → +∞? However, why does this mean stability? What does filter stability mean? I've thought that it means that the filter doesn't "spike" randomly between parameter changes, but rather all parameter changes make "smooth" changes to the filter response. Even if the system is bounded, that doesn't tell us whether the system behaves "smoothly" between parameter changes.

Yes, proving that the impulse response does not diverge for all time proves mathematical stability.
An example you might be familiar with from linear filter design: if all your poles lie within the complex unit circle, your filter is mathematically stable. But if you try iterating a 200th-order IIR filter with all poles inside the circle, your solution can blow up anyway, because you'd need far more than ~10 decimal digits of precision for the errors to be dampened enough to remain numerically stable. This is why it's common to cascade multiple biquads in your signal chain rather than implement one high-order filter directly.
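The pole check for a single biquad section can be made concrete. A hedged sketch (the function names are hypothetical): for a denominator 1 + a1·z^-1 + a2·z^-2, mathematical stability means both roots lie inside the unit circle, which is equivalent to the classic "stability triangle" conditions |a2| < 1 and |a1| < 1 + a2:

```python
import cmath

def biquad_stable(a1, a2):
    """True iff both poles of 1 + a1*z^-1 + a2*z^-2 lie inside the unit circle."""
    disc = cmath.sqrt(a1 * a1 - 4.0 * a2)
    p1 = (-a1 + disc) / 2.0
    p2 = (-a1 - disc) / 2.0
    return abs(p1) < 1.0 and abs(p2) < 1.0

def biquad_stable_triangle(a1, a2):
    """Equivalent closed-form test: the 'stability triangle'."""
    return abs(a2) < 1.0 and abs(a1) < 1.0 + a2

# A resonant but stable pole pair near the unit circle, and an unstable one:
print(biquad_stable(-1.8, 0.95), biquad_stable_triangle(-1.8, 0.95))   # True True
print(biquad_stable(-2.1, 1.05), biquad_stable_triangle(-2.1, 1.05))   # False False
```

The triangle form is handy at modulation time because it needs no root finding, but note it says nothing about transients caused by changing a1, a2 on the fly.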

soundmodel
 KVRian
 644 posts since 28 May, 2010 from Finland
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
vortico wrote: Yes, proving that the impulse response does not diverge for all time proves mathematical stability.

Yes, but if you read what I asked, then to me "does not diverge" doesn't guarantee artifact-free, smooth transitions between e.g. different cutoffs. I believe this can also be called "stability", but of a different sort.
In this thread, I'm particularly asking about the "smooth changes" kind of stability.

vortico
 KVRist
 180 posts since 19 Jul, 2008
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
That's not stability; you're looking for the ability to prove that the system has a continuous response to a parameter. That is actually much easier.
To clarify, you don't care about time-dependent filters, correct? You only care about static filters parameterized by, say, some cutoff f_c? For LTI filters this is easy. You have a set of filters with transfer functions H_{f_c}(f). Your goal is then to prove pointwise continuity in f_c: for every value of f, the map from f_c to H_{f_c}(f) is a continuous one-dimensional function. For nonlinear filters this is more difficult. What type of filters are you dealing with?
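That pointwise-continuity check is easy to probe numerically. A sketch (assuming RBJ-cookbook-style lowpass coefficients, which this thread doesn't specify): evaluate |H| at a fixed frequency for two nearby cutoffs and confirm the magnitudes are close:

```python
import cmath
import math

def rbj_lowpass(fc, fs, q):
    """RBJ-cookbook-style lowpass biquad, a0-normalized: returns (b, a)."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    c = math.cos(w0)
    a0 = 1.0 + alpha
    b = [(1.0 - c) / 2.0 / a0, (1.0 - c) / a0, (1.0 - c) / 2.0 / a0]
    a = [-2.0 * c / a0, (1.0 - alpha) / a0]
    return b, a

def magnitude(b, a, f, fs):
    """|H| at frequency f for H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)."""
    z1 = cmath.exp(-2j * math.pi * f / fs)
    num = b[0] + b[1] * z1 + b[2] * z1 * z1
    den = 1.0 + a[0] * z1 + a[1] * z1 * z1
    return abs(num / den)

fs = 48000.0
m1 = magnitude(*rbj_lowpass(999.0, fs, 0.707), f=500.0, fs=fs)
m2 = magnitude(*rbj_lowpass(1001.0, fs, 0.707), f=500.0, fs=fs)
print(m1, m2, abs(m1 - m2))  # the two magnitudes are very close
```

As later posts point out, this only shows the static responses vary continuously with the cutoff; it says nothing about what the state does during the change.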

mystran
 KVRAF
 4983 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
vortico wrote: To clarify, you don't care about time-dependent filters, correct? You only care about static filters parameterized by, say, some cutoff f_c? For LTI filters this is easy. You have a set of filters with transfer functions H_{f_c}(f). Your goal is then to prove pointwise continuity in f_c. For nonlinear filters this is more difficult. What type of filters are you dealing with?

I'm not convinced I'm following this logic. I'm pretty sure what is actually desired is that when you change the parameters on the fly, there are no "ugly transients" in the output, which turns out to be quite tricky to define mathematically. Any sort of LTI analysis is essentially useless, because this is really a property of the state evolution under modulation... but beyond that I've never been able to really put together what the exact condition would be.
In general anything built out of consistent numerical integrators (i.e. ZDF filters) will usually behave "smoothly" in this sense... but it's not actually necessary to use numerical integrators, since it's also possible to have other structures, like normalised ladders, that also behave just fine.
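The difference is easy to demonstrate in a sketch (assuming RBJ-style biquad coefficients and a Zavalishin-style TPT state-variable filter; neither implementation is from this thread): run a sine through a transposed direct form II biquad and through a TPT SVF lowpass, jump the cutoff mid-stream, and compare the largest sample-to-sample steps in the two outputs:

```python
import math

def rbj_lp(fc, fs, q=0.707):
    """RBJ-cookbook-style lowpass coefficients (b0, b1, b2, a1, a2), a0-normalized."""
    w0 = 2.0 * math.pi * fc / fs
    alpha = math.sin(w0) / (2.0 * q)
    c = math.cos(w0)
    a0 = 1.0 + alpha
    return ((1.0 - c) / 2.0 / a0, (1.0 - c) / a0, (1.0 - c) / 2.0 / a0,
            -2.0 * c / a0, (1.0 - alpha) / a0)

class Tdf2Biquad:
    """Transposed direct form II: the two states implicitly assume the
    coefficients that were active on previous samples."""
    def __init__(self):
        self.s1 = 0.0
        self.s2 = 0.0

    def process(self, x, b0, b1, b2, a1, a2):
        y = b0 * x + self.s1
        self.s1 = b1 * x - a1 * y + self.s2
        self.s2 = b2 * x - a2 * y
        return y

class TptSvfLP:
    """Zavalishin-style TPT state-variable filter, lowpass output."""
    def __init__(self):
        self.ic1 = 0.0
        self.ic2 = 0.0

    def process(self, x, fc, fs, q=0.707):
        g = math.tan(math.pi * fc / fs)
        k = 1.0 / q
        a1 = 1.0 / (1.0 + g * (g + k))
        v1 = a1 * (self.ic1 + g * (x - self.ic2))
        v2 = self.ic2 + g * v1
        self.ic1 = 2.0 * v1 - self.ic1
        self.ic2 = 2.0 * v2 - self.ic2
        return v2

fs = 48000.0
bq, svf = Tdf2Biquad(), TptSvfLP()
max_step_bq = max_step_svf = 0.0
prev_bq = prev_svf = 0.0
for n in range(4000):
    x = math.sin(2.0 * math.pi * 220.0 * n / fs)
    fc = 8000.0 if n < 2000 else 150.0   # abrupt cutoff drop mid-stream
    yb = bq.process(x, *rbj_lp(fc, fs))
    ys = svf.process(x, fc, fs)
    if n > 0:
        max_step_bq = max(max_step_bq, abs(yb - prev_bq))
        max_step_svf = max(max_step_svf, abs(ys - prev_svf))
    prev_bq, prev_svf = yb, ys
print(max_step_bq, max_step_svf)  # the biquad's largest step is far bigger
```

The biquad's states were "sized" for the old coefficients, so the jump excites a large ringing transient, while the SVF's integrator states remain meaningful across the change.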

vortico
 KVRist
 180 posts since 19 Jul, 2008
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
Well, that would be a question of stability then. I think soundmodel's question is very ill-defined, so I honestly have no idea what he's asking. Yes, if there is a time-dependent filter parameter, then you can't use LTI analysis. But his statement "guarantee artifact-free, smooth transitions" leads me to believe that he doesn't care about the time-dependent part, but only about continuity with respect to a cutoff parameter. Knowing this will give an approximate answer to the stability of the filter under sufficiently slow-moving modulation. Continuity is useful to show that a filter doesn't go *pop* when the user moves a knob from 999 Hz to 1001 Hz, which may be his question, but I'm not sure.

Z1202
 KVRian
 958 posts since 12 Apr, 2002
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
vortico wrote: Continuity is useful to show that a filter doesn't go *pop* when the user moves a knob from 999 Hz to 1001 Hz, which may be his question, but I'm not sure.

I don't think that continuity of the transfer function can give any answer to this. To begin with, pretty much any transfer function is a continuous function of the cutoff. Secondly, the transfer function doesn't contain any information about the system's behavior under modulation.

mystran
 KVRAF
 4983 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
Actually... upon further thinking... it's possible to build analog filters that don't behave nicely either (even if none of the common filters really have this problem), so I guess numerical integration on its own doesn't really do much.
Basically, if you can predict the phase and amplitude (or phases and amplitudes, in case there are several resonant modes) of the filter by inspecting the state variables, without knowing the coefficients for the poles, then it's probably going to behave just fine. Generally you get problems where changing the coefficients changes the phase of the resonances... but I honestly still don't know how to put that nicely into a mathematical condition.

mystran
 KVRAF
 4983 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
vortico wrote: Yes, if there is a time dependent filter parameter, then you can't use LTI analysis. But his statement "guarantee artifact-free, smooth transitions" leads me to believe that he doesn't care about the time-dependent part, but only continuity with respect to a cutoff parameter.

They are almost certainly asking for a condition for the commonly desired feature of musical filters that parameters can be "artistically adjusted on the fly" (i.e. subjected to more or less arbitrary abuse) without ever getting any nasty clicks, pops, or large transient variations in output amplitude.
Most musically useful analog filters have this property and it's generally preserved by the "ZDF" techniques, whereas most traditional digital filters just fail miserably... but putting "musical behavior" into a nice mathematical condition turns out to be slightly tricky.
edit: it should also be mentioned that most of the papers I've seen that touch this issue simply say something along the lines of "don't have transient artifacts" and then visually inspect the results over a change of parameters, without ever trying to formulate any kind of provable condition.. but you generally always want this property if you intend to change the coefficients in the middle of processing some audio

Z1202
 KVRian
 958 posts since 12 Apr, 2002
Re: Practical concerns when implementing “dynamically recalculable” filters in audio/musical equalizers?
mystran wrote: Actually... upon further thinking... it's possible to build analog filters that don't behave nicely either (even if none of the common filters really have this problem), so I guess numerical integration on its own doesn't really do much.

If we are talking about cutoff modulation, then, given that the cutoff gains precede the integration, it's equivalent to warping the time axis, which is probably good enough to guarantee smooth behavior.
Edit: for modulating other parameters, just off the top of my head, one could try preserving the energy in the filter. I would then try defining the energy as the squared Euclidean length of the state vector in the diagonalizing basis of the system. This still leaves an ambiguity in the basis vector lengths. In principle we could use the real diagonal form, in which case conjugate basis vectors will be paired into real Jordan cells of 2nd order. I guess their lengths can be normalized in such a way that the energy contribution from each such cell decays exponentially in the absence of an input signal (edit: actually I think it will, with the default Jordan normalization). The basis vector lengths between different Jordan cells can be matched against the decay speeds corresponding to the respective eigenvalues, so that the energy contributions from those decay according to the real parts of the respective poles. This is probably even consistent with the previous mention of cutoff modulation. Not sure. Just random thoughts.
Edit2: this definition most likely has a problem if the poles are close to each other. Or maybe it doesn't...
Edit3: maybe we can't match the lengths of unrelated eigenvectors, since their energy contributions will decay exponentially at the correct speed no matter what. At least that should work for a 2-pole. For higher numbers of poles one could try considering a set of parallel 2-poles (again, with a problem in the case of close poles), or serial 2-poles (in which case the order of connection might matter).