Impulse response formula for a digital IIR?

MrBeagleton
 KVRer
 16 posts since 5 Nov, 2015
I'm having trouble finding a resource for this.
Is there an explicit formula for the impulse response of a digital IIR filter? It has to include the frequency cramping near Nyquist.
Specifically for 2-poles and serial cascades thereof, if that matters.

mystran
 KVRAF
 6757 posts since 12 Feb, 2006 from Helsinki, Finland
MrBeagleton wrote: ↑Sun Dec 05, 2021 6:08 pm
I'm having trouble finding a resource for this.
Is there an explicit formula for the impulse response of a digital IIR filter? It has to include the frequency cramping near Nyquist.
What you need is the inverse z-transform of the transfer function. Ordinarily, if we transform the numerator and denominator separately as polynomials, we get the recurrence relation (aka the direct form representation), from which one can compute the impulse response numerically by simply evaluating the filter with a unit-impulse input... which I'd imagine is what people usually do when an explicit formula isn't absolutely necessary.
That said, the inverse z-transform (e.g. see https://en.wikipedia.org/wiki/Ztransfo ... form_pairs) of some rational functions is also not that complicated, and I feel like, similar to the inverse Laplace transform in the continuous case, doing partial fraction expansion on the transfer function should probably make the general case more straightforward. Not much personal experience with this in the discrete domain though, so no idea if there are some nasty pitfalls with the trigonometry.
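The numerical route described above is only a few lines: run the direct-form recurrence on a unit impulse. A minimal Python sketch (the function name `impulse_response` and the b/a coefficient convention with a[0] = 1 are my own choices here, not from the thread):

```python
import numpy as np

def impulse_response(b, a, n):
    """Impulse response of an IIR filter with transfer-function
    coefficients b (numerator) and a (denominator, a[0] == 1),
    computed by evaluating the recurrence on a unit-impulse input."""
    b = np.asarray(b, dtype=float)
    a = np.asarray(a, dtype=float)
    x = np.zeros(n)
    x[0] = 1.0                         # the unit impulse
    y = np.zeros(n)
    for i in range(n):
        acc = sum(b[k] * x[i - k] for k in range(len(b)) if i - k >= 0)
        acc -= sum(a[k] * y[i - k] for k in range(1, len(a)) if i - k >= 0)
        y[i] = acc
    return y

# One-pole sanity check: H(z) = 1 / (1 - p z^-1) has h[n] = p^n.
p = 0.5
h = impulse_response([1.0], [1.0, -p], 8)
```

For a one-pole this reproduces the geometric series p^n exactly, which is a quick way to convince yourself the recurrence is wired up correctly.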
Preferred pronouns would be "it/it" because according to this country, I'm a piece of human trash.

Music Engineer
 KVRAF
 4017 posts since 8 Mar, 2004 from Berlin, Germany
for a single biquad, i did a little writeup some time ago:
http://www.rsmet.com/documents/dsp/Tim ... Design.pdf
for a series connection of more of them, i guess, things become more complicated. you would probably need the partial fraction expansion of the full transfer function. but i don't really know. maybe it can be computed more simply

MrBeagleton
 KVRer
 16 posts since 5 Nov, 2015
Topic Starter
mystran wrote: ↑Sun Dec 05, 2021 9:20 pm
What you need is the inverse z-transform of the transfer function.
Inverse z-transform. That's the term I couldn't find.
Now I'll do my research and figure out if I can live without the explicit formula. I imagine it gets messy fast for cascaded filters.
Thank you very much.
Music Engineer wrote: ↑Mon Dec 06, 2021 3:12 am
for a single biquad, i did a little writeup some time ago:
http://www.rsmet.com/documents/dsp/Tim ... Design.pdf
That's wonderful, thanks. I've actually used one of your other papers before:
http://www.rsmet.com/documents/dsp/Dig ... ements.pdf
Didn't realize you have a whole library of them. Awesome.

Music Engineer
 KVRAF
 4017 posts since 8 Mar, 2004 from Berlin, Germany
MrBeagleton wrote: ↑Sat Dec 11, 2021 4:12 pm
That's wonderful, thanks. I've actually used one of your other papers before:
http://www.rsmet.com/documents/dsp/Dig ... ements.pdf
Didn't realize you have a whole library of them. Awesome.
ahh... yes, that one. i think i didn't even link to it from my website (and just posted a link here) because i was not yet entirely satisfied with my solution. martin vicanek came up with a different solution to this problem:
https://www.vicanek.de/articles/BiquadFits.pdf
which may actually be better because it doesn't need the implicit newton solver. i didn't try this yet, though. but i certainly will, in case i need a general magnitude matching algorithm for my codebase ...which may actually happen soon. ahh... ok, i see: martin computes the poles from the impulse-invariant formula and then does a 3-point match, whereas i do a full 5-point match. be careful though: in general, the matrix may become singular, if i remember correctly. ...although i didn't see this happen when the requirements were obtained from an analog filter. one needs to come up with some maliciously contrived requirements to break it, but breakable it is. but that's all long ago, i may misremember things.

Music Engineer
 KVRAF
 4017 posts since 8 Mar, 2004 from Berlin, Germany
MrBeagleton wrote: ↑Sat Dec 11, 2021 4:12 pm
Now I'll do my research and figure out if I can live without the explicit formula. I imagine it gets messy fast for cascaded filters.
yeah, i guess so, too. that's why i would first convert the cascade structure into a parallel structure. this is the point where you need the partial fraction expansion*. if you do this, then you can just use the given algo for each partial filter separately and add up the results. if i remember correctly, the PFE won't even lead to full biquads but rather to a bank of 2-pole-1-zero filters, such that all this additional mumbo-jumbo with the unit delay and the added unit impulse won't be needed. i think, for many typical filters, you'll get a sum of damped sinusoids as impulse response (which smells a lot like modal synthesis). perhaps with an added FIR part.
(*) i also happen to have written up an algo for that:
https://github.com/RobinSchmidt/RSMET/ ... ctions.txt
i did this because i found typical textbook methods algorithmically unattractive, i.e. inefficient, but maybe i just didn't look in the right places. it can also be done by solving a matrix equation or by the residue method, which seems to lead to an algo that is exponential in the pole multiplicity. my algorithm assumes that the poles and their multiplicities are already known (but i think other algos assume that as well). if they are not yet known, you would need to find them first, in the worst and most general case by a polynomial root finder. but if it's a cascade of biquads, you can simply compute them via the quadratic formula.
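The "sum of damped sinusoids" claim can be checked on the simplest case: for a single 2-pole with a complex-conjugate pole pair taken from the quadratic formula, the PFE collapses into one damped sinusoid in closed form. A small Python sketch (the function name and the example coefficients are my own illustrative choices):

```python
import cmath
import math

def twopole_ir_closed_form(a1, a2, n):
    """Closed-form impulse response of H(z) = 1 / (1 + a1 z^-1 + a2 z^-2),
    assuming a complex-conjugate pole pair. The poles come straight from
    the quadratic formula applied to z^2 + a1 z + a2 = 0; the PFE of the
    two conjugate poles gives a single damped sinusoid:
        h[n] = r^n * sin((n+1) w) / sin(w)
    where r is the pole radius and w the pole angle."""
    p = (-a1 + cmath.sqrt(a1 * a1 - 4 * a2)) / 2   # one pole of the pair
    r, w = abs(p), cmath.phase(p)                  # radius and angle
    return [r**k * math.sin((k + 1) * w) / math.sin(w) for k in range(n)]

# Resonator with poles at radius sqrt(0.72) and angle 45 degrees:
a1, a2 = -1.2, 0.72
h = twopole_ir_closed_form(a1, a2, 16)
```

Comparing this against direct evaluation of the recurrence y[n] = x[n] - a1 y[n-1] - a2 y[n-2] on a unit impulse gives matching samples, which is exactly the modal-synthesis flavour mentioned above.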

Z1202
 KVRian
 1383 posts since 12 Apr, 2002
Analytic expressions for IRs (as well as the respective PFEs) become ill-conditioned if some of the poles are close to each other. The state-space form with a matrix power (or matrix exponential in continuous time) shouldn't suffer from this, but then, instead of computing the matrix power directly, one could compute it incrementally. But wait, isn't that pretty much the same as simply initializing the filter with a unit impulse and letting the filter run, the latter being even computationally cheaper? So, if you just need to compute the impulse response (rather than do some analytical transformations with it), why not produce the IRs "experimentally"? Might be the best method, actually.
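The equivalence of the two routes is easy to demonstrate: the incremental state update produces the same IR as the explicit matrix power h[0] = D, h[k] = C A^(k-1) B, while avoiding forming the powers at all. A sketch in Python/NumPy (the controllable canonical form and all names are my own illustrative choices, not from the thread):

```python
import numpy as np

def ir_statespace(A, B, C, D, n):
    """Impulse response of x[n+1] = A x[n] + B u[n], y[n] = C x[n] + D u[n],
    computed by simply running the filter: feed u[0] = 1, then zeros,
    updating the state incrementally. Equivalent to h[0] = D and
    h[k] = C A^(k-1) B for k >= 1, but cheaper than explicit matrix powers."""
    h = np.zeros(n)
    x = np.zeros(A.shape[0])
    u = 1.0                        # the unit impulse at n = 0
    for k in range(n):
        h[k] = C @ x + D * u
        x = A @ x + B * u
        u = 0.0
    return h

# Controllable canonical form of H(z) = 1 / (1 + a1 z^-1 + a2 z^-2):
a1, a2 = -1.2, 0.72
A = np.array([[-a1, -a2], [1.0, 0.0]])
B = np.array([1.0, 0.0])
C = np.array([-a1, -a2])
D = 1.0
h = ir_statespace(A, B, C, D, 12)
```

Computing the same samples via `np.linalg.matrix_power` confirms the match; the incremental version is O(n·m²) for an m-state filter instead of O(n·m³).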

mystran
 KVRAF
 6757 posts since 12 Feb, 2006 from Helsinki, Finland
I could see analytical expressions being useful if you want to go the other way and actually design filters with some specific time-domain properties... but other than that I agree: in the discrete domain you can just compute it incrementally (whereas in the continuous domain, being able to represent the IR as a continuous function has potentially more value).

Music Engineer
 KVRAF
 4017 posts since 8 Mar, 2004 from Berlin, Germany
i fully agree. the only use case for these formulas i have encountered so far is the design of (modal) filters. to compute an IR (for plotting, say), i of course just record it directly from the filter.
edit: ah, i just recall: for deriving the formulas for correctly initializing the filter states before the backward pass in bidirectional filtering, i needed such an explicit expression, too. we discussed this here recently.
now i'm a bit curious: what's your use case for such a formula?

Music Engineer
 KVRAF
 4017 posts since 8 Mar, 2004 from Berlin, Germany
i just found another use case which is actually pretty obvious: you can generalize the impulse invariant transform design method:
https://en.wikipedia.org/wiki/Impulse_invariance
to arbitrary filters. as it stands, the IIT method will probably only work for filters which have fewer zeros than poles, i.e. a strictly proper rational transfer function in the s-domain, because only in this case the PFE here:
https://en.wikipedia.org/wiki/Impulse_i ... m_function
will be valid. for transfer functions where the PFE expression is valid, you can very simply obtain a discretized version in an impulse invariant manner. as the article says:
"Thus the poles from the continuous-time system function are translated to poles at z = e^(s_k T). The zeros, if any, are not so simply mapped."
you can do the same mapping to poles and zeros, in which case the method is called "matched z-transform" or "pole-zero mapping":
https://en.wikipedia.org/wiki/Matched_Z ... orm_method
but your result will not be impulse invariant anymore. in order to still be able to obtain an impulse invariant discretization, one would probably have to churn through, for the analog prototype filter, the same algebra which i did for the digital case in my pdf. this would presumably also end up giving some sum of damped continuous sinusoids, with maybe some amount of the Dirac delta added in. and then one could equate the expressions for the analog and digital impulse responses and from there obtain formulas for the digital coeffs in terms of the analog coeffs, for each partial fraction separately... i suppose (i've not yet worked out the details, maybe there are pitfalls, but it sounds like a plan to me).
btw.: is there actually such a thing as an analog filter with an "FIR part"? i can certainly formally write down an s-domain transfer function where the degree of the numerator is higher than that of the denominator, but all s-domain transfer functions i have seen so far always had numerator degree <= denominator degree. so, i guess, such improper rational transfer functions are somehow not realizable in the s-domain? ...hmmm...
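To make the pole-mapping step concrete, here is the classic one-pole case worked in Python. It discretizes H(s) = 1/(s + a), whose analog impulse response is e^(-a t), via the mapping z = e^(s T); the T-scaling convention h[n] = T·h_c(nT) follows the Wikipedia article linked above, while the function name and parameter values are my own illustrative choices:

```python
import math

def impulse_invariant_one_pole(a, T, n):
    """Impulse-invariant discretization of the analog one-pole
    H(s) = 1 / (s + a). The analog pole s = -a maps to the digital
    pole z = e^(-a T), giving H(z) = T / (1 - e^(-a T) z^-1).
    Returns the first n samples of the digital impulse response."""
    p = math.exp(-a * T)       # pole mapping z = e^(s T)
    h = []
    y = T                      # h[0] = T, from the h[n] = T * h_c(nT) scaling
    for _ in range(n):
        h.append(y)
        y *= p                 # one-pole recurrence: y[n] = p * y[n-1]
    return h

h = impulse_invariant_one_pole(a=2.0, T=0.1, n=10)
# By construction each sample equals T * exp(-a * n * T), i.e. the
# digital IR matches the (scaled) analog IR exactly at the sample points.
```

For higher-order filters the same step would be applied per partial fraction after the PFE, which is exactly where the "zeros are not so simply mapped" caveat comes in.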

mystran
 KVRAF
 6757 posts since 12 Feb, 2006 from Helsinki, Finland
Music Engineer wrote: ↑Wed Dec 22, 2021 1:46 am
but your result will not be impulse invariant anymore. in order to still be able to obtain an impulse invariant discretization, one would probably have to churn through the same algebra which i did for the digital case in my pdf for the analog prototype filter. this would presumably also end up giving some sum of damped continuous sinusoids with maybe some amount of the Dirac delta added in.
Impulse-invariant just means the digital IR matches the analog IR at the sampling points. Since addition of two impulse responses is linear, you can do the PFE in the continuous domain and then perform the impulse-invariant transform on each parallel section separately... and if there are duplicate poles, we're back to the original problem.
Matched-Z is not a terribly useful transform "as-is", but it has two nice properties: it's trivial to do the transform if you know where the poles and zeroes are, and the resulting error between the digital and analog filters in the frequency domain is very easy to correct by fitting a corrector (i.e. as far as I can tell empirically, it seems to be the "optimal" choice in terms of such corrector fitting).
As for continuous transfer functions with numerator order higher than the denominator... since "1/s" is the transform of a unit step (i.e. the integral of a Dirac delta), "s" will be the delta-derivative, which is "problematic" in practical terms... whereas "z" in the discrete case is just a unit time-shift, which is entirely unproblematic as long as you only need a finite amount of lookahead.

Z1202
 KVRian
 1383 posts since 12 Apr, 2002
The condition that the order of the numerator is less than the order of the denominator holds for continuous-time systems consisting of integrators and not having a direct path from input to output. Having a direct path allows non-strictly proper transfer functions. This directly follows from (7.6) (p.242) of VAFilterDesign 2.1.2 and is explained there.
If you use differentiators instead of integrators, this is equivalent to replacing s with s^(-1) in the respective expressions, and thus in the transfer function. Such transfer functions may get the order of the numerator higher than the order of the denominator. The simplest example is a system consisting of just a differentiator, with the respective orders being 1 and 0.
Now notice that differentiator-based systems are rather weird. It starts with the impulse response of a system which has a direct path (equal numerator and denominator orders in its TF) containing a Dirac delta. The impulse response of a differentiator gives a derivative of the Dirac delta, etc. Some further issues are discussed in https://www.dafx.de/paperarchive/2021/ ... aper_3.pdf as well as in the accompanying video.
I have to admit that the TF question is probably not fully answered by this, as it leads to the question of why block diagrams of analog circuits are integrator-based and whether there can be any circuits which correspond to differentiator-based diagrams (now, for all I know, analog "differentiators" are not true differentiators, but some kind of highpass filters, and the respective block diagrams of their internal implementations are still integrator-based). I'm not sure why that is, but it might be related to the causality aspects described in the article. E.g. one simple way of creating a "differentiator-based" analog circuit would be controlling the capacitor's voltage directly by the input signal and picking up the current as the output signal, but that's not really possible, you can only control the current.

Z1202
 KVRian
 1383 posts since 12 Apr, 2002
mystran wrote: ↑Wed Dec 22, 2021 2:27 am
As for continuous transfer functions with numerator order higher than the denominator... since "1/s" is the transform of a unit step (i.e. the integral of a Dirac delta), "s" will be the delta-derivative, which is "problematic" in practical terms... whereas "z" in the discrete case is just a unit time-shift, which is entirely unproblematic as long as you only need a finite amount of lookahead.
Even in discrete time you get larger numerator orders only in offline processing, as the latency arising from compensating the lookahead effectively grows the denominator order, yielding at least a non-strictly-proper TF.
It seems that any TF with numerator order larger than that of the denominator corresponds to a non-causal system to begin with, in both the continuous- and discrete-time case.

Z1202
 KVRian
 1383 posts since 12 Apr, 2002
Actually I think you'd need differentiators in a direct path without feedback (or some kind of cross-cancellation of feedback effects) to have numerator order larger than denominator order. This setup (pure differentiation) is hardly practical, similarly to a pure-integration setup (both setups being non-BIBO).

mystran
 KVRAF
 6757 posts since 12 Feb, 2006 from Helsinki, Finland
Z1202 wrote: ↑Wed Dec 22, 2021 2:54 am
Actually I think you'd need differentiators in a direct path without feedback (or some kind of cross-cancellation of feedback effects) to have numerator order larger than denominator order. This setup (pure differentiation) is hardly practical, similarly to a pure-integration setup (both setups being non-BIBO).
I'm not in the mood to try to do any rigorous math right now, but intuitively I think we might have the reason here why integrators rather than differentiators make sense in practice... you see, it's actually not enough to have a theoretically BIBO-stable filter (in the narrow sense), because in practice you also need to keep your state-space bounded (and preferably bounded below your voltage rails)... and anyone who's ever played with nonlinear resonant highpass filters has probably noticed that if you feed them something like a raw saw wave, there'll be a large spike where the waveform resets, and at higher gain that spike clips and you'll start getting bleed-through at low frequencies, sometimes (depending a bit on the filter topology and the nature of the nonlinearities) long before there is any drastic distortion in the lowpass output... so I think the issue is that trying to build filters out of differentiators, you need a ton more headroom compared to filters built out of integrators.