Impulse response formula for a digital IIR?

DSP, Plugin and Host development discussion.

Post

I'm having trouble finding a resource for this.
Is there an explicit formula for the impulse response of a digital IIR? It has to include the frequency cramping near Nyquist.

Like this:
[attached image]

Specifically for 2-poles and serial cascades thereof, if that matters.

Post

MrBeagleton wrote: Mon Dec 06, 2021 2:08 am I'm having trouble finding a resource for this.
Is there an explicit formula for the impulse response of a digital IIR? It has to include the frequency cramping near Nyquist.
What you need is the inverse z-transform of the transfer function. Ordinarily, if we transform the numerator and denominator separately as polynomials, we get the recurrence relation (aka the direct form representation), from which one can compute the impulse response numerically by simply evaluating the filter with a unit-impulse input... which I'd imagine is what people usually do when an explicit formula isn't absolutely necessary.
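As a sketch of the numerical approach (coefficients below are an arbitrary biquad, normalized so a0 = 1), the direct form recurrence evaluated on a unit impulse looks like:

```python
# Numerically compute the impulse response of a biquad by running the
# direct form recurrence y[n] = sum_k b[k]*x[n-k] - sum_k a[k]*y[n-k]
# on a unit-impulse input. The coefficients are an arbitrary example.
b = [0.2, 0.4, 0.2]      # numerator (feedforward) coefficients
a = [1.0, -0.9, 0.5]     # denominator (feedback) coefficients, a[0] = 1

def impulse_response(b, a, n):
    x = [1.0] + [0.0] * (n - 1)   # unit impulse
    y = []
    for i in range(n):
        acc = sum(b[k] * x[i - k] for k in range(len(b)) if i - k >= 0)
        acc -= sum(a[k] * y[i - k] for k in range(1, len(a)) if i - k >= 0)
        y.append(acc)
    return y

h = impulse_response(b, a, 8)   # first 8 samples of the IR
```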

That said, the inverse z-transform (eg. see https://en.wikipedia.org/wiki/Z-transfo ... form_pairs) of some rational functions is also not that complicated, and I feel like, similar to the inverse Laplace transform in the continuous case, doing a partial fraction expansion on the transfer function should probably make the general case more straightforward. Not much personal experience with this in the discrete domain though, so no idea if there are any nasty pitfalls with the trigonometry.
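A minimal sketch of that PFE route, assuming distinct poles and using scipy's `residuez` (the biquad coefficients are an arbitrary example): with H(z) = sum_k r_k / (1 - p_k z^-1), the closed-form IR is h[n] = sum_k r_k * p_k^n.

```python
import numpy as np
from scipy.signal import residuez, lfilter

b = [0.2, 0.4, 0.2]      # arbitrary biquad numerator
a = [1.0, -0.9, 0.5]     # arbitrary biquad denominator

# H(z) = sum_k r_k / (1 - p_k z^-1) + direct terms k_j z^-j
r, p, k = residuez(b, a)

n = np.arange(32)
h_closed = np.real(sum(ri * pi**n for ri, pi in zip(r, p)))
for j, kj in enumerate(k):      # direct (FIR) part, if any
    h_closed[j] += np.real(kj)

# cross-check against simply running a unit impulse through the filter
h_direct = lfilter(b, a, np.r_[1.0, np.zeros(31)])
print(np.allclose(h_closed, h_direct))
```

For a complex-conjugate pole pair the two terms combine into a real damped sinusoid, which is the explicit formula one usually wants.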

Post

for a single biquad, i did a little write-up some time ago:

http://www.rs-met.com/documents/dsp/Tim ... Design.pdf

for a series connection of more of them, i guess, things become more complicated. you would probably need the partial fraction expansion of the full transfer function. but i don't really know. maybe it can be computed more simply
My website: rs-met.com, My presences on: YouTube, GitHub, Facebook

Post

mystran wrote: Mon Dec 06, 2021 5:20 am
MrBeagleton wrote: Mon Dec 06, 2021 2:08 am I'm having trouble finding a resource for this.
Is there an explicit formula for the impulse response of a digital IIR? It has to include the frequency cramping near Nyquist.
What you need is the inverse z-transform of transfer function.
Inverse z-transform. That's the term I couldn't find.

Now I'll do my research and figure out whether I can live without the explicit formula. I imagine it gets messy fast for cascaded filters.
Thank you very much.
Music Engineer wrote: Mon Dec 06, 2021 11:12 am for a single biquad, i did a little write-up some time ago:

http://www.rs-met.com/documents/dsp/Tim ... Design.pdf

for a series connection of more of them, i guess, things become more complicated. you would probably need the partial fraction expansion of the full transfer function. but i don't really know. maybe it can be computed more simply
That's wonderful, thanks. I've actually used one of your other papers before:
http://www.rs-met.com/documents/dsp/Dig ... ements.pdf
Didn't realize you have a whole library of them. Awesome.

Post

MrBeagleton wrote: Sun Dec 12, 2021 12:12 am That's wonderful, thanks. I've actually used one of your other papers before:
http://www.rs-met.com/documents/dsp/Dig ... ements.pdf
Didn't realize you have a whole library of them. Awesome.
ahh...yes - that one. i think i didn't even link to it from my website (and just posted a link here) because i was not yet entirely satisfied with my solution. martin vicanek came up with a different solution to this problem:

https://www.vicanek.de/articles/BiquadFits.pdf

which may actually be better because it doesn't need the implicit newton solver. i didn't try this yet, though. but i certainly will, in case i need a general magnitude matching algorithm for my codebase. ...which may actually happen soon ...ahh...ok - i see: martin computes the poles from the impulse-invariant formula and then does a 3-point match whereas i do a full 5-point match. be careful though: in general, the matrix may become singular, if i remember correctly. ...although i didn't see this happen when the requirements were obtained from an analog filter. one needs to come up with some maliciously contrived requirements to break it - but breakable it is. but that's all long ago - i may misremember things

Post

MrBeagleton wrote: Sun Dec 12, 2021 12:12 am Now I do my research and figure out if I can live without the explicit formula. I imagine it gets messy fast for cascaded filters.
yeah - i guess so, too. that's why i would first convert the cascade structure into a parallel structure. this is the point where you need the partial fraction expansion*. if you do this, then you can just use the given algo for each partial filter separately and add up the results. if i remember correctly, the PFE won't even lead to full biquads but rather to a bank of 2-pole-1-zero filters such that all this additional mumbo-jumbo with the unit-delay and the added unit-impulse won't be needed. i think, for many typical filters, you'll get a sum of damped sinusoids as impulse response (which smells a lot like modal synthesis). perhaps with an added FIR part.

(*) i also happen to have written up an algo for that:

https://github.com/RobinSchmidt/RS-MET/ ... ctions.txt

i did this because i found typical textbook methods algorithmically unattractive, i.e. inefficient - but maybe i just didn't look in the right places. it can also be done by solving a matrix equation or by the residue method - which seems to lead to an algo that is exponential in the pole multiplicity. my algorithm assumes that the poles and their multiplicities are already known (but i think other algos assume that as well). if they are not yet known, you would need to find them first - in the worst and most general case, by a polynomial root finder. but if it's a cascade of biquads, you can simply compute them via the quadratic formula.
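The cascade-to-parallel conversion described above can be sketched with scipy's `residuez` (the two sections are arbitrary examples; note the PFE yields complex one-poles whose conjugate pairs pair up into the 2-pole-1-zero sections mentioned):

```python
import numpy as np
from scipy.signal import residuez, lfilter

# two arbitrary sections: a 2-pole-1-zero and a full biquad
sections = [([0.3, 0.1], [1.0, -1.2, 0.7]),
            ([0.5, 0.2, 0.1], [1.0, -0.4, 0.3])]

# multiply the sections into one overall transfer function
b, a = np.array([1.0]), np.array([1.0])
for bs, as_ in sections:
    b, a = np.polymul(b, bs), np.polymul(a, as_)

# PFE -> parallel bank of complex one-poles; h[n] = sum_k r_k * p_k^n
r, p, k = residuez(b, a)
n = np.arange(64)
h_parallel = np.real(sum(ri * pi**n for ri, pi in zip(r, p)))
for j, kj in enumerate(k):                 # direct FIR part, if any
    h_parallel[j] += np.real(kj)

# reference: run the unit impulse through the cascade section by section
h_cascade = np.r_[1.0, np.zeros(63)]
for bs, as_ in sections:
    h_cascade = lfilter(bs, as_, h_cascade)

print(np.allclose(h_parallel, h_cascade))
```

The sum over conjugate pole pairs is exactly the "sum of damped sinusoids" form, which is why this smells like modal synthesis.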

Post

Analytic expressions for IRs (as well as the respective PFEs) become ill-conditioned if some of the poles are close to each other. State-space form with a matrix power (or matrix exponential in continuous time) shouldn't suffer from this, but then, instead of computing the matrix power directly, one could compute it incrementally. But wait, isn't that pretty much the same as simply initializing the filter with a unit impulse and letting the filter run, the latter being even computationally cheaper? So, if you just need to compute the impulse response (rather than do some analytical transformations with it), why not produce the IRs "experimentally"? Might be the best method, actually.
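In scipy terms, the "experimental" approach is a one-liner (the biquad coefficients here are an arbitrary example):

```python
import numpy as np
from scipy.signal import lfilter

b, a = [0.2, 0.4, 0.2], [1.0, -0.9, 0.5]     # arbitrary biquad
h = lfilter(b, a, np.r_[1.0, np.zeros(63)])  # first 64 samples of the IR
```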

Post

Z1202 wrote: Sun Dec 12, 2021 8:45 amBut wait, isn't it pretty much the same as simply initializing the filter with a unit impulse and let the filter run, the latter being even computationally cheaper?
I could see analytical expressions being useful if you want to go the other way and actually design filters with some specific time-domain properties... but other than that I agree, in the discrete domain you can just compute it incrementally (whereas in the continuous domain, being able to represent the IR as a continuous function has potentially more value).

Post

i fully agree. the only use case for these formulas i have encountered so far is the design of (modal) filters. to compute an IR (for plotting, say), i - of course - just record it directly from the filter

edit: ah - i just recall: for deriving the formulas for correctly initializing the filter states before the backward pass in bidirectional filtering, i needed such an explicit expression, too. we discussed this here recently.

now i'm a bit curious: what's your use case for such a formula?

Post

i just found another use case which is actually pretty obvious: you can generalize the impulse invariant transform design method:

https://en.wikipedia.org/wiki/Impulse_invariance

to arbitrary filters. as it stands, the IIT method will probably only work for filters which have fewer zeros than poles, i.e. a strictly proper rational transfer function in the s-domain, because only in this case the PFE here:

https://en.wikipedia.org/wiki/Impulse_i ... m_function

will be valid. for transfer functions where the PFE expression is valid, you can very simply obtain a discretized version in an impulse-invariant manner. as the article says:

"Thus the poles from the continuous-time system function are translated to poles at z = e^(s_k T). The zeros, if any, are not so simply mapped."

you can do the same mapping to poles and zeros, in which case the method is called "matched z-transform" or "pole-zero mapping":

https://en.wikipedia.org/wiki/Matched_Z ... orm_method

but your result will not be impulse invariant anymore. in order to still obtain an impulse invariant discretization, one would probably have to churn through the same algebra for the analog prototype filter which i did for the digital case in my pdf. this would presumably also end up giving some sum of damped continuous sinusoids, maybe with some amount of the dirac-delta added in. and then one could equate the expressions for the analog and digital impulse responses and from there obtain formulas for the digital coeffs in terms of the analog coeffs - for each partial fraction separately... i suppose (i've not yet worked out the details - maybe there are pitfalls - but it sounds like a plan to me).
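For reference, the matched-Z / pole-zero mapping itself is tiny - map every analog pole and zero s_k to exp(s_k*T) - as this sketch shows (the analog section, one real zero plus a complex pole pair, is an arbitrary example):

```python
import numpy as np

T = 1.0 / 48000.0                                   # arbitrary sample period
zeros_s = np.array([-1000.0])                       # analog zeros
poles_s = np.array([-500.0 + 2j * np.pi * 2000.0,
                    -500.0 - 2j * np.pi * 2000.0])  # analog pole pair

zeros_z = np.exp(zeros_s * T)                       # mapped digital zeros
poles_z = np.exp(poles_s * T)                       # mapped digital poles

# expand back to coefficients; imaginary parts cancel for conjugate pairs
bd = np.poly(zeros_z).real
ad = np.poly(poles_z).real
# (overall gain is typically fixed afterwards, e.g. by matching at DC)
```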

btw.: is there actually such a thing as an analog filter with an "FIR part"? i can certainly formally write down an s-domain transfer function where the degree of the numerator is higher than that of the denominator - but all s-domain transfer functions, i have seen so far, always had numerator degree <= denominator degree. so, i guess, such improper rational transfer functions are somehow not realizable in the s-domain? ...hmmm...

Post

Music Engineer wrote: Wed Dec 22, 2021 9:46 am but your result will not be impulse invariant anymore. in order to still be able to obtain an impulse invariant discretization, one would probably have to churn through the same algebra which i did for the digital case in my pdf for the analog prototype filter. this would presumably also end up giving some sum of damped continuous sinusoids with maybe some amount of the dirac-delta added in.
Impulse-invariant just means the digital IR matches the analog IR at the sampling points. Since the impulse responses of parallel sections simply add, you can do the PFE in the continuous domain and then perform the impulse-invariant transform on each parallel section separately... and if there are duplicate poles, we're back to the original problem.
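That matching property is easy to check numerically for a single analog pole H_a(s) = r/(s - s0), where impulse invariance gives H_d(z) = T*r/(1 - exp(s0*T)*z^-1), so h_d[n] = T*h_a(n*T) (pole, residue and sample rate below are arbitrary):

```python
import numpy as np
from scipy.signal import lfilter

T = 1.0 / 48000.0                            # arbitrary sample period
s0, r = -2000.0 + 2j * np.pi * 1000.0, 1.0   # arbitrary analog pole/residue

n = np.arange(128)
h_analog = r * np.exp(s0 * n * T)            # h_a(t) = r*exp(s0*t) at t = n*T

bd = [T * r]                                 # impulse-invariant digital section
ad = [1.0, -np.exp(s0 * T)]

x = np.zeros(128, dtype=complex)             # unit impulse
x[0] = 1.0
h_digital = lfilter(bd, ad, x)

print(np.allclose(h_digital, T * h_analog))
```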

Matched-Z is not a terribly useful transform "as-is", but it has two nice properties: it's trivial to do the transform if you know where the poles and zeroes are, and the resulting error between the digital and analog filters in the frequency domain is very easy to correct by fitting a corrector (ie. as far as I can tell empirically, it seems to be the "optimal" choice in terms of such corrector fitting).

As for continuous transfer functions with numerator order higher than the denominator... since "1/s" is the transform of a unit step (ie. the integral of a Dirac delta), "s" will be the delta-derivative, which is "problematic" in practical terms... whereas "z" in the discrete case is just a unit time-shift, which is entirely unproblematic as long as you only need a finite amount of lookahead.

Post

The condition that the order of the numerator is less than the order of the denominator holds for continuous-time systems consisting of integrators and not having a direct path from input to the output. Having a direct path allows non-strictly proper transfer functions. This directly follows from (7.6) (p.242) of VAFilterDesign2.1.2 and is explained there.

If you use differentiators instead of integrators, this is equivalent to replacing s with s^-1 in the respective expressions, and thus in the transfer function. Such transfer functions may get the order of the numerator higher than the order of the denominator. The simplest example is a system consisting of just a differentiator, with the respective orders being 1 and 0.

Now notice that differentiator-based systems are rather weird. It starts with the impulse response of a system which has a direct path (equal numerator and denominator orders in the TF) containing a Dirac delta. The impulse response of a differentiator gives a derivative of the Dirac delta, etc. Some further issues are discussed in https://www.dafx.de/paper-archive/2021/ ... aper_3.pdf as well as in the accompanying video.

I have to admit that the TF question is probably not fully answered by this, as it leads to the question of why block diagrams of analog circuits are integrator-based and whether there can be any circuits which correspond to differentiator-based diagrams (now, for all I know, analog "differentiators" are not true differentiators, but some kind of high-pass filters, and the respective block diagrams of their internal implementations are still integrator-based). I'm not sure why that is, but it might be related to the causality aspects described in the article. E.g. one simple way of creating a "differentiator-based" analog circuit would be controlling the capacitor's voltage directly by the input signal and picking up the current as the output signal, but that's not really possible, you can only control the current.

Post

mystran wrote: Wed Dec 22, 2021 10:27 am As for continuous transfer functions with numerator order higher than the denominator... since "1/s" is the transform of a unit step (ie. integral of a Dirac delta) then "s" will be the delta-derivative which is "problematic" in practical terms... where as "z" in the discrete case is just a unit time-shift which is entirely unproblematic as long as you only need a finite amount of lookahead.
Even in discrete time you get larger numerator orders only in offline processing, as the latency arising from compensating for the lookahead effectively grows the denominator order until the TF is at least non-strictly proper.

It seems that any TF with numerator order being larger than the one of the denominator corresponds to a non-causal system to begin with, both in continuous- and discrete-time case.

Post

Actually I think you'd need differentiators in a direct path without feedback (or some kind of cross-cancellation of feedback effects) to have numerator order larger than denominator order. This setup (pure differentiation) is hardly practical, similarly to a pure-integration setup (both setups being non-BIBO).

Post

Z1202 wrote: Wed Dec 22, 2021 10:54 am Actually I think you'd need differentiators in a direct path without feedback (or some kind of cross-cancellation of feedback effects) to have numerator order larger than denominator order. This setup (pure differentiation) is hardly practical, similarly to a pure-integration setup (both setups being non-BIBO).
I'm not in the mood to try to do any rigorous math right now, but intuitively I think we might have the reason here why integrators rather than differentiators make sense in practice... you see it's actually not enough to have a theoretically BIBO-stable filter (in the narrow sense), because in practice you also need to keep your state-space bounded (and preferably bounded below your voltage rails)... and anyone who's ever played with non-linear resonant high-pass filters has probably noticed that if you feed them something like a raw saw-wave then there'll be a large spike where the waveform resets and at higher gain that spike clips and you'll start getting bleed-through at low-frequencies, sometimes (depending a bit on the filter topology and nature of the non-linearities) long before there is any drastic distortion in the low-pass output... so I think the issue is that trying to build filters out of differentiators you need a ton more headroom compared to filters built out of integrators.
