JCJR wrote:But because at least casually both kinds of filter seem to behave so similer, it comes full-circle back to, at least in the first-order case, is there a reason to choose one or the other?

Just think of them as "the same filter" computed via the same (or very similar) arithmetic expressions, just in a different order: there's literally no more difference than between, say, `a*(b + c)` and `a*b + a*c`, or even `2+2 = 2*2 = 3+1 = 4`.

There may be differences in the result only due to imperfections of the calculating engine itself (fixed-point vs. floating-point, 16 vs. 32 vs. 64 bits, etc.; e.g. the same `a*(b + c) = a*b + a*c` identity may fail in limited-precision arithmetic depending on the actual values of a, b, and c).
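For instance, here's a minimal sketch of how that identity can fail in IEEE single precision (the values are hypothetical, chosen specifically to force rounding):

```python
import numpy as np

a = np.float32(3.0)
b = np.float32(1.0)
c = np.float32(2.0**-24)   # exactly half an ulp of 1.0

lhs = a * (b + c)          # b + c rounds back to 1.0, so lhs == 3.0
rhs = a * b + a * c        # the tiny term survives here, so rhs > 3.0

print(lhs == rhs)          # False: same algebra, different operation order
```

The sum `b + c` falls exactly halfway between two representable numbers and rounds back down, while in the distributed form both products are representable, so the tiny contribution survives.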

Maybe one of the filters works better on certain kinds of transients?

No. As soon as we know their transfer functions are equal, we also know they have the same outputs for any kind of input (yet again, not counting ALU precision/limits).
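As a quick sanity check, here is a sketch (function names and the first-order bilinear lowpass coefficients are my own example values) showing that two different topologies with the same transfer function produce the same impulse response, up to double-precision rounding:

```python
import numpy as np

def df1(x, b0, b1, a1):
    # Direct Form I: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]
    y, x1, y1 = np.zeros(len(x)), 0.0, 0.0
    for n, xn in enumerate(x):
        y[n] = b0*xn + b1*x1 - a1*y1
        x1, y1 = xn, y[n]
    return y

def tdf2(x, b0, b1, a1):
    # Transposed Direct Form II: same transfer function, one shared state
    y, s = np.zeros(len(x)), 0.0
    for n, xn in enumerate(x):
        y[n] = b0*xn + s
        s = b1*xn - a1*y[n]
    return y

# First-order bilinear lowpass, fc = 1 kHz at fs = 48 kHz (example values)
g = np.tan(np.pi * 1000 / 48000)
b0 = b1 = g / (1 + g)
a1 = (g - 1) / (1 + g)

x = np.zeros(256)
x[0] = 1.0                             # an impulse: as "transient" as it gets
diff = np.max(np.abs(df1(x, b0, b1, a1) - tdf2(x, b0, b1, a1)))
print(diff)                            # only double-precision rounding remains
```

With fixed coefficients the two structures are just reorderings of the same arithmetic, so the difference stays at rounding level no matter how sharp the transient.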

Or maybe one filter is better-behaved when swept?

Yes, they behave differently.

In simple words: since the difference in topology is (roughly) a difference in the order of arithmetic operations, it's expected that when we change some coefficients, the value held in a [1/z] unit remains unchanged, but the result will differ depending on where exactly that [1/z] value enters those arithmetic operations. And since this (always wrong) result is also fed back one way or another, we can get all sorts of weirdness, up to and including self-oscillation, depending on how big the error is (though for first-order filters it's usually not as dramatic as for second-or-higher-order structures).
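To illustrate, under stated assumptions (a first-order bilinear lowpass, direct form vs. a TPT/trapezoidal one-pole; the code and parameter values are my own, not from any particular book): with static coefficients the two structures agree to rounding level, but under a cutoff sweep they genuinely diverge:

```python
import numpy as np

fs, n = 48000, 2000
x = np.sin(2*np.pi*2000*np.arange(n)/fs)   # test tone
fc = np.linspace(500.0, 5000.0, n)         # swept cutoff
g = np.tan(np.pi * fc / fs)                # per-sample prewarped gains

def df1_swept(x, g):
    # Direct-form lowpass, coefficients recomputed every sample
    y, x1, y1 = np.zeros(len(x)), 0.0, 0.0
    for i, xi in enumerate(x):
        G = g[i] / (1 + g[i])
        a1 = (g[i] - 1) / (1 + g[i])
        y[i] = G*xi + G*x1 - a1*y1
        x1, y1 = xi, y[i]
    return y

def tpt_swept(x, g):
    # TPT/trapezoidal one-pole: identical *static* transfer function
    y, s = np.zeros(len(x)), 0.0
    for i, xi in enumerate(x):
        G = g[i] / (1 + g[i])
        v = (xi - s) * G
        y[i] = v + s
        s = y[i] + v                        # integrator state update
    return y

# Static coefficients: outputs agree to rounding level
g0 = np.full(n, g[0])
static_diff = np.max(np.abs(df1_swept(x, g0) - tpt_swept(x, g0)))

# Swept coefficients: the same [1/z] contents meet different arithmetic
swept_diff = np.max(np.abs(df1_swept(x, g) - tpt_swept(x, g)))
print(static_diff, swept_diff)
```

The swept-case difference is orders of magnitude above rounding noise: each structure feeds its (differently placed) state back through the new coefficients, so the outputs part ways even though the static transfer functions are identical.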

In general, when considering time-varying filters, it's important to concentrate on "what" topology we end up with rather than "how" ("bilinear", "trapezoidal", "whatever") we came to it.

Or, in other words, when it comes to practical implementations (of time-varying filters), what really matters is an understanding of the signal flow in the particular topology, while the transform or whatever strategy was used to obtain that topology is absolutely irrelevant. Though adapting/optimizing a topology for time-varying operation (and limited-precision arithmetic) is an art of its own, and a dedicated book on it has yet to be written (there are a lot of papers with a lot of math in them, though).

So, to get back on topic:

Some TPT-based filters (but not "whatever" TPT-based filters) are usually not a bad choice for the time-varying case, simply because they already incorporate many (if not most) of the tricks of the said "time-varying art". Not a surprise, since time-varying behavior is taken into account here and there (within the VA book, for example) when making this or that design decision (just follow the text of the corresponding chapters).

In contrast, the direct form topology (which you used for the allpass above) is known to be quite bad for the time-varying case (there are better alternatives for realizing time-varying allpass filters).
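A sketch of that point (function names and sweep parameters are my own): a first-order direct-form allpass vs. an allpass built from a TPT one-pole as `2*lp - x`. They realize the same static transfer function, but respond differently once the coefficient is swept; the demo only shows that they differ, not which one is "better" for a given use case:

```python
import numpy as np

fs, n = 48000, 2000
x = np.sin(2*np.pi*1000*np.arange(n)/fs)   # test tone
fc = np.linspace(200.0, 8000.0, n)         # fast cutoff sweep
g = np.tan(np.pi*fc/fs)
a = (g - 1) / (g + 1)                       # first-order allpass coefficient

def ap_direct(x, a):
    # Direct form: y[n] = a*x[n] + x[n-1] - a*y[n-1]
    y, x1, y1 = np.zeros(len(x)), 0.0, 0.0
    for i, xi in enumerate(x):
        y[i] = a[i]*xi + x1 - a[i]*y1
        x1, y1 = xi, y[i]
    return y

def ap_tpt(x, g):
    # Allpass from a TPT one-pole lowpass: y_ap = 2*y_lp - x
    y, s = np.zeros(len(x)), 0.0
    for i, xi in enumerate(x):
        G = g[i] / (1 + g[i])
        v = (xi - s) * G
        lp = v + s
        s = lp + v                          # trapezoidal state update
        y[i] = 2*lp - xi
    return y

# Static coefficient: identical transfer function, outputs agree
static_diff = np.max(np.abs(ap_direct(x, np.full(n, a[0])) -
                            ap_tpt(x, np.full(n, g[0]))))
# Swept coefficient: the two realizations genuinely part ways
swept_diff = np.max(np.abs(ap_direct(x, a) - ap_tpt(x, g)))
print(static_diff, swept_diff)
```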

---

In summary, and as a bonus remark for the time-varying and single-precision floating-point cases, I think it's also important to recall that the primary story of the TPT-thing as such is (roughly) about simulating electronic circuits (and especially non-linear circuits, where a "preserved topology" is crucial).

And when we don't need a non-linearity inside our filter (or when a non-linear filter as a whole is not supposed to mimic any circuit at all), a TPT-thing is not necessarily the best choice.

Though I won't argue with:

mystran wrote:... you should always choose ZDF unless saving that one extra instruction ...

(especially when it comes to second-order filters) because this would immediately obligate me to point to some papers/examples, and I'm not ready for that kind of discussion right now. (But someday I will argue

The exact number of instructions used in the end may not matter at all, but the very fact that we normally don't need any legacy circuits to construct a proper discrete filter may.)