earlevel
KVRist
 
386 posts since 4 Apr, 2010

Post by earlevel » Mon Mar 20, 2017 10:01 am Re: ZDF with Inverse Laplace Transform

Z1202 wrote:The main reason to use DF2/TDF2 is to have one state variable per integrator rather than two, while TDF2 is preferred because it has a more natural state (something like a half-delayed output IIRC) and can accommodate embedded saturation more easily (although the latter is more a hack than a model of what's really going on in analog circuits).

I've long advocated TDF2 over DF2 for floating-point processing because of its better numerical characteristics: the partial sums at the summing nodes are closer in magnitude, which yields better precision. Maybe that's not the only good reason.
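
To make the comparison concrete, here's a minimal sketch of the two structures (my own naming, coefficients normalized so a0 = 1; just an illustration, not a definitive implementation). The difference is where the stored values sit relative to the input/output magnitude:

Code: Select all
// Sketch: DF2 vs TDF2 biquad update equations (a0 normalized to 1).
// Names b0..b2, a1, a2 and the struct layout are illustrative assumptions.
struct BiquadDF2 {
    float b0, b1, b2, a1, a2;
    float w1 = 0, w2 = 0;                     // two delayed copies of the internal node
    float process(float x) {
        float w = x - a1 * w1 - a2 * w2;      // all-pole section computed first
        float y = b0 * w + b1 * w1 + b2 * w2; // zeros applied to the (possibly huge) w
        w2 = w1;
        w1 = w;
        return y;
    }
};

struct BiquadTDF2 {
    float b0, b1, b2, a1, a2;
    float s1 = 0, s2 = 0;                     // transposed states, already scaled by the b's
    float process(float x) {
        float y = b0 * x + s1;                // zeros effectively act before the poles
        s1 = b1 * x - a1 * y + s2;
        s2 = b2 * x - a2 * y;
        return y;
    }
};

In DF2 the sum that forms w mixes terms that can be wildly different in magnitude when the poles have high gain; in TDF2 the partial sums stay much closer to the output scale.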
My audio DSP blog: earlevel.com
Z1202
KVRian
 
738 posts since 11 Apr, 2002

Post by Z1202 » Tue Mar 21, 2017 8:46 am Re: ZDF with Inverse Laplace Transform

earlevel wrote:
Z1202 wrote:The main reason to use DF2/TDF2 is to have one state variable per integrator rather than two, while TDF2 is preferred because it has a more natural state (something like a half-delayed output IIRC) and can accommodate embedded saturation more easily (although the latter is more a hack than a model of what's really going on in analog circuits).

I've long advocated TDF2 over DF2 for floating-point processing because of its better numerical characteristics: the partial sums at the summing nodes are closer in magnitude, which yields better precision. Maybe that's not the only good reason.
Interesting. Would you care to reveal a bit more detail? At first glance I cannot identify the reasons for having significantly different precision there.
earlevel
KVRist
 
386 posts since 4 Apr, 2010

Post by earlevel » Tue Mar 21, 2017 10:12 am Re: ZDF with Inverse Laplace Transform

Z1202 wrote:
earlevel wrote:I've long advocated TDF2 over DF2 for floating-point processing because of its better numerical characteristics: the partial sums at the summing nodes are closer in magnitude, which yields better precision. Maybe that's not the only good reason.
Interesting. Would you care to reveal a bit more detail? At first glance I cannot identify the reasons for having significantly different precision there.

I was afraid you might ask! :lol:

It's probably been about 20 years since I've looked at this, but it's based on an analysis of the difference in magnitude of the values at the summing nodes. I remember being tipped off to this by stumbling on it in some lecture notes from a DSP class. I sent the instructor an email saying how much I appreciated that the info was accessible (back in a time when there wasn't so much available); he didn't reply, but the link was no longer accessible after that. :wink:

Wikipedia, under "digital biquad filter", says, "Transposed direct forms—Each of the two direct forms may be transposed by reversing the flow graph without altering the transfer function. Branch points are changed to summers and summers are changed to branch points.[1] These provide modified implementations that accomplish the same transfer function which can be mathematically significant in a real-world implementation where precision may be lost in state storage." That's pretty vague, but it is pointing out that the transposed versions can be superior numerically.

Back in the day I was doing a lot of precision analysis in Mathcad, mostly because I was doing a lot of fixed-point (Motorola 56K) DSP. Not sure if I have anything to dig up...
My audio DSP blog: earlevel.com
earlevel
KVRist
 
386 posts since 4 Apr, 2010

Post by earlevel » Tue Mar 21, 2017 10:17 am Re: ZDF with Inverse Laplace Transform

JOS states the advantage this way:

Numerical Robustness of TDF-II

An advantage of the transposed direct-form II structure (depicted in Fig. 9.4) is that the zeros effectively precede the poles in series order. As mentioned above, in many digital filter designs, the poles by themselves give a large gain at some frequencies, and the zeros often provide compensating attenuation. This is especially true of filters with sharp transitions in their frequency response, such as the elliptic-function-filter example on page [*]; in such filters, the sharp transitions are achieved using near pole-zero cancellations close to the unit circle in the z plane.

https://ccrma.stanford.edu/~jos/fp/Nume ... DF_II.html
My audio DSP blog: earlevel.com
Max M.
KVRist
 
164 posts since 20 Apr, 2005, from Moscow, Evil Empire

Post by Max M. » Tue Mar 21, 2017 10:39 am Re: ZDF with Inverse Laplace Transform

Yep, in other words:

With DF2 you have the all-pole part first, and it may have a huge magnitude at certain frequencies (e.g. consider any filter with Fc -> 0). So the delay elements may be storing values that are orders of magnitude larger than the inputs/outputs, and all the tiny bits of the signal are simply lost inside the filter due to FP normalization. The problem is not noticeable with 64-bit floats, but it is very significant at 32 bits.
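
A quick sketch of the magnitudes involved (assuming an RBJ-cookbook lowpass at 20 Hz / 44.1 kHz, Q = 0.707; the exact numbers are only illustrative):

Code: Select all
// Sketch: how large the DF2 internal node gets for a low-cutoff lowpass.
// Assumptions: RBJ cookbook lowpass, fs = 44100 Hz, fc = 20 Hz, Q = 0.707.
#include <cmath>
#include <cstdio>

int main() {
    const double pi = 3.14159265358979323846;
    const double fs = 44100.0, fc = 20.0, Q = 0.707;
    const double w0 = 2.0 * pi * fc / fs;
    const double alpha = std::sin(w0) / (2.0 * Q);
    const double cs = std::cos(w0);
    const double a0 = 1.0 + alpha;
    // normalized coefficients (a0 divided out)
    const double b0 = (1.0 - cs) * 0.5 / a0, b1 = (1.0 - cs) / a0, b2 = b0;
    const double a1 = -2.0 * cs / a0, a2 = (1.0 - alpha) / a0;

    // DC gain of the all-pole section feeding the DF2 state variables,
    // versus the DC gain of the complete filter.
    const double poleDcGain  = 1.0 / (1.0 + a1 + a2);             // on the order of 1e5 here
    const double totalDcGain = (b0 + b1 + b2) / (1.0 + a1 + a2);  // ~1 for a lowpass
    std::printf("all-pole DC gain: %g\n", poleDcGain);
    std::printf("overall DC gain:  %g\n", totalDcGain);
    return 0;
}

So near DC the DF2 delay elements hold values about five orders of magnitude above the output, which costs roughly 17 of the 24 mantissa bits in single precision.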

In general, one would say DF2 and DF1T are always out of consideration for 32-bit FP (such implementations are nothing but a mistake really).
Z1202
KVRian
 
738 posts since 11 Apr, 2002

Post by Z1202 » Tue Mar 21, 2017 11:12 am Re: ZDF with Inverse Laplace Transform

Thanks for the explanation, that was informative. However, my first impression is that this might be more applicable to generic [T]DF1/2 filters, not so much to BLT integrators: those have one pole at z=1 and one zero at z=-1, so no such pole/zero compensation is really occurring.
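
For concreteness, a minimal sketch of such a trapezoidal (BLT) integrator in TDF2 form, one state per integrator (the parameter g would be the prewarped cutoff gain, e.g. tan(pi*fc/fs); the names are illustrative, not taken from the posts above):

Code: Select all
// Sketch: trapezoidal (BLT) integrator in transposed direct form II.
// H(z) = g*(1 + z^-1) / (1 - z^-1): one pole at z = 1, one zero at z = -1,
// so there is no sharp pole/zero near-cancellation as in a resonant biquad.
struct TrapIntegratorTDF2 {
    float s = 0.0f;                       // single state variable
    float process(float x, float g) {     // g = prewarped cutoff gain, e.g. tan(pi*fc/fs)
        const float v = g * x;
        const float y = v + s;            // output
        s = y + v;                        // state update (the "half-delayed" quantity)
        return y;
    }
};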