ZDF with Inverse Laplace Transform

DSP, Plugin and Host development discussion.

Post

Z1202 wrote:The main reason to use DF2/TDF2 is to have one state variable per integrator rather than two, while TDF2 is preferred because it has a more natural state (something like a half-delayed output IIRC) and can accommodate embedded saturation more easily (although the latter is more a hack than a model of what's really going on in analog circuits).
I've long advocated TDF2 over DF2 for floating point processing due to better floating point characteristics. Summed partial calculations are closer in magnitude, yielding better precision. Maybe it's not the only good reason.
My audio DSP blog: earlevel.com

Post

earlevel wrote:
Z1202 wrote:The main reason to use DF2/TDF2 is to have one state variable per integrator rather than two, while TDF2 is preferred because it has a more natural state (something like a half-delayed output IIRC) and can accommodate embedded saturation more easily (although the latter is more a hack than a model of what's really going on in analog circuits).
I've long advocated TDF2 over DF2 for floating point processing due to better floating point characteristics. Summed partial calculations are closer in magnitude, yielding better precision. Maybe it's not the only good reason.
Interesting. Would you care to reveal a bit more detail, because upon the first look I cannot identify the reasons for having significantly different precision there?

Post

Z1202 wrote:
earlevel wrote:I've long advocated TDF2 over DF2 for floating point processing due to better floating point characteristics. Summed partial calculations are closer in magnitude, yielding better precision. Maybe it's not the only good reason.
Interesting. Would you care to reveal a bit more detail, because upon the first look I cannot identify the reasons for having significantly different precision there?
I was afraid you might ask! :lol:

It might be about 20 years since I've looked at this, but it's based on an analysis of the difference in magnitude of the values at the summing nodes. I remember being tipped off to it by stumbling upon it in some lecture notes from a DSP class—I sent the instructor an email saying how much I appreciated that the info was accessible (back in a time when there wasn't so much available); he never replied, and the next time I checked, the notes were no longer accessible. :wink:

Wikipedia, under "digital biquad filter", says, "Transposed direct forms—Each of the two direct forms may be transposed by reversing the flow graph without altering the transfer function. Branch points are changed to summers and summers are changed to branch points.[1] These provide modified implementations that accomplish the same transfer function which can be mathematically significant in a real-world implementation where precision may be lost in state storage." That's pretty vague, but it is pointing out that the transposed versions can be superior numerically.

Back in the day I was doing a lot of precision analysis in Mathcad, mostly because I was doing a lot of fixed-point (56K) DSP. Not sure if I have anything to dig up...
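For concreteness, here is a sketch of the structure under discussion (my own illustration with the standard coefficient naming, not code from the thread). In TDF2 each state variable holds a partial sum whose magnitude stays close to the output's, which is the precision property being discussed:

```cpp
// Transposed direct form II biquad, normalized so a0 = 1.
// H(z) = (b0 + b1 z^-1 + b2 z^-2) / (1 + a1 z^-1 + a2 z^-2)
struct BiquadTDF2
{
    double b0, b1, b2, a1, a2;   // filter coefficients
    double s1 = 0.0, s2 = 0.0;   // the two state variables

    double process(double x)
    {
        const double y = b0 * x + s1;      // output uses one state directly
        s1 = b1 * x - a1 * y + s2;         // states accumulate partial sums
        s2 = b2 * x - a2 * y;              // of terms close in magnitude
        return y;
    }
};
```

With coefficients `{0, 1, 0, 0, 0}` the structure degenerates to a pure one-sample delay, which is a quick sanity check of the update order.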

Post

JOS states the advantage this way:

Numerical Robustness of TDF-II

An advantage of the transposed direct-form II structure (depicted in Fig. 9.4) is that the zeros effectively precede the poles in series order. As mentioned above, in many digital filter designs, the poles by themselves give a large gain at some frequencies, and the zeros often provide compensating attenuation. This is especially true of filters with sharp transitions in their frequency response, such as the elliptic-function-filter example on page [*]; in such filters, the sharp transitions are achieved using near pole-zero cancellations close to the unit circle in the z plane.

https://ccrma.stanford.edu/~jos/fp/Nume ... DF_II.html

Post

Yep, in other words:

With DF2 you have the all-pole part first, and it may have a huge magnitude at certain frequencies (e.g. consider any filter with Fc -> 0). The delay elements may then be storing values orders of magnitude larger than the inputs/outputs, so all the tiny bits of the signal are simply lost inside the filter to floating-point normalization. The problem is not noticeable with 64-bit floats, but it is very significant at 32 bits.

In general, one would say DF2 and DF1T are always out of consideration for 32-bit FP (such implementations are nothing but a mistake really).
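The point above can be checked with a few lines of arithmetic. This sketch (my own, using the standard RBJ-cookbook lowpass coefficients and assumed example values) computes the DC gain of the all-pole half of DF2, i.e. the level the DF2 state variables must hold for a unit DC input:

```cpp
#include <cmath>

// DC gain of the all-pole section of a DF2 biquad: H_pole(z=1) = 1/(1+a1+a2).
// Coefficients from the RBJ cookbook lowpass, normalized so a0 = 1.
double df2PoleSectionDCGain(double fc, double fs, double Q)
{
    const double w0    = 2.0 * 3.14159265358979323846 * fc / fs;
    const double alpha = std::sin(w0) / (2.0 * Q);
    const double a0 = 1.0 + alpha;
    const double a1 = -2.0 * std::cos(w0) / a0;   // normalized feedback coeffs
    const double a2 = (1.0 - alpha) / a0;
    return 1.0 / (1.0 + a1 + a2);
}
// For fc = 10 Hz at fs = 44100 Hz, Q = 0.707 this comes out around 5e5:
// the DF2 states sit roughly five orders of magnitude above the I/O signal,
// so a 32-bit float holding them has lost the corresponding low signal bits.
```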

Post

Thanks for the explanation, that was informative. However, my first impression is that this applies more to generic [T]DF1/2 filters than to BLT integrators: those have one pole at z=1 and one zero at z=-1, so no pole-zero compensation is really occurring.

Post

I finally had time to read Mystran's suggested chapters from Vadim's book. It took several evenings to go through the texts and digest their content properly. It was worth it. I intend to continue reading the rest of the book.

So far I've derived the 1 pole and SVF filters (without non-linearities). I also used matt42's graph to derive my own "2 poles only" filter. All the mentioned filters seem to self-oscillate nicely with sweeping frequencies when a single impulse is fed into the filter. They also seem to behave rather nicely with high frequency modulation.
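For reference, the one-pole derivation mentioned can be sketched like this (the standard TPT/trapezoidal form from the literature; my own illustration, not Kraku's code):

```cpp
#include <cmath>

// ZDF/TPT one-pole lowpass: prewarped gain g = tan(pi*fc/fs), with the
// zero-delay feedback loop resolved algebraically instead of with a delay.
struct OnePoleLP
{
    double s = 0.0;   // integrator state
    double g = 0.0;   // prewarped integrator gain

    void setCutoff(double fc, double fs)
    {
        g = std::tan(3.14159265358979323846 * fc / fs);
    }

    double process(double x)
    {
        const double v = (x - s) * g / (1.0 + g);  // solve the implicit equation
        const double y = v + s;                    // lowpass output
        s = y + v;                                 // trapezoidal state update
        return y;
    }
};
```

A quick sanity check is DC behavior: feeding a constant 1.0 should converge to 1.0, since the lowpass has unity gain at DC.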

All this hype about trapezoidal integrators brings a question to my mind:
Has anyone tried using Simpson's integrators for ZDF filters?
Is there a reason no-one has even mentioned them?
One would imagine they would give even better-behaved results...?

Feedback on Vadim's book:

Clearly the hardest parts to follow were the first 10 pages. Even though I had already read two books on DSP and understood them, these first 10 pages took lots of re-reading before I slowly realized that I already knew the information they were trying to convey. After that it quickly became easier and easier to understand. My suggestion is to rewrite and expand the first 10 pages so they're easier to follow; that would make the audience for the book orders of magnitude larger.

A few times the book referred to a graph or formula that had been introduced long before, many pages earlier. Paging that far back through the book to find the required graph is quite cumbersome. It might be a good idea to re-introduce some of those graphs/formulas where they're referred to if more than X pages have passed since they were last seen.

Otherwise it has been an interesting book to read. Good job! :)

Post

Kraku wrote: All this hype about trapezoidal integrators brings a question to my mind:
Has anyone tried using Simpson's integrators for ZDF filters?
Is there a reason no-one has even mentioned them?
Not stable. The reason we use the trapezoidal rule (most of the time) is that it's the most accurate integrator that transforms any stable continuous-time system into a stable discrete-time system (a property known as "A-stability"). In general it's also very predictable, because it just warps the frequencies without causing any phase problems that you'd have to worry about.

There are some possible alternatives (e.g. backward difference formulas), but in general you're not going to get anything more accurate without giving up some stability guarantees first.

ps. Strictly speaking you can't even implement Simpson's rule in a feedback system since you can't really access the function to be integrated at arbitrary points... but my point is, if you work around this problem, you'll either end up with something less accurate or something less stable than trapezoidal rule.
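The instability is easy to demonstrate numerically. This sketch (my own illustration, not code from the thread) integrates the decaying test equation y' = -y with the implicit Milne-Simpson method (Simpson's rule used as a two-step ODE integrator) and with the trapezoidal rule; the former carries a parasitic root of magnitude greater than 1 for decaying systems, so its error grows without bound while trapezoidal decays as it should:

```cpp
#include <cmath>

// Milne-Simpson applied to y' = -y with step h:
//   (1 + h/3) y_{n+1} = (1 - h/3) y_{n-1} - (4h/3) y_n
// The second starting value is seeded with the exact solution, so only
// truncation/roundoff excites the parasitic root -- and it still blows up.
double milneSimpsonFinal(int steps, double h)
{
    double y0 = 1.0, y1 = std::exp(-h);
    for (int n = 1; n < steps; ++n) {
        const double y2 =
            ((1.0 - h / 3.0) * y0 - (4.0 * h / 3.0) * y1) / (1.0 + h / 3.0);
        y0 = y1; y1 = y2;
    }
    return y1;
}

// Trapezoidal rule applied to y' = -y: y_{n+1} = y_n * (1-h/2)/(1+h/2),
// which decays monotonically for any h > 0 (A-stability in action).
double trapezoidalFinal(int steps, double h)
{
    double y = 1.0;
    const double ratio = (1.0 - h / 2.0) / (1.0 + h / 2.0);
    for (int n = 0; n < steps; ++n) y *= ratio;
    return y;
}
// After 1000 steps of h = 0.1 the true solution is e^-100 (~1e-44):
// trapezoidal lands near it, while Milne-Simpson has blown up past 1.
```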

Post

Ah, that explains why Simpson's isn't mentioned anywhere. Thanks :)

Post

I noticed something today:

Linear version of the ladder filter, with 4x 1 pole LP filters in series: change Q and the peak stays in one place. (checked with bode plot)

Make a version of the same filter with only 2x 1 pole LP filters in series: change the Q and the peak moves? (checked with bode plot)

Am I doing something wrong or is there some mathemagic I'm missing here? What's going on? Did I botch the transfer function before bode plotting?

Post

Kraku wrote:I noticed something today:

Linear version of the ladder filter, with 4x 1 pole LP filters in series: change Q and the peak stays in one place. (checked with bode plot)

Make a version of the same filter with only 2x 1 pole LP filters in series: change the Q and the peak moves? (checked with bode plot)

Am I doing something wrong or is there some mathemagic I'm missing here? What's going on? Did I botch the transfer function before bode plotting?
Section 5.7 of the VAFilterDesign book begins with analysing exactly that case.

Regards,
{Z}
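A quick way to see the 2-pole behavior analytically (my own working, not the book's section 5.7): two identical one-pole lowpasses H1(s) = 1/(1+s) in a feedback loop with gain k give H(s) = H1^2/(1 + k*H1^2) = 1/((1+s)^2 + k), and |H(jw)| peaks where (1+k-w^2)^2 + 4w^2 is minimal, i.e. at w = sqrt(k-1) for k > 1. The peak frequency therefore moves with k. A small numerical check:

```cpp
#include <cmath>
#include <complex>

// Scan |H(jw)| = |1 / ((1+jw)^2 + k)| on a grid and return the peak
// frequency (cutoff normalized to 1 rad/s). Expected: sqrt(k-1).
double twoPoleLadderPeak(double k)
{
    double bestW = 0.0, bestMag = 0.0;
    for (double w = 0.01; w < 10.0; w += 0.001) {
        const std::complex<double> s(0.0, w);
        const double mag = std::abs(1.0 / ((1.0 + s) * (1.0 + s) + k));
        if (mag > bestMag) { bestMag = mag; bestW = w; }
    }
    return bestW;
}
// twoPoleLadderPeak(2.0) lands near 1.0, twoPoleLadderPeak(5.0) near 2.0:
// the resonant peak of the 2-pole ladder shifts as k changes.
```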

Post

Ah, I had read up to a point just before that part of your book when I started experimenting with the filter structures. I'll read some more :)

Thanks!

Post

I continued my ZDF studies. I got my first non-linear filter working today. Here are some audio examples fresh out of the oven.

Transposed Sallen-Key ZDF linear version:
http://www.krakulandia.info/temp/zdf_tsk_linear.wav

Transposed Sallen-Key ZDF saturated version:
http://www.krakulandia.info/temp/zdf_tsk_non_linear.wav

No oversampling in these audio examples.
Exact same settings were used for both audio files (volume, range/frequency of sweeps, resonance amount, etc.).

Next stop is figuring out the most suitable approximation of Tanh() function...

EDIT:
Anyone use something like this for Tanh() ?

inline double Fast_Tanh(double x)
{
    const double x2 = x*x;
    return x / (1.0 + x2 / (3.0 + x2 / (5.0 + x2 / 7.0)));
}

Or a more precise approximation:

inline double Fast_Tanh(double x)
{
    const double x2 = x*x;
    return x / (1.0 + x2 / (3.0 + x2 / (5.0 + x2 / (7.0 + x2 / 9.0)))));
}
Last edited by Kraku on Tue May 09, 2017 3:07 pm, edited 1 time in total.

Post

Just a little remark: how can a function with so many divide operations ever be labelled as "fast"? :D

I think you forgot a few steps of the method described here: https://varietyofsound.wordpress.com/20 ... -fraction/
