When to choose a ZDF

Post

Decades ago my main complaint with Chamberlin SVF was that LP, BP, and HP did not seem to agree on corner frequency, and mixing them together in digital didn't seem to work as slick as the same trick with analog SVF. I was more concerned with that problem than the limited tuning range. Many seemed to use only one SVF output at a time, where the issue might go unnoticed. Cheap wah-wah filters and such.
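(For reference, the recurrence in question - the standard Chamberlin SVF update, as a minimal sketch; the struct and function names are mine, not from either book verbatim:)

#include <math.h>

/* Chamberlin state-variable filter, one sample per call.
   f = 2*sin(pi*fc/fs) tunes the cutoff, q = 1/Q sets the damping. */
typedef struct { double lp, bp; } svf;

static void chamberlin_tick(svf *s, double in, double f, double q,
                            double *lp, double *bp, double *hp)
{
    s->lp += f * s->bp;              /* integrate bp into lp (bp is one sample old) */
    *hp    = in - s->lp - q * s->bp; /* hp = input minus the other two modes        */
    s->bp += f * *hp;                /* integrate hp into bp                        */
    *lp = s->lp;
    *bp = s->bp;
}

The unit delay hiding in the bp feedback path is the usual suspect for why the three outputs don't quite agree on corner frequency the way the analog SVF's outputs do.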
I have seen that model in Udo Zolzer's book. The only thing that justifies the unit delays would be the lack of a computer algebra package to generate equations for a more accurate model. Otherwise it's mysteriously wrong for no apparent reason.
~stratum~

Post

I think MZT does better for modulation than BLT, the reason being less variance near Nyquist, which under modulation results in smaller "errors".
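(One quick, rough way to eyeball that claim: sweep the cutoff of a one-pole LP toward Nyquist under both mappings and watch the magnitude at a fixed test frequency. A sketch - the test constants and names are my own choices, not anything canonical:)

#include <complex.h>
#include <math.h>
#include <stdio.h>

/* Magnitude of a one-pole H(z) = (b0 + b1*z^-1)/(1 + a1*z^-1)
   at radian frequency w. */
static double mag_at(double b0, double b1, double a1, double w)
{
    double complex zi = cexp(-I * w);  /* z^-1 on the unit circle */
    return cabs((b0 + b1 * zi) / (1.0 + a1 * zi));
}

int main(void)
{
    const double fs = 48000.0, ftest = 10000.0;
    double wt = 2.0 * M_PI * ftest / fs;
    for (double fc = 12000.0; fc <= 21000.0; fc += 3000.0) {
        double w = 2.0 * M_PI * fc / fs;
        double p = exp(-w);        /* MZT: pole mapped via z = e^(sT) */
        double g = tan(w / 2.0);   /* BLT with cutoff prewarping      */
        printf("fc = %5.0f   MZT %.3f   BLT %.3f\n", fc,
               mag_at(1.0 - p, 0.0, -p, wt),
               mag_at(g / (1 + g), g / (1 + g), -(1 - g) / (1 + g), wt));
    }
    return 0;
}

The BLT response gets pinched toward its forced zero at fs/2, so equal cutoff steps produce increasingly unequal response changes there, while the MZT curve moves more uniformly - which is how I read the "less variance" above.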

Post

stratum wrote:I have seen that model in Udo Zolzer's book. The only thing that justifies the unit delays would be the lack of a computer algebra package to generate equations for a more accurate model. Otherwise it's mysteriously wrong for no apparent reason.
Thanks Stratum. Drifting offtopic hopefully only one message-- IMO Hal Chamberlin is one of the "real engineers" along with RBJ and others we know and love. Found a copy of the book online, thought it long out of print. Thought I remembered something like 1982 but the pdf says 1985 first printing. It remains a fascinating book for many topics, including analog synth circuitry and such. http://cmc.music.columbia.edu/courses/g ... berlin.pdf

This was EARLY, when a music programmer's wet dream ultimate home computer was a PDP-11 minicomputer with a $200 home-made precision 16-bit DAC built from discrete parts. A friend was a business programmer and deadly bop alto sax player; he had got the book, didn't get much out of it, and gave it to me. I loved it. Read it numerous times. Not many computer music books were published back then, and most didn't seem near as interesting or useful.

I doubt that much DSP had been formalized when the book was written. Maybe somebody had it all down pat as good as MyStran or Z1202 or Nigel Redmon even back then. Or maybe not. His digital filtering is chapter 14. The explanations seem "analog engineer trying to make the same thing work in digital". Maybe he was dumbing down the explanations and could have got lots more technical. 'Tis clear and interesting writing, but I suspect that was about how far they had got "making it up as they went along" in that time period. I don't know how much of that chapter is Hal regurgitating other references, versus what he had thunk up on his own. It's called a Chamberlin SVF. Maybe someone else first thunk up the digital version and he was just describing it, dunno.

The "analog-like" flowcharts look just like you see today (actually inherited from earlier analog filter techniques), but the explanations are less rigorous than one would see today. IIRC one place he describes the process of replacing an analog flowchart with code, something like "find a good place in the circuit to start then stay consistent so that inputs and feedbacks don't overlap and interfere with each other." :)

Post

JCJR wrote:.... but in my ignorant testing the first order ZDF highpass and allpass looked "better" and seemed more useful.
Mmm, a first-order TPT filter is equal to an ordinary direct form (or any other topology) in terms of the resulting magnitude/phase curves.
My guess is that you've compared non-comparable things. Most likely you compared the "Naive" (aka "Euler transform"?) LP, but this is not what we usually mean by the canonical "non-TPT" LP (aka the "old unit-delay version", aka the "one-pole-two-zero" LP), which is also a BLT thing and thus has an identical frequency/phase response. Not counting that HP = IN - LP and AP = IN - 2*LP are also not to be used with the Naive-LP, simply because it's not quite an LP in a strict sense.

I can see how chapters 3.x can be confusing in this regard... The text is perfect but the diagrams might be slightly misleading (if one does not follow the text accurately and misses that the 3.6 thing is about integrators and not about LPFs as such): Fig. 3.6 shows the "naive-lp" response without cutoff prewarping, then Figs. 3.13-3.14 show cutoff-prewarped, thus "better looking", curves for TPT/BLT near fs/2... and I think this is what gives the wrong impression of "TPT is better/'more analog' near fs/2" (expressed by the topic starter first and now by JCJR).

JCJR, speaking of the Naive-LP (which is, yet again, strictly speaking not quite an LP, and not the same thing as an "old-fashioned" direct-form 1st-order LP) - the most fun thing is that if you compare its frequency response with proper cutoff-prewarping (which is how it should be implemented if it's known to be used at higher fc), you'll find that it in fact has a more "continuous-like" magnitude curve than the BLT (incl. TPT-based) LPs.
(Though, if what you're looking for is a 1st-order filter that best matches near fs/2, there are better strategies, and they can be implemented with TPT(-like) topologies as well.)
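(To make "proper cutoff-prewarping" for the Naive-LP concrete - my own derivation, not from the post above: with y[n] = (1-p)*x[n] + p*y[n-1], demand the exact -3 dB point at fc, i.e. |H(e^jw)|^2 = 1/2 at w = 2*pi*fc/fs:

2*(1 - p)^2 = 1 - 2*p*cos(w) + p^2
=> p^2 - 2*(2 - cos(w))*p + 1 = 0
=> p = (2 - cos(w)) - sqrt((2 - cos(w))^2 - 1)

taking the root with |p| < 1.)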

Post

One note regarding 1st order prewarped TPT vs prewarped naive.

TPT gives you a good first shot with predictable results. Then, of course, according to your specifics, you can fine-tune the result further, or not use TPT at all. In particular, the naive version, even with prewarping (unless I'm missing something here, in which case please correct me), introduces a discrepancy between its phase and magnitude responses. Depending on your needs and context this can be perfectly acceptable (personally I'd check the differences against some oversampled TPT or a highly oversampled naive implementation, especially using signals with strong transients).

Since a beginner would most likely get lost in all these different options, in figuring out the tradeoffs, and in assessing their criticality, TPT is a very good starting point.
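(For that beginner, the 1st-order TPT lowpass really is tiny. A sketch along those lines - struct and function names are mine:)

#include <math.h>

/* One-pole TPT ("zero-delay feedback") lowpass via trapezoidal
   integration; s is the single integrator state. */
typedef struct { double s; } onepole;

static double tpt_lp(onepole *f, double in, double fc, double fs)
{
    double g = tan(M_PI * fc / fs);  /* prewarped integrator gain          */
    double G = g / (1.0 + g);        /* zero-delay feedback loop, resolved */
    double v  = G * (in - f->s);
    double lp = v + f->s;
    f->s = lp + v;                   /* trapezoidal state update           */
    return lp;  /* hp = in - lp; ap = 2*lp - in (sign convention varies) */
}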

Post

Max M. wrote:I can see how chapters 3.x can be confusing in this regard... The text is perfect but the diagrams might be slightly misleading (if one does not follow the text accurately and misses that the 3.6 thing is about integrators and not about LPFs as such): Fig. 3.6 shows the "naive-lp" response without cutoff prewarping, then Figs. 3.13-3.14 show "better" curves for TPT/BLT near fs/2... and I think this is what gives the wrong impression of "TPT is better/'more analog' near fs/2" (expressed by the topic starter first and now by JCJR).
Thanks for the good explanations, Max. I probably wasn't precise enough-- I did not claim that the ZDF works more analog-like near Nyquist. In fact I specifically mentioned that the frequency warping is still in there and is not to be considered an advantage. I don't care greatly about the amplitude weirdness nearing Nyquist. Sometimes that behavior can be useful.

Am generally ignorant but some of the schemes I saw for making high frequency amplitude response more "analog-like" seemed to mess up the expected phase response. In some cases I'd rather preserve a decent phase response if it is a tradeoff against amplitude response.

Maybe it has to do with, as you say, some 1st-order LP or HP not really being 1st order. Dunno. I've tried several [z - 1] type first-orders and especially the HP was near-useless, whereas Vadim's ZDF 1st-order HP does as I'd expect. And the ZDF 1st-order allpasses are well-behaved and as easy to understand as an analog 1st-order allpass. Over the years I collected several [z - 1] 1st-order allpasses and didn't have much understanding of them.

Ferinstance, with my pitiful math skills I put together a spreadsheet with analog phase-shifter equations and kludged together a "low-frequency Hilbert"-- when constructed out of cascaded ZDF allpasses it actually worked as expected, right off the bat. Just batting at geese with a rake, so to speak. About as easy as sniffing solder fumes. :)

For curiosity, can you point me to code for [z - 1] type first-order HP and AP that have transfer functions identical to Vadim's ZDFs? Am not arguing, as I don't know enough to argue about it.

Post

Z1202 wrote:In particular, the naive version, even with prewarping (unless I'm missing something here, in which case please correct me), introduces a discrepancy between its phase and magnitude responses.
Yes, you're absolutely right. A "better" magnitude-matching 1st-order LP has to have its phase at Fs/2 either at 0 or at 180 (any real-coefficient H(z) is real-valued at z = -1, so no other phase is available there). Thus it simply cannot be used in place of the TPT integrator to construct a higher-order filter. (And whether or not to accept the wrong phase at fc for a first-order filter as such is another big and difficult story, I guess.)

Post

JCJR wrote:Maybe it has to do with, as you say, some 1st-order LP or HP not really being 1st order. Dunno. I've tried several [z - 1] type first-orders and especially the HP was near-useless, whereas Vadim's ZDF 1st-order HP does as I'd expect. And the ZDF 1st-order allpasses are well-behaved and as easy to understand as an analog 1st-order allpass.
That's the thing with the BLT. It kind of "exactly" preserves the amplitude and phase response, in the sense that only the frequency axis is warped, but this warping cannot affect the mixing behavior of different modes (or even of different filters, given that they are prewarped identically). Therefore the mixing works out of the box, exactly as with the corresponding analog versions.
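(A concrete 1st-order instance of that, writing the common denominator explicitly as 1 - k*z^-1 - my algebra:

LP(z) = (1-k)*(1 + z^-1) / (2*(1 - k*z^-1))
HP(z) = (1+k)*(1 - z^-1) / (2*(1 - k*z^-1))

LP(z) + HP(z) = (2 - 2*k*z^-1) / (2*(1 - k*z^-1)) = 1

so the two modes recombine to the dry input exactly, just as LP(s) + HP(s) = 1 for the shared-pole analog pair.)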

Post

JCJR wrote:For curiosity, can you point me to code for [z - 1] type first-order HP and AP that have transfer functions identical to Vadim's ZDFs?
The BLT LP expresses in direct form (i.e. numerator/denominator, transfer-function form) as:
b = [1-k, 1-k]/2; a = [1, k];
Thus HP is:
b = [1+k, -1-k]/2; a = [1, k];
And AP is:
b = [-k, 1]; a = [1, k];

Where k = cos(w)/(1 + sin(w));
and w = 2*pi*Fc/Fs;

In practice they are usually implemented in the opposite fashion: first you get the AP and then LP = (IN + AP)/2; HP = (IN - AP)/2;

The Naive-LP which is:
b = [1-k, 0]; a = [1, k];
also can be used to get proper AP/HP outputs, but their IN/LP mixing coefficients will depend on k, thus it is a bit more complicated than the BLT +/- mixing formulas... (And it is in general rarely used to get AP/HP, since the BLT-AP -> LP/HP method is more (or at least equally) efficient and numerically stable.)
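(In case a paste-and-run version of the AP -> LP/HP route helps - this is my reading of the above, assuming the a = [1, k] lists denote a feedback of +k, i.e. a denominator 1 - k*z^-1, which is the convention that makes the stated LP/HP/AP responses come out right; function and struct names are mine:)

#include <math.h>

/* First-order BLT allpass, then LP/HP by the +/- mixing described above.
   Difference equation: y[n] = -k*x[n] + x[n-1] + k*y[n-1].               */
typedef struct { double x1, y1; } ap1;  /* previous input and output */

static void blt_ap_lp_hp(ap1 *st, double in, double fc, double fs,
                         double *ap, double *lp, double *hp)
{
    double w = 2.0 * M_PI * fc / fs;
    double k = cos(w) / (1.0 + sin(w));
    *ap = -k * in + st->x1 + k * st->y1;  /* b = [-k, 1], feedback +k */
    st->x1 = in;
    st->y1 = *ap;
    *lp = 0.5 * (in + *ap);
    *hp = 0.5 * (in - *ap);
}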

Post

JCJR wrote:
stratum wrote:I have seen that model in Udo Zolzer's book. The only thing that justifies the unit delays would be the lack of a computer algebra package to generate equations for a more accurate model. Otherwise it's mysteriously wrong for no apparent reason.
Thanks Stratum. Drifting offtopic hopefully only one message-- IMO Hal Chamberlin is one of the "real engineers" along with RBJ and others we know and love. Found a copy of the book online, thought it long out of print. Thought I remembered something like 1982 but the pdf says 1985 first printing. It remains a fascinating book for many topics, including analog synth circuitry and such. http://cmc.music.columbia.edu/courses/g ... berlin.pdf

This was EARLY, when a music programmer's wet dream ultimate home computer was a PDP-11 minicomputer with a $200 home-made precision 16-bit DAC built from discrete parts. A friend was a business programmer and deadly bop alto sax player; he had got the book, didn't get much out of it, and gave it to me. I loved it. Read it numerous times. Not many computer music books were published back then, and most didn't seem near as interesting or useful.

I doubt that much DSP had been formalized when the book was written. Maybe somebody had it all down pat as good as MyStran or Z1202 or Nigel Redmon even back then. Or maybe not. His digital filtering is chapter 14. The explanations seem "analog engineer trying to make the same thing work in digital". Maybe he was dumbing down the explanations and could have got lots more technical. 'Tis clear and interesting writing, but I suspect that was about how far they had got "making it up as they went along" in that time period. I don't know how much of that chapter is Hal regurgitating other references, versus what he had thunk up on his own. It's called a Chamberlin SVF. Maybe someone else first thunk up the digital version and he was just describing it, dunno.

The "analog-like" flowcharts look just like you see today (actually inherited from earlier analog filter techniques), but the explanations are less rigorous than one would see today. IIRC one place he describes the process of replacing an analog flowchart with code, something like "find a good place in the circuit to start then stay consistent so that inputs and feedbacks don't overlap and interfere with each other." :)
Hi,

Thanks for posting the link.

As far as I can see, the model in Chamberlin's book (Fig 16.4, p. 489) is correct, whereas the one in Zolzer's book (Fig 2.5, p. 36) has additional delay units. Something went wrong in the time in between. Perhaps it was commonly implemented incorrectly, and somebody redrew the model from the code and published it.
~stratum~

Post

Z1202 wrote:
mystran wrote:
Z1202 wrote:Off the top of my head the general amount of error done by backward- or forward Euler (corresponding to rectangular method of integration) is the same as for trapezoidal integration.
The error term for trapezoidal is third order, whereas it's second order for Euler (either), so trapezoidal actually converges faster as the step size goes down... but as you note, having an accurate scheme is of little practical value unless it's also stable.
As I wrote above, for a single slice that might be correct; for integration over multiple slices it's pretty much equivalent, since each trapezoidal term contributes half of the rectangle on the left and half of the rectangle on the right. Of course, it can change if there is feedback, but so do the original error estimations ;)
If one thinks of trapezoidal as the average of two rectangles (rather than an actual trapezoid), then the left-half rectangle would be explicit Euler (which underdamps) while the right-half rectangle would be implicit Euler (which overdamps). Since the errors are systematically in opposite directions, averaging the two cancels out the 2nd-order term, so it's really more accurate than just a pair of rectangles. As far as I can tell this averaging to cancel out error terms is really the essence of RK methods, with trapezoidal being one of the possible 2nd-order implicit RK schemes.
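(Making the cancellation explicit with Taylor expansions over one step of width h:

exact:       integral from t to t+h of f = h*f(t) + (h^2/2)*f'(t) + (h^3/6)*f''(t) + ...
expl. Euler: h*f(t)                                -> error ~ +(h^2/2)*f'(t)
impl. Euler: h*f(t+h) = h*f(t) + h^2*f'(t) + ...   -> error ~ -(h^2/2)*f'(t)
average:     (h/2)*(f(t) + f(t+h)) = h*f(t) + (h^2/2)*f'(t) + (h^3/4)*f''(t)

The h^2 terms cancel, leaving the familiar trapezoidal local error of -(h^3/12)*f''(t).)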

Post

mystran wrote:If one thinks of trapezoidal as the average of two rectangles (rather than an actual trapezoid), then the left-half rectangle would be explicit Euler (which underdamps) while the right-half rectangle would be implicit Euler (which overdamps). Since the errors are systematically in opposite directions, averaging the two cancels out the 2nd-order term, so it's really more accurate than just a pair of rectangles. As far as I can tell this averaging to cancel out error terms is really the essence of RK methods, with trapezoidal being one of the possible 2nd-order implicit RK schemes.
Yeah, after reading your reply I realized that my argument (relative error going to zero) applies only around DC, where all the schemes agree pretty well.

Post

Max M. wrote:
JCJR wrote:For curiosity, can you point me to code for [z - 1] type first-order HP and AP that have transfer functions identical to Vadim's ZDFs?
The BLT LP expresses in direct form (i.e. numerator/denominator, transfer-function form) as:
b = [1-k, 1-k]/2; a = [1, k];
Thus HP is:
b = [1+k, -1-k]/2; a = [1, k];
And AP is:
b = [-k, 1]; a = [1, k];

Where k = cos(w)/(1 + sin(w));
and w = 2*pi*Fc/Fs;

In practice they are usually implemented in the opposite fashion: first you get the AP and then LP = (IN + AP)/2; HP = (IN - AP)/2;

The Naive-LP which is:
b = [1-k, 0]; a = [1, k];
also can be used to get proper AP/HP outputs, but their IN/LP mixing coefficients will depend on k, thus it is a bit more complicated than the BLT +/- mixing formulas... (And it is in general rarely used to get AP/HP, since the BLT-AP -> LP/HP method is more (or at least equally) efficient and numerically stable.)
Thanks Max. Will try to spark two brain cells together sufficient to code a simple comparison test and upload the code/results.

Post


Btw.,
JCJR wrote:1982 ... I doubt that much DSP had been formalized when the book was written.

It had been, actually. It's just that the 80s/90s became a world of 16-bit fixed-point arithmetic, and people had other major problems to fight (and the gap between theory and practical trial-and-error kludges and workarounds was quite... bdjueeeewwww).
In that regard, I wonder when we will stop comparing TPT topologies against biquads (about the worst choice once you are out of 16-bit fixed-point quantization and internal node gain scaling) and recall (at least out of curiosity) some more convenient topologies (like the various forms of Agarwal-Burrus (1974!), Kwan (1985), and probably some others).
Leaving aside any "topology-preserving" stuff (like WDF (1985) - already having its ticket here) and counting only numerical stability and time-varying properties - there still may be wheels to reinvent :)

Post

This page traces the z-transform back to W. Hurewicz (1947) ( https://www.wikizero.com/en/Z-transform ), and the section beginning with 'Linear constant-coefficient difference equation' (continuing through 'Transfer function' and 'Zeros and poles') pretty much 'formalizes' DSP. The exact date for this section isn't given.

This is the so-called 'traditional DSP view', in contrast with the 'circuit simulation view' exemplified by ZDF/VA filters, and it has nothing to do with unnecessary unit delays, whose main utility appears to be easing the derivation of the equations that describe the relevant circuits.
~stratum~
