# automatic discretization of filter block diagrams

DSP, Plug-in and Host development discussion.
tayholliday
KVRist
Topic Starter
78 posts since 27 Mar, 2012
I was reading Urs Heckmann's article about ZDF (https://urs.silvrback.com/zero-delay-feedback), as well as The Art of VA Filter Design.

I'm wondering what techniques are out there for automatically discretizing and executing continuous-time filter block diagrams?

More formally speaking, I'm looking for an algorithm that does the following:

INPUT: A continuous-time ("analog") block diagram (summators, integrators, nonlinear elements, etc.)
OUTPUT: DSP code to execute that diagram, with good numerical properties

Typically, it seems the process is at least somewhat manual.

thanks!

mystran
KVRAF
7053 posts since 12 Feb, 2006 from Helsinki, Finland
The magic trick that makes all this relatively simple (in theory anyway) is to express the simultaneous equations in matrix form. It's fairly easy to build such a matrix automatically from a directed graph: rows are sources, columns are targets, and the elements are the scaled connections between them. Local feedback goes on the diagonal. Integrators have inputs and outputs, and the integration state (or at minimum the actual delay that is required to integrate numerically) is updated when we move from one sample to the next. Newton non-linearities similarly have inputs and outputs, but update both their "input value" and their diagonal entry in the matrix on every iteration (strictly speaking, you don't need to treat non-linearities as separate nodes, but doing so greatly simplifies code generation and optimisation).
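As a rough sketch of that idea (not anyone's actual implementation), here is how such a matrix might be assembled from an edge list. The `build_matrix` helper is invented for illustration, and the example system is a trapezoidal ("TPT") one-pole lowpass, y = s + g*(x - y), where the pre-warped gain `g` is assumed given:

```python
import numpy as np

# Hypothetical graph representation: each node is one signal value, and each
# edge (src, dst, gain) is a scaled connection. Local feedback lands on the
# diagonal, as described above. We build A for the system A @ x = rhs.
def build_matrix(n_nodes, edges):
    A = np.eye(n_nodes)            # each node equals its own value...
    for src, dst, gain in edges:
        A[dst, src] -= gain        # ...minus scaled contributions from sources
    return A

# Example: trapezoidal one-pole lowpass  y = s + g*(x_in - y).
# Node 0 is the output y; the -g self-connection is the local feedback.
g = 0.25                           # assumed pre-warped integrator gain
A = build_matrix(1, [(0, 0, -g)])  # ends up as [[1 + g]]
s, x_in = 0.0, 1.0                 # integrator state and input sample
y = np.linalg.solve(A, np.array([s + g * x_in]))
```

Solving gives y = (s + g*x_in)/(1 + g), i.e. the usual zero-delay-feedback one-pole result, just arrived at mechanically from the graph.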

Once you've put it all into matrix form, you just need some generic algorithm (eg. LU or whatever) for solving x from A*x=y. Then you update A and y with new values from Newton, iterate the solution until it converges, and finally update your integrators. This is basically how circuit simulators do it as well; the only difference is that they build the matrix A in a slightly different way. Once you've got this far, instead of solving the system directly, you just modify the code to output a "script of steps" in some format (eg. C code or whatever) that you can turn into hopefully more efficient machine code.
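A minimal per-sample loop along those lines might look like the following. The `tick` name is made up, and the system (a one-pole with a tanh saturator in the feedback path, y = s + g*(x - tanh(y))) is chosen purely for illustration of the "solve, update A and rhs from Newton, repeat" cycle:

```python
import numpy as np

# Hypothetical per-sample solver: linearise tanh around the current estimate v,
# fold the linearisation into A and the right-hand side, solve A @ x = rhs,
# and repeat until the Newton iteration converges. Then update the integrator.
def tick(x_in, s, g, tol=1e-9, max_iter=50):
    v = 0.0                                  # current Newton estimate of y
    for _ in range(max_iter):
        t = np.tanh(v)
        dt = 1.0 - t * t                     # d/dy tanh(y) at y = v
        # tanh(y) ~ t + dt*(y - v): derivative goes on the diagonal of A,
        # the constant part moves into the right-hand side.
        A = np.array([[1.0 + g * dt]])
        rhs = np.array([s + g * (x_in - t + dt * v)])
        v_new = np.linalg.solve(A, rhs)[0]
        if abs(v_new - v) < tol:
            v = v_new
            break
        v = v_new
    s_new = 2.0 * v - s                      # trapezoidal state update
    return v, s_new

y_out, s_new = tick(x_in=1.0, s=0.0, g=0.5)
```

In a real code generator the loop body would of course be emitted as straight-line code rather than re-building A each iteration, but the structure is the same.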

That's obviously a very high-level (and probably overly confusing) overview of something that can be quite complicated in practice, but that's the basic idea. There's nothing very special here in terms of the algorithms themselves, it's just really a matter of putting the pieces together.

edit: just to clarify, I use the terms "input" and "output" above from the point of view of the signal flow network; this network "inputs" values from and "outputs" values to the integrators and non-linearities (so from the point of view of these blocks themselves, the terms are sort of reversed)

Also, as far as summing nodes go, you can avoid adding them as dimensions in the matrix by "tracing through them" until you hit a more useful block (or you can alternatively give them dimensions and then optimise those out with a partial-LU pre-pass; whatever you find most convenient).
Preferred pronouns would be "it/it" because according to this country, I'm a piece of human trash.

tayholliday
KVRist
Topic Starter
78 posts since 27 Mar, 2012
Thanks for your reply! It seems you're describing separating the linear and nonlinear elements such that you can do more localized Newton iteration, rather than a big multidimensional Newton, which would require estimating and inverting a Jacobian for each iteration. Is that right?

Do you have some references that might explain further?

mystran
KVRAF
7053 posts since 12 Feb, 2006 from Helsinki, Finland
tayholliday wrote: Sat Dec 22, 2018 6:51 am Thanks for your reply! It seems you're describing separating the linear and nonlinear elements such that you can do more localized Newton iteration, rather than a big multidimensional Newton, which would require estimating and inverting a Jacobian for each iteration. Is that right?
For Newton you pretty much always need to solve A*x=y again for each iteration (eg. A is your "Jacobian"). Note that you should never actually compute the inverse (even though that's how it's usually presented in textbooks), because it's always more efficient and more numerically stable to solve the equation directly. While Newton generally "works" even if you use some approximations here, doing so tends to hurt convergence a lot, leading to more iterations and much slower execution for systems of the size (and type) you would consider for real-time DSP.
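To illustrate the solve-vs-invert point with a small made-up example (NumPy used purely for demonstration; the matrices mean nothing in particular):

```python
import numpy as np

# Same x either way, but solve() does one LU factorisation plus substitution,
# while inv() effectively solves n systems and then multiplies -- more work
# and worse behaviour on poorly conditioned matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)  # well-conditioned example
y = rng.standard_normal(4)

x_solve = np.linalg.solve(A, y)   # preferred: factor and substitute
x_inv = np.linalg.inv(A) @ y      # avoid this in practice
```

For repeated solves against the same A (as in a Newton loop where only parts of A change), reusing a stored factorisation is the standard refinement.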

However, something you can do is put all your non-linear stuff into a block-matrix at the bottom-right corner, then do a block-LU pass (or preferably just run normal LU until you hit that block, which is the same as far as the bottom-right block is concerned). The bottom-right block is then a linear sum of the original values (which you can still adjust by adding deltas) and the values that LU factored into it (which are constant for each iteration), so you can store this partially factored form and update it directly, which avoids starting each iteration from scratch. Similarly, you don't need to continue back-substitution past the non-linear block until you've reached convergence. So effectively you end up iterating only the non-linear sub-block. Note that this is also the reason it usually makes sense to write your "components" in such a way that you minimise the number of non-linear dimensions (rather than litter the whole matrix with the Newton derivatives), even if that means you have to increase the size of the full system.
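A rough sketch of that partitioned approach, under the assumption of a system where only the bottom-right variables see a (tanh) nonlinearity; `solve_partitioned` and the example matrices are invented for illustration. The linear block is eliminated once, and Newton then iterates only the small Schur-complement block:

```python
import numpy as np

# Full system (x2 carries the nonlinearity, elementwise):
#   A11 @ x1 + A12 @ x2                  = b1
#   A21 @ x1 +       x2 + g*tanh(x2)     = b2
def solve_partitioned(A11, A12, A21, b1, b2, g, tol=1e-9, max_iter=50):
    # Constant per sample: eliminate the linear top-left block once.
    A11_inv_A12 = np.linalg.solve(A11, A12)
    A11_inv_b1 = np.linalg.solve(A11, b1)
    S_lin = -A21 @ A11_inv_A12          # linear part of the Schur complement
    rhs2 = b2 - A21 @ A11_inv_b1
    v = np.zeros_like(b2)               # Newton estimate of x2
    for _ in range(max_iter):
        t = np.tanh(v)
        dt = 1.0 - t * t
        # Only the small bottom-right block is refreshed each iteration.
        S = S_lin + np.diag(1.0 + g * dt)
        v_new = np.linalg.solve(S, rhs2 - g * (t - dt * v))
        if np.max(np.abs(v_new - v)) < tol:
            v = v_new
            break
        v = v_new
    # Back-substitute through the linear part once, after convergence.
    x1 = A11_inv_b1 - A11_inv_A12 @ v
    return x1, v

# Tiny invented example: 2 linear dimensions, 1 nonlinear dimension.
A11 = np.array([[2.0, 0.0], [0.0, 2.0]])
A12 = np.array([[1.0], [0.0]])
A21 = np.array([[1.0, 0.0]])
b1 = np.array([1.0, 1.0])
b2 = np.array([1.0])
x1, v = solve_partitioned(A11, A12, A21, b1, b2, g=0.5)
```

The payoff is exactly the one described above: the expensive elimination of the linear part happens once, the Newton loop touches only the small nonlinear block, and the final back-substitution happens once after convergence.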

This is one of those techniques that is actually really old (eg. it's sort of implied already in papers dating back to at least the 70s; it could be even older), but rarely (if ever) explained in an engineer-friendly way.