Moog Ladder Filter - Research Paper

DSP, Plugin and Host development discussion.

Post

I don't have much practical experience with NR in a zero-delay-feedback setting beyond some simple experiments, so I can't help you with constructive advice here ;)

Post

Z1202 wrote: Tue Feb 06, 2024 7:56 am Of course you can simply limit the number of iterations, but then, do you have an estimation of what your resulting precision is? Again, this precision will most likely depend on the parameters and possibly even on the state and worst case it'll actually diverge rather than converge.
The potential difficulty with limiting the number of iterations is that it can leave the system in a weird state for the next time-step, potentially causing further convergence issues.

With tanh (or sigmoids more generally) I've had some success by simply detecting the condition where there is a sign change (and perhaps an increase in magnitude) from one iteration to the next and forcing such values to zero in order to restart from a more "sane" position, but this does hurt convergence in terms of the number of iterations required, potentially quite significantly. Interestingly, formulating tanh as an LTP of two pn-junctions with a bit of "parasitic" resistance thrown in to allow for a voltage difference potentially converges more reliably, but this roughly doubles the size of the system you have to iterate, and with (roughly) O(n^3) complexity for the linear solve, that has a significant performance cost.
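A minimal sketch of that reset heuristic, applied per solved unknown (hypothetical names, not code from the thread):

```cpp
#include <cmath>

// After an NR update, if an unknown flips sign AND grows in magnitude
// compared to the previous iterate, force it back to zero so the next
// iteration restarts from a "saner" point on the tanh curve.
inline double guardIterate(double xPrev, double xNew)
{
    const bool flippedSign = (xPrev * xNew) < 0.0;
    const bool grewLarger  = std::fabs(xNew) > std::fabs(xPrev);
    return (flippedSign && grewLarger) ? 0.0 : xNew;
}
```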

Especially with (modest) oversampling, using the values from the previous time-step as the initial guess works pretty well most of the time. In that case, especially with low-pass filters, you expect the signal to change "slowly" relative to the sampling rate, though with sigmoids a "better" initial guess might be more successful at avoiding the divergence situations. It's tricky though: where the previous guess works, it potentially works better than other simple estimates, and if you have to do significant computation just to come up with an initial guess, then you're going to slow down all those time-steps where 1 or 2 rounds of NR (from the previous values) would have been enough... which is actually pretty common when things go well.
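As a sketch of the warm-start idea, here is a single simplified implicit stage equation v = s + g*(tanh(in) - tanh(v)) rather than the full ladder system (hypothetical names):

```cpp
#include <cmath>

// Per sample, solve v = s + g*(tanh(in) - tanh(v)) with Newton-Raphson,
// using the previous sample's solution vPrev as the initial guess.
// With modest oversampling this often converges in 1-2 iterations.
double solveStage(double in, double s, double g, double vPrev,
                  int maxIter = 8, double tol = 1e-9)
{
    const double tIn = std::tanh(in);   // input nonlinearity is fixed per sample
    double v = vPrev;                   // warm start from the previous time-step
    for (int i = 0; i < maxIter; ++i) {
        const double tv = std::tanh(v);
        const double f  = v - s - g * (tIn - tv);      // residual
        const double df = 1.0 + g * (1.0 - tv * tv);   // d(residual)/dv, always >= 1
        const double dv = f / df;
        v -= dv;                        // Newton-Raphson update
        if (std::fabs(dv) < tol)        // early exit once the step is tiny
            break;
    }
    return v;
}
```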

Post

Due to the fractal nature of these non-linear systems, I think you can never be sure whether the next iteration step will yield a better or even a worse solution; you'll need to prune at some point. However, this will only significantly affect certain critical sections of the fractal where it gets 'unstable'. In the case of a filter, I'd say that is where self-oscillation starts.
We do not have a support forum on KVR. Please refer to our official location: https://www.tone2.com/faq.html

Post

I'm not sure what is fractal there. The system is perfectly smooth and infinitely differentiable. An upper bound on the solution precision can be obtained at least by assessing the derivative ranges. Whether the next step is going to improve the solution is a different story, but that has nothing to do with fractality. Unless you're referring to the "fractality" of the NR algorithm per se.
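As a concrete illustration of the derivative-range argument, here is a sketch for a single scalar stage residual (not the full system):

```latex
% If the residual f satisfies |f'(x)| >= m > 0 between the iterate x_k and
% the solution x*, the mean value theorem gives the a-posteriori bound
\[
  |x_k - x^\ast| \;\le\; \frac{|f(x_k)|}{m}.
\]
% For a scalar ladder-stage residual f(v) = v - s - g\,(\tanh v_{in} - \tanh v)
% with g > 0, one has f'(v) = 1 + g\,(1 - \tanh^2 v) \ge 1, so the residual
% magnitude itself already bounds the distance to the solution.
```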

As for the range where the convergence problems can occur, I'm not sure if it's just high resonance. A sufficiently high cutoff can be equally problematic.

Post

It's tricky though: where the previous guess works, it potentially works better than other simple estimates, and if you have to do significant computation just to come up with an initial guess, then you're going to slow down all those time-steps where 1 or 2 rounds of NR (from the previous values) would have been enough... which is actually pretty common when things go well.
https://www.kvraudio.com/more/7569-340498
That's pretty much what I was thinking. I'll probably just stick to this for now and consider adding some conditions later if it seems to be acting up.

Great news though!! I FINALLY got the realtime c++ code working! :D :D
I went through all the effort of unfolding the Jacobian inversion and the matrix multiplication / addition / subtraction, and replacing the Eigen library with standard arrays, when I realized that I had a single sloppy mistake in my previous code (an = instead of a -). In my Python prototype, the unfolded linear algebra equations were about 2 times as fast as the LU solver. I'm not sure how that translates to C++ yet... I haven't dived into the whole profiling thing yet. My C++ code is only taking about 1-2% of my CPU without oversampling though (2013 Mac), as opposed to 100% in the previous version with the mistake. I'll have to go back tomorrow and try to build the old version again with that fixed; I'm curious how it compares.

I was also playing around with the tanh approximations. In Python, the approximation function was actually the slowest: math.tanh() beat out both it and numpy's tanh, and numpy's tanh was also faster than the approximation. I'm not sure how this compares to the C++ <cmath> tanh() though.

I guess I'll have to actually get around to publishing my paper properly now as well. Huge thanks to everyone on here for all of your support and interesting discussions!

Post

pscorbett wrote: Wed Feb 07, 2024 8:26 am My C++ code is only taking about 1-2% of my CPU without oversampling though (2013 Mac), as opposed to 100% in the previous version with the mistake. I'll have to go back tomorrow and try to build the old version again with that fixed; I'm curious how it compares.

I was also playing around with the tanh approximations. In Python, the approximation function was actually the slowest: math.tanh() beat out both it and numpy's tanh, and numpy's tanh was also faster than the approximation. I'm not sure how this compares to the C++ <cmath> tanh() though.

I guess I'll have to actually get around to publishing my paper properly now as well. Huge thanks to everyone on here for all of your support and interesting discussions!
Go ahead, great paper! And btw, this method works fine with oversampling, too. We were already using NR on virtual-analog filters in realtime with oversampling almost a decade ago. It is very efficient once optimized, and also stable.

Richard
Synapse Audio Software - www.synapse-audio.com

Post

pscorbett wrote: Wed Feb 07, 2024 8:26 am I was also playing around with the tanh approximations. In Python, the approximation function was actually the slowest: math.tanh() beat out both it and numpy's tanh, and numpy's tanh was also faster than the approximation. I'm not sure how this compares to the C++ <cmath> tanh() though.
This is likely because it takes the Python interpreter a lot more time to process the multiple operations of the approximation than it takes to call a single library function written in a more efficient language (e.g. C or C++, typically).

This is why building on top of library functions as much as possible is usually the way to go for "scripting" languages like Python. But when you're writing low-level (e.g. C++) code, you're on equal footing with any library: it all runs directly on the hardware anyway, and now you're in a position to make tradeoffs such as computing a less accurate approximation so as to spend less time than a library function would.
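As an example of that kind of tradeoff, here is a sketch of one common cheap substitute (not the specific approximation tested above); whether it actually beats std::tanh is something to verify by profiling an optimized build:

```cpp
#include <algorithm>
#include <cmath>

// Sketch of a cheap tanh substitute: a low-order Pade-style rational
// approximation, clamped to [-1, 1] because the rational form diverges for
// large |x|. Absolute error reaches roughly 0.01 around |x| ~ 2, so accuracy
// vs. speed is a genuine tradeoff here.
inline double tanhApprox(double x)
{
    const double x2 = x * x;
    const double r  = x * (15.0 + x2) / (15.0 + 6.0 * x2);
    return std::clamp(r, -1.0, 1.0);
}
```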

Post

Richard_Synapse wrote: Wed Feb 07, 2024 11:08 am Go ahead, great paper! And btw, this method works fine with oversampling, too. We were already using NR on virtual-analog filters in realtime with oversampling almost a decade ago. It is very efficient once optimized, and also stable.
In a sense it can even work better with oversampling, because the input and output tend to be smoother, changing less from one time-step to the next, so the probability that you need to iterate NR a lot is actually lower. :)

Post

That's great news! I found the resonance definitely misbehaves a bit at higher frequencies and resonance values, so hopefully this can help with that too, as well as with any harmonic generation from the tanhs that might alias.

Post

Also, since we've been discussing performance (and why predicting C++ performance by looking at Python performance doesn't work very well), I want to bring up one important point that I've seen some people not fully understand: you generally can't make estimates about the performance of optimized release C++ builds by looking at the performance of non-optimized debug builds either. Compilers often do ridiculous things (in terms of performance) in debug mode in order to make debugging as easy as possible: all variables are there to inspect, control flow doesn't jump around unexpectedly (perhaps even across functions when observed at the source-code level), and so on.

Now, simply compiling debug symbols into a binary does not hurt performance (the debug information itself has no impact on the generated code in any compiler I'm aware of), but when you're trying to evaluate performance, always do it with the full optimizations that you intend to enable for release... because whether or not certain optimizations are enabled can and will flip the relative performance of different methods, sometimes by a lot.

Obviously something like a quick sort or heap sort is typically going to beat bogosort whether it's Python or a debug vs. optimized C++ build... but when the performance gap is not enormous for algorithmic reasons, you really want to compare in the final form you're going to ship.
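For instance, a minimal timing harness might look like this (a sketch with hypothetical values; the key point is to compile it with your release flags, e.g. -O3 -DNDEBUG on gcc/clang, rather than a -O0 debug build):

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>

int main()
{
    constexpr int N = 10'000'000;
    volatile double sink = 0.0;          // keeps the loop from being optimized away

    const auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i)
        sink = sink + std::tanh(i * 1e-6);
    const auto t1 = std::chrono::steady_clock::now();

    const double ns = std::chrono::duration<double, std::nano>(t1 - t0).count() / N;
    std::printf("std::tanh: %.2f ns per call\n", ns);
    return 0;
}
```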

If this is already obvious .. then apologies for wasting your time reading it. :)

Post

pscorbett wrote: Sun Jan 28, 2024 6:27 pm
The one pole formulas laid out in the paper (and in Urs's blog) are as follows:
I_linear = g*(V+ - V-)
I_OTA = g*tanh(V+ - V-)
I_ladder = g*(tanh(V+) - tanh(V-))
The third one is from this paper:

NON-LINEAR DIGITAL IMPLEMENTATION OF THE MOOG LADDER FILTER
Antti Huovilainen
DAFX 2004
https://www.dafx.de/paper-archive/2004/P_061.PDF

And if you check the references in the paper you will see this:
A. Gratz, Operational Transconductance Amplifiers
original now broken link: http://www.compuserve.de/Stromeko/diy/OTA.pdf
now here: http://synth.stromeko.net/diy/OTA.pdf

But the tanh derivation for an idealised differential transistor pair, as seen in an OTA, has been derived many times before that; it's just that the Achim Gratz pdf is the most thorough one around.
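Expressed as code, the three one-pole stage laws quoted above are simply (a minimal sketch with hypothetical names; g is the transconductance/cutoff gain, vPlus and vMinus the stage inputs):

```cpp
#include <cmath>

inline double iLinear(double g, double vPlus, double vMinus) {
    return g * (vPlus - vMinus);                          // ideal (linear) stage
}
inline double iOTA(double g, double vPlus, double vMinus) {
    return g * std::tanh(vPlus - vMinus);                 // OTA-style saturation
}
inline double iLadder(double g, double vPlus, double vMinus) {
    return g * (std::tanh(vPlus) - std::tanh(vMinus));    // Moog ladder stage
}
```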

I have tried to encourage Vadim to reference original publishers of these approximations, not sure if he's done that yet.
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote: Thu Feb 08, 2024 3:43 am I have tried to encourage Vadim to reference original publishers of these approximations, not sure if he's done that yet.
So here comes our old discussion again. I thought we were past it...

I've referenced Antti's paper, since that's the one I originally saw the derivation in and the one you and I argued about, and I also did some basic digging to find the authors of earlier works on ZDF or equivalent ideas. But otherwise I was only referencing the sources which I actually used and/or which directly impacted the book.

As I mentioned in our earlier discussion (I believe), I don't really have experience with library research and don't really know how to find all the relevant papers. A colleague of mine, who has much more experience in this, offered to help me build a more extensive set of references when he'd be working on a project requiring a similar kind of library research. But, AFAIK, that project never happened. So I'd need to invest quite a bit of time and possibly money just to dig up all the relevant sources.

Again, as our previous argument went, while I find this appropriate and fair when someone is doing paid research, I don't find it too fair to demand this extra time/money investment from someone doing the work for free. Giving credit where I have directly benefited from someone's source or help is still a must, and I tried to do so to the best of my memory, also trying to mention the original sources that I'm simply aware of, even if I didn't use them. That doesn't require much more time investment than just typing the text. But having to track down all the relevant research history is something else.

Edit: BTW, I've seen a conference paper which, instead of crediting my book for the TDF2 TPT SVF (or maybe crediting your works, not sure if you switched to TDF2 by now), credits some later paper. This is totally fine with me, since apparently it's that later paper which they were using as a source. (Although IIRC their wording kind of suggested that this later paper was the original source of the TDF2 SVF, which is not exactly correct :D )

Post

Thanks if you added Antti's paper as a source, I appreciate that on his behalf; sorry for not remembering this. I must admit the last time I checked one of your papers was a long time ago, and when I did there were no references, so I'm glad to hear this has changed. FWIW I'm fine with referencing more recent papers; any form of references that covers most of the sources a paper draws on is great.
The Glue, The Drop - www.cytomic.com

Post

Z1202 wrote: Thu Feb 08, 2024 8:15 am ... my book for the TDF2 TPT SVF ...
Can you post a link to the place where it can be bought? I want to support your work.

Thanks
We do not have a support forum on KVR. Please refer to our official location: https://www.tone2.com/faq.html

Post

Tone2 Synthesizers wrote: Thu Feb 08, 2024 11:57 am
Z1202 wrote: Thu Feb 08, 2024 8:15 am ... my book for the TDF2 TPT SVF ...
Can you post a link to the place where it can be bought? I want to support your work.

Thanks
I appreciate your interest, but so far the book is only available as a free download and there are no plans for printed versions. If you really want it in printed form, copy shops are one easy option.
