What did people do with time-varying filters prior to 2014?

DSP, Plugin and Host development discussion.

Post

S0lo wrote: Wed Mar 27, 2024 12:02 pm
soundmodel wrote: Wed Mar 27, 2024 10:22 am But this is what I was trying to suggest several times, but as I am inexperienced I cannot use the proper vocabulary. The main naive idea I had is that frequency-domain things necessarily need windows or "finite snapshots", and windowed processing leads to discontinuities, or at least to difficulties in managing continuity.
Sorry, my terminology might have caused confusion. By analysis I don't mean processing; I meant mathematical derivation. Totally different things. There is no windowing involved or anything like that.

One of the reasons to use a frequency-domain derivation is that it's much easier in many cases. For example, the equivalent of a transfer function (Z domain) in the discrete time domain is a recurrence relation, which is not always the easiest thing to manipulate, understand and design for.
Is an impulse response a "window" out of many possible impulse responses in the non-LTI case?

https://dsp.stackexchange.com/a/41841

"The whole concept of a time variant impulse response only works if the time scale on which time variation happens is significantly slower than the length of the impulse response. That allows treating the impulse response as piece wise "time invariant" at least during the interval of the convolution."
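The quoted point is easy to check numerically. Below is a sketch of my own (not from the thread): a one-pole smoother `y[n] = (1-a[n])*y[n-1] + a[n]*x[n]` with a time-varying coefficient, compared against a "frozen" LTI approximation that pretends the coefficient `a[n]` is constant. When the modulation period is much longer than the impulse response, the frozen approximation is nearly exact; when it is comparable, it breaks down. All function and variable names here are mine.

```python
import numpy as np

def time_varying_onepole(x, a):
    """One-pole smoother with a per-sample coefficient a[n]."""
    y = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state = (1.0 - a[n]) * state + a[n] * x[n]
        y[n] = state
    return y

def frozen_onepole(x, a):
    """Piecewise-LTI approximation: for each n, filter the entire past
    with the coefficient frozen at a[n] (O(N^2), for illustration only)."""
    y = np.zeros_like(x)
    for n in range(len(x)):
        state = 0.0
        for m in range(n + 1):
            state = (1.0 - a[n]) * state + a[n] * x[m]
        y[n] = state
    return y

rng = np.random.default_rng(0)
N = 400
x = rng.standard_normal(N)
n = np.arange(N)
# Modulation much slower than the ~20-sample impulse response:
a_slow = 0.2 + 0.1 * np.sin(2.0 * np.pi * n / 5000.0)
# Modulation on the same time scale as the impulse response:
a_fast = 0.2 + 0.1 * np.sin(2.0 * np.pi * n / 8.0)

err_slow = np.max(np.abs(time_varying_onepole(x, a_slow) - frozen_onepole(x, a_slow)))
err_fast = np.max(np.abs(time_varying_onepole(x, a_fast) - frozen_onepole(x, a_fast)))
```

With slow modulation the two outputs are practically indistinguishable; with fast modulation the "time-varying impulse response" picture visibly stops describing what the filter does.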

Possibly this same thing explains why fast modulation blows up a filter based on LTI theory?

This also makes it sound like time-varying processing with impulse responses makes things really hard. Probably it's only useful for very accurate analog convolutions.

But this paper says they built such a processor without modulation problems: https://ntnuopen.ntnu.no/ntnu-xmlui/bit ... sequence=1. Now I don't understand where the problem with fast modulation comes from.


Post

The one thing I'm sure of is that once the system becomes non-LTI, the transfer function is no longer valid. Not valid, as in wrong, as in not defined. In other words, we don't have a mathematical tool in the frequency domain to work with non-LTI systems. You have to resort to the time domain.

Now, you might find recent research that claims otherwise; I'm not aware of any. That could very well be due to my lack of knowledge, as I've been off this topic for a while.
www.solostuff.net
Advice is heavy. So don’t send it like a mountain.

Post

But as engineers, as you might expect, we don't just stop because the theory is insufficient. The transfer function can still be useful for approximating, or building intuition about, what a non-LTI system will do and how it will behave. Maybe we can even prove its validity in specific cases.

Post

So you're talking about understanding the processor using frequency plots in order to inform how to implement it using time-domain filters?

Well, maybe, but that sounds like a case for component models.

This is all they did prior to 2014 or so?

Post

No, actually I'm talking about the pure math that you can and cannot use in deriving the filter code from its transfer function (or from its analog topology, if any).

For example, case 2 above was unstable because the derivation is wrong, since it used a frequency-domain derivation for a non-LTI filter.

Direct forms are unstable under fast modulation for the same reason. RBJ, same reason. The math is wrong in the non-LTI case.

Edit: to be fair, direct forms were never designed for the non-LTI case.
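To illustrate the contrast (my own sketch, not code from the thread): a trapezoidal (ZDF) state variable filter in the style of Andrew Simper's linear trapezoidal SVF stays stable even with wild per-sample cutoff modulation, precisely because the integrator topology, not just a transfer function, is discretized. The function and variable names below are mine; the coefficient structure follows Simper's published derivation as I understand it.

```python
import numpy as np

def svf_lowpass(x, fc, fs, k=1.0):
    """Trapezoidal (ZDF) state variable filter, lowpass output.
    Per-sample cutoff fc[n]; k = 1/Q damping. Sketch after Simper's
    linear trapezoidal SVF derivation."""
    ic1eq = ic2eq = 0.0
    y = np.zeros_like(x)
    for n in range(len(x)):
        g = np.tan(np.pi * fc[n] / fs)   # prewarped integrator gain
        a1 = 1.0 / (1.0 + g * (g + k))
        a2 = g * a1
        a3 = g * a2
        v3 = x[n] - ic2eq
        v1 = a1 * ic1eq + a2 * v3
        v2 = ic2eq + a2 * ic1eq + a3 * v3
        ic1eq = 2.0 * v1 - ic1eq         # trapezoidal state updates
        ic2eq = 2.0 * v2 - ic2eq
        y[n] = v2                        # lowpass output
    return y

rng = np.random.default_rng(1)
fs = 48000.0
N = 48000
x = rng.uniform(-1.0, 1.0, N)
# Random per-sample cutoff between 20 Hz and 18 kHz (log-distributed):
fc = 20.0 * (900.0 ** rng.uniform(0.0, 1.0, N))
y = svf_lowpass(x, fc, fs)
```

Running the same modulation through a direct-form biquad whose coefficients are recomputed per sample is the experiment that typically shows transient blow-ups; the SVF output here simply stays bounded.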

Post

soundmodel wrote: Wed Mar 27, 2024 6:08 pm So you're talking about understanding the processor using frequency plots in order to inform how to implement it using time-domain filters?

Well, maybe, but that sounds like a case for component models.

This is all they did prior to 2014 or so?

I kinda missed your point there, didn't I?

We can understand a lot about poles and zeros just by looking at a transfer function. Even when the system is non-LTI, it's useful.

Post

S0lo wrote: Wed Mar 27, 2024 6:21 pm No, actually I'm talking about the pure math that you can and cannot use in deriving the filter code from its transfer function (or from its analog topology, if any).

For example, case 2 above was unstable because the derivation is wrong, since it used a frequency-domain derivation for a non-LTI filter.

Direct forms are unstable under fast modulation for the same reason. RBJ, same reason. The math is wrong in the non-LTI case.

Edit: to be fair, direct forms were never designed for the non-LTI case.
I've thought about this.

It seems like the first DSP filter designers were focused on translating analog filters to digital, yet mathematically they were using somewhat wrong tools for the job.

Again, does this suggest that one should forget everything prior to the SVF, TPT and ZDF business and just study those for non-LTI filters?

Post

soundmodel wrote: Thu Mar 28, 2024 8:47 am
Again, does this suggest that one should forget everything prior to the SVF, TPT and ZDF business and just study those for non-LTI filters?
It could be that these filters are more accurate AND can modulate faster without any side effects.

Post

soundmodel wrote: Thu Mar 28, 2024 8:47 am Again, does this suggest that one should forget everything prior to the SVF, TPT and ZDF business and just study those for non-LTI filters?
For practical use, for someone who just needs ready-made filters? Yeah, why bother. Take the SVF or Sallen-Key or whatever filter from Andrew Simper's papers or "The Art of VA Filter Design" book. These are very good filters that can easily be used as building blocks. They'll just work with whatever you throw at them.

For someone who wants to emulate analog circuits, TPT is a very good starting point. For someone who wants to design their own filters from scratch, completely in digital form, it is useful to have all options at hand. TPT fully relies on digitizing analog structures, and someone may or may not want that for various reasons.

For example, the trapezoidal integration used warps the whole analog spectrum from 0 Hz to ∞ Hz and shrinks it to fit between 0 Hz and Nyquist (half the sampling rate). This (in some cases) has the side effect of taking zeros that originally existed at ∞ Hz (implicit zeros, implied by the denominator's higher order) and bringing them to Nyquist. This is sometimes useful, but maybe sometimes not. It's also why the real analog circuit might sound a bit brighter than its digitized version: because of that attenuation at Nyquist. And while prewarping fixes the cutoff location, AFAIK it doesn't fix the frequency warp/shrink itself.

I have to say again that I still don't fully understand these concepts, so I say the above with a bit of reluctance. I could be corrected.

Again, I like to have all options. TPT is a good option, but I'd also like the option to design a filter completely in digital form, without analog structures, with the ease of using a transfer function in the Z-plane, and yet have it stable in the non-LTI case. That's basically what I was trying to do with that CPS in the rest of the article. It's not without side effects either; I haven't reached a satisfactory point yet.
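The "zeros moved to Nyquist" effect described above is easy to verify numerically. A sketch of my own (illustration, not from the post): bilinear-transform the one-pole analog lowpass H(s) = wc/(s + wc). The digital version picks up a zero at z = -1, so its response is exactly zero at Nyquist, while the analog prototype is merely attenuated there.

```python
import numpy as np

fs = 48000.0
fc = 1000.0
# Prewarped analog cutoff in rad/s:
wc = 2.0 * fs * np.tan(np.pi * fc / fs)

def H_analog(f):
    """Analog prototype H(s) = wc/(s + wc) evaluated at s = j*2*pi*f."""
    s = 1j * 2.0 * np.pi * f
    return wc / (s + wc)

def H_digital(f):
    """Bilinear-transformed version: substitute s = 2*fs*(z-1)/(z+1).
    Written in rational form so the zero at z = -1 is explicit."""
    z = np.exp(2j * np.pi * f / fs)
    return wc * (z + 1.0) / (2.0 * fs * (z - 1.0) + wc * (z + 1.0))

nyq = fs / 2.0
mag_analog_nyq = abs(H_analog(nyq))   # small but clearly nonzero
mag_digital_nyq = abs(H_digital(nyq)) # essentially exactly zero
```

The analog filter still passes a few percent at 24 kHz, while the digitized one is pinned to zero there: exactly the extra brightness difference described above.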

Post

S0lo wrote: Fri Mar 29, 2024 7:41 pm For example, the trapezoidal integration used warps the whole analog spectrum from 0 Hz to ∞ Hz and shrinks it to fit between 0 Hz and Nyquist (half the sampling rate). This (in some cases) has the side effect of taking zeros that originally existed at ∞ Hz (implicit zeros, implied by the denominator's higher order) and bringing them to Nyquist. This is sometimes useful, but maybe sometimes not. It's also why the real analog circuit might sound a bit brighter than its digitized version: because of that attenuation at Nyquist. And while prewarping fixes the cutoff location, AFAIK it doesn't fix the frequency warp/shrink itself.

I have to say again that I still don't fully understand these concepts, so I say the above with a bit of reluctance. I could be corrected.
So let me try to help you fully understand it, because it's really not very complicated at all. If you understand the concept of the Riemann sphere as a model of the complex plane (don't panic if you don't, though I'd suggest searching for some videos about it, because we kind of want to use it as a visual thinking tool; I'm not a visual thinker actually, but in this case it's probably the easiest path), then this is actually very simple: the bilinear transform (= trapezoidal rule) is a Möbius transform that rotates the Riemann sphere in such a way that the imaginary axis of the Laplace plane (= a circle through the "point at infinity" on the extended complex plane / Riemann sphere) gets rotated onto the unit circle of the z-transform plane. So basically we map the entire extended (i.e. with "point at infinity") complex plane onto a sphere, rotate that sphere, and then map back to the complex plane.

So what's with the pre-warping stuff then? Well, it has to do with the way we mapped the inside of the unit circle to one half of the sphere, and the entire rest of the complex plane, all the way out to infinity, to the other half. You could go ahead and do the trigonometry as an exercise (if you really want to), but for our purposes it is sufficient to know that if we pick any angular frequency wc in continuous time pre-transform, then it will end up at the discrete-time angular frequency wd = (2/T)*atan(wc*T/2),
or in terms of frequencies wc = 2*pi*fc and fs = 1/T,
fd = (fs/pi)*atan(pi*fc/fs).

So the trapezoidal integration in a sense puts the frequencies through a saturation function, which happens to be atan()... which is nice, because atan() has an inverse! So if we design a continuous-time filter where we place the cutoff at 2*fs*tan(pi*fc/fs), then after it gets put through the atan() as part of squeezing the imaginary axis onto the unit circle, our cutoff ends up where we want it. And when we divide by 2*fs to get the coefficient we need for our trapezoidal integrator, we end up with the familiar tan(pi*fc/fs) tuning formula.

Nothing magical about it; the bilinear transform (= trapezoidal integration) always maps the same way, with the digital frequencies being atan() of the analog frequencies, but we can tweak (i.e. scale on the frequency axis) the analog filter so that one chosen frequency ends up exactly where we want it. This need not be the cutoff; you can literally pick any frequency you'd like to preserve, but only one.
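A quick numerical check of the mapping and tuning formula (my sketch; the variable names are mine): pick a cutoff, prewarp it with tan(), push it through the atan() mapping, and confirm both that it lands back on the chosen frequency and that the digitized one-pole is exactly -3 dB (gain 1/sqrt(2)) there.

```python
import numpy as np

fs = 48000.0
fc = 3000.0

# Prewarped analog cutoff: wc = 2*fs*tan(pi*fc/fs)
wc = 2.0 * fs * np.tan(np.pi * fc / fs)

# Forward map: an analog angular frequency wa lands at fd = (fs/pi)*atan(wa/(2*fs)),
# so the prewarped cutoff should land exactly back on fc.
fd = (fs / np.pi) * np.arctan(wc / (2.0 * fs))

def H(f):
    """One-pole lowpass H(s) = wc/(s + wc), digitized via the bilinear
    substitution s = 2*fs*(z-1)/(z+1), in rational form."""
    z = np.exp(2j * np.pi * f / fs)
    return wc * (z + 1.0) / (2.0 * fs * (z - 1.0) + wc * (z + 1.0))

mag_fc = abs(H(fc))   # should be exactly 1/sqrt(2)
```

Dividing wc by 2*fs gives the integrator coefficient tan(pi*fc/fs), i.e. the familiar tuning formula from the post above.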

Post

Thanks for the details, mystran. I think the mapping using atan, and consequently between the two complex planes (S to Z), is fairly understandable from your explanation. I may need to look into Riemann spheres further; I've watched a few videos.

I did have a few questions about this when I had a fresh mindset, years back. They escape me now. One, I think, related to how the definition z = e^(s*T) would actually map versus the bilinear transform, which as I understand it is an approximation of that. But it's way beyond this thread's topic, I guess. I may post a new thread about it if I finally get to it again.

Post

S0lo wrote: Sat Mar 30, 2024 9:11 pm I did have a few questions about this when I had a fresh mindset, years back. They escape me now. One, I think, related to how the definition z = e^(s*T) would actually map versus the bilinear transform, which as I understand it is an approximation of that. But it's way beyond this thread's topic, I guess. I may post a new thread about it if I finally get to it again.
The problem is that z = e^(s*T) is not polynomial, so you don't get a rational transfer function. Mapping just the poles (and perhaps zeroes) by z = e^(s*T) leads to the impulse-invariant transform (or pole-zero mapping if we map the zeroes too), but these suffer from aliasing of the response.
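To make that concrete, a small sketch of my own: the impulse-invariant method maps an analog pole s = -a to the digital pole z = e^(-a*T), so the digital impulse response exactly samples the analog one. The aliasing mentioned above shows up in the frequency response (the analog response wraps around Nyquist), not in these time-domain samples.

```python
import numpy as np

fs = 48000.0
T = 1.0 / fs
a = 2.0 * np.pi * 1000.0          # analog pole at s = -a, so h(t) = e^(-a*t)

# Impulse-invariant one-pole: map the pole by z = e^(s*T)
p = np.exp(-a * T)

# Digital impulse response of y[n] = p*y[n-1] + x[n]
N = 64
x = np.zeros(N)
x[0] = 1.0                        # unit impulse
h_digital = np.zeros(N)
y = 0.0
for n in range(N):
    y = p * y + x[n]
    h_digital[n] = y

# Sampled analog impulse response h(n*T) = e^(-a*n*T)
h_analog = np.exp(-a * T * np.arange(N))
err = np.max(np.abs(h_digital - h_analog))   # matches to machine precision
```

The bilinear transform, by contrast, gives up this exact time-domain match in exchange for a frequency axis that never wraps, only compresses.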

Post

S0lo wrote: Wed Mar 27, 2024 6:21 pm No, actually I'm talking about the pure math that you can and cannot use in deriving the filter code from its transfer function (or from its analog topology, if any).

For example, case 2 above was unstable because the derivation is wrong, since it used a frequency-domain derivation for a non-LTI filter.

Direct forms are unstable under fast modulation for the same reason. RBJ, same reason. The math is wrong in the non-LTI case.

Edit: to be fair, direct forms were never designed for the non-LTI case.
Yes it does seem so:

Suppression of Transients in Variable Recursive Digital Filters with a Novel and Efficient Cancellation Method
Vesa Välimäki and Timo I. Laakso
http://legacy.spa.aalto.fi/sig-legacy/s ... 1998j2.pdf

They do make a nice tool for "from analog" models though. Sadly not for the time-varying case.
