Analog Modeling: What is actually being modeled?

DSP, Plugin and Host development discussion.

Post

A good article on modelling plugins with interviews:
http://www.soundonsound.com/sos/aug10/a ... lugins.htm

Post

Jace-BeOS wrote:Korg calls their stuff CMT. Component Modeling Technology. So, does that mean the MS-20, PolySix, et al, are literally modeled at the component level?
I'm working on it...so far I've modeled ALL of the components in the VCA section with a multiply :)

Seriously, it's market-speak to the point that I can't find anything more than a vague statement about modeling components. We assume that means electronic components, but does it mean components on the functional level (e.g., modeling an exponential converter instead of the bits that make up an exponential converter)? (Their market-speak statement does limit the concept to the audio signal path, IIRC, so that particular example is just to illustrate the functional versus atomic definition of "component".)

Now, if the audio path has a voltage divider to drop the gain, would that really be modeled as two resistors and their interaction, or as a multiply? (Just pointing out that no matter what their definition of "component" is, it still doesn't mean there is a one-to-one relationship between components and "models".)

I would say that "component" modeling makes more sense for some synth devices than others. Filters, probably more so; oscillators... well, you could say you're modeling the components, but unless you're running a really high sample rate, any "component" modeling would have little one-to-one correspondence with reality.

Anyway, the Korg statement seems intentionally vague, in order to leave it up to the imagination of the reader. And you can take it two ways: either they are so brilliant that they understand how analog components can be described in digitally sampled systems (wait, you did assume that they didn't just model the components, but modeled them perfectly, right?), or you could conclude that they needed to model the components because they didn't understand well enough how the circuits and their functions could be recreated digitally :wink:
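The voltage-divider question above has a one-line answer in code. A minimal sketch, with hypothetical resistor values: the "two resistors and their interaction" reduce to a single gain constant, applied as a multiply per sample.

```python
def divider_gain(r1, r2):
    """Gain of an unloaded resistive voltage divider:
    r1 from input to output node, r2 from output node to ground."""
    return r2 / (r1 + r2)

def process(samples, r1=10e3, r2=10e3):
    # The whole two-component "model" collapses to one multiply.
    g = divider_gain(r1, r2)
    return [g * x for x in samples]
```

A loaded divider, or one feeding a reactive circuit, would no longer reduce this cleanly, which is where real component modeling starts.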
My audio DSP blog: earlevel.com

Post

Ichad.c wrote:
Urs wrote:Well, Korg is a good example. Next to no-one has a model of the MS-20 filter.
And no-one has a Polivoks model either :shrug:

No capacitors.

The dual saw bass is the meanest thing I've heard from an analog.

http://www.soundonsound.com/sos/jul10/a ... saudio.htm
Wow... the pulse bass example really does it for me. Yowzers! :love:
http://sendy.bandcamp.com/releases < My new album at Bandcamp! Now pay what you like!

Post

Ichad.c wrote:
Urs wrote:Well, Korg is a good example. Next to no-one has a model of the MS-20 filter.
And no-one has a Polivoks model either :shrug:

No capacitors.

The dual saw bass is the meanest thing I've heard from an analog.

http://www.soundonsound.com/sos/jul10/a ... saudio.htm
I believe the Polivoks does have capacitors; it most likely uses the ones inside the op-amps that are usually there to maintain the stability of feedback loops. The keywords here are "op-amp compensation", and the 741 is a classic that has this: http://zebu.uoregon.edu/~rayfrey/431/notes9.pdf

I've not deciphered and checked the particular chip they used in the Polivoks schematic, but from this DIY page they are using the internal capacitor of the op-amps in an SVF structure: http://www.supersetsounds.com/2010/12/d ... le-filter/
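For reference, the generic SVF structure being referred to can be sketched with the textbook Chamberlin digital state-variable filter. This is not a Polivoks model (no op-amp limiting or other non-linearities), just the standard linear digital structure; the cutoff and Q values in any use are up to the reader.

```python
import math

def chamberlin_svf(samples, cutoff, q, fs):
    """Textbook Chamberlin state-variable filter.
    Returns (lowpass, bandpass, highpass) output lists.
    Stable for cutoff well below fs/6; purely linear."""
    f = 2.0 * math.sin(math.pi * cutoff / fs)  # frequency coefficient
    damp = 1.0 / q                             # damping (inverse resonance)
    low = band = 0.0
    lows, bands, highs = [], [], []
    for x in samples:
        low += f * band                  # integrate band into low
        high = x - low - damp * band     # highpass is the residual
        band += f * high                 # integrate high into band
        lows.append(low)
        bands.append(band)
        highs.append(high)
    return lows, bands, highs
```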
The Glue, The Drop - www.cytomic.com

Post

AXP wrote:A good article on modelling plugins with interviews:
http://www.soundonsound.com/sos/aug10/a ... lugins.htm
This was excellent. Thank you for sharing this link.

I think I have a better grasp on this now. It's not my goal to do modeling, merely to comprehend what might be involved. And sure enough, it appears several methods are involved. I have to say that the black box approach is less about modeling than emulating. Also, the idea of some day being able to just edit an electronics schematic and have software on a computer do all the work in real time to model everything... It's cool, but I think that fundamental changes in computer tech have to arrive first. It seems we're at a bit of a plateau in CPU speed increases, and once you go above 8 cores, performance can actually start to degrade (plus, you can't parallelize all types of code to split it up onto cores, and we're far from fully effective/efficient multi-core use).

I'm still waiting for someone to figure out realtime acoustic environment modeling WITH human binaural hearing modeling. :-D
- dysamoria.com
my music @ SoundCloud

Post

Urs wrote:Well, Korg is a good example. Next to no-one has a model of the MS-20 filter. I believe that's because there's no intuitive match in digital filter theory. Biquads, ladders, svfs and such don't get you there, as one integrator is also a differentiator.
Actually the Sallen-Key filter, which is what the MS-20 uses, has a very natural translation to digital. It's a tiny bit more complicated, but the results are actually better, since they don't require compensation, work over the entire frequency range with decoupled cutoff and resonance controls, and allow quick coefficient calculation. It has very low CPU usage and, to my ears, sounds better than an SVF once the original circuit's non-linearities are added. I'll publish a paper on it some day when I get a chance, since it is really a cool algorithm that can find use in applications where you want a decent-sounding filter with minimal CPU and exact analog tone isn't required.
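As a sketch of what a "natural translation to digital" via trapezoidal integration looks like at its simplest, here is the generic textbook one-pole lowpass built on a trapezoidal ("zero-delay feedback") integrator. This is a standard building block, not Cytomic's Sallen-Key algorithm; resonant structures chain stages like this and solve the global feedback loop algebraically.

```python
import math

class OnePoleTPT:
    """One-pole lowpass based on a trapezoidal integrator, with the
    implicit (zero-delay) local feedback solved algebraically."""

    def __init__(self, cutoff, fs):
        g = math.tan(math.pi * cutoff / fs)  # pre-warped integrator gain
        self.G = g / (1.0 + g)               # loop solved for the output
        self.s = 0.0                         # single integrator state

    def process(self, x):
        v = (x - self.s) * self.G   # integrator input after solving the loop
        y = v + self.s              # lowpass output
        self.s = y + v              # trapezoidal state update
        return y
```

Unlike a direct form, the state here corresponds to the capacitor voltage of the analog prototype, which is what makes parameter modulation behave.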
Urs wrote:OTOH it's simple enough to understand from the component side. If one understands how they interact, one can put a match in code, without ever reading a book on bilinear transforms.

That IMHO is component modeling.
The bilinear transform should be taken out and shot, it is horrible and should never be used as it results in digitally abstracted brittle monstrosities.

I would disagree with the latter comment. I would call component modelling something that uses the individual values of resistors, capacitors, diodes, transistors etc. and solves for the voltages and currents around these components, keeping how they are connected intact. With a component model you should be able to put a probe on the actual circuit and read the same values as placing a virtual probe at the same point in the code.

Otherwise, in circuit modelling terms, it is called "macro modelling", which, when done right, is absolutely fine as well. Usually there is a combination of macro and component models making up a useful and accurate total model, and this is also the case for people using SPICE, since they also need to save CPU.

It makes sense to macro model op-amps, since otherwise you would have 20 or so transistors to model per op-amp. It also makes sense to macro model a resistor-capacitor one-pole high-pass or low-pass filter if it is buffered from the surrounding circuit. If you have components like diodes or transistors right next to capacitors and resistors, then most of the time you can't macro model them and get the right result. There are varying levels of macro modelling depending on what details you want, and a typical op-amp macro model in SPICE actually uses a very complicated internal circuit of resistors, diodes and capacitors, as well as various voltage-controlled current sources, to emulate the combined behaviour of all the individual transistors. Audio plugins have to use even more basic macro models, since a full op-amp macro model would still be too slow.

Also remember that even these component-level models are still just bunches of behavioural hacks approximating the actual physics of the components in their various regions of operation. Even a single diode model is made up of a resistor, a current source, and a (depletion) capacitor (see http://qucs.sourceforge.net/tech/node68.html), so three components in total.
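The static part of that diode model is the familiar Shockley equation; a minimal sketch with illustrative parameter values:

```python
import math

def diode_current(v, i_s=1e-14, n=1.0, vt=0.02585):
    """Shockley diode equation: current through an ideal junction at
    voltage v, given saturation current i_s, emission coefficient n,
    and thermal voltage vt (about 25.85 mV at room temperature).
    The series resistance and junction capacitance of a full
    large-signal model are omitted here."""
    return i_s * (math.exp(v / (n * vt)) - 1.0)
```

Already this single equation is a behavioural fit with fitted parameters (i_s, n), not raw physics, which is the point being made above.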
The Glue, The Drop - www.cytomic.com

Post

andy_cytomic wrote:The bilinear transform should be taken out and shot, it is horrible and should never be used as it results in digitally abstracted brittle monstrosities.
I'd say that statement is way too strong. While the BLT in its classical application is indeed of little use for analog modeling, the "hype-of-the-year" ZDF filters are at the same time typically based on the very same BLT, expressed in a topology-preserving way.

Post

Z1202 wrote:
andy_cytomic wrote:The bilinear transform should be taken out and shot, it is horrible and should never be used as it results in digitally abstracted brittle monstrosities.
I'd say that statement is way too strong. While the BLT in its classical application is indeed of little use for analog modeling, the "hype-of-the-year" ZDF filters are at the same time typically based on the very same BLT, expressed in a topology-preserving way.
You are right: what I hate is actually most implementations of the abstracted filters derived from the bilinear transform. I have no problem with different numerical integration methods, including trapezoidal, which I use in part for my own products, but when people use the bilinear transform they are very easily tempted into implementing the filter using the direct form 1/2 or transposed structures, all of which are horrible.

I am really puzzled as to why people think zero-delay feedback or the so-called "topology preserving" transform is actually something new; it's not at all. All these algorithms have been around since before computers were invented; computers are now fast enough to compute them in realtime, but that doesn't make them new. Have a look at any circuit simulation package: it uses all sorts of cool numerical integration schemes, trapezoidal being just one of them, and solves highly non-linear equations with no delay in the feedback, and has been doing so since mainframe days. Audio people seem to have their heads in the sand for some reason, and like making up nonsensical marketing terms for re-inventing only part of the wheel.
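What circuit simulators have been doing since mainframe days can be sketched in a few lines: discretize the circuit's differential equation with trapezoidal integration, then solve the resulting implicit non-linear equation each sample with Newton-Raphson. The RC diode-clipper stage and all component values below are hypothetical, chosen only to illustrate the method:

```python
import math

def clipper_step(v0, u, h, R=2.2e3, C=10e-9, i_s=1e-12, vt=0.02585):
    """One trapezoidal step of a hypothetical RC diode clipper:
        C dv/dt = (u - v)/R - 2*i_s*sinh(v/vt)
    (antiparallel diode pair across the capacitor). The implicit
    equation for the new state v1 is solved with Newton-Raphson."""
    def f(v):   # dv/dt at capacitor voltage v, input u
        return ((u - v) / R - 2.0 * i_s * math.sinh(v / vt)) / C
    def df(v):  # derivative of f with respect to v
        return (-1.0 / R - 2.0 * i_s / vt * math.cosh(v / vt)) / C

    f0 = f(v0)
    v1 = v0  # initial guess: previous state
    for _ in range(50):
        residual = v1 - v0 - 0.5 * h * (f0 + f(v1))
        step = residual / (1.0 - 0.5 * h * df(v1))
        v1 -= step
        if abs(step) < 1e-12:
            break
    return v1
```

This is exactly trapezoidal ("bilinear") integration plus an iterative non-linear solve per step, i.e. the same mathematics the ZDF literature arrives at, minus the marketing terms.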
The Glue, The Drop - www.cytomic.com

Post

andy_cytomic wrote:
Z1202 wrote:
andy_cytomic wrote:The bilinear transform should be taken out and shot, it is horrible and should never be used as it results in digitally abstracted brittle monstrosities.
I'd say that statement is way too strong. While the BLT in its classical application is indeed of little use for analog modeling, the "hype-of-the-year" ZDF filters are at the same time typically based on the very same BLT, expressed in a topology-preserving way.
You are right: what I hate is actually most implementations of the abstracted filters derived from the bilinear transform. I have no problem with different numerical integration methods, including trapezoidal, which I use in part for my own products, but when people use the bilinear transform they are very easily tempted into implementing the filter using the direct form 1/2 or transposed structures, all of which are horrible.
Even this statement I would not support in its full generality. Direct form structures may be just fine for strictly linear systems where no (fast) parameter modulation is required, like equalizers.
andy_cytomic wrote:I am really puzzled as to why people think zero delay feedback or the so called "topology preserving" transform is actually something new, it's not at all. All these algorithms have been around since before computers were invented, computers are now fast enough to compute them in realtime, but that doesn't make them new. Have a look at any circuit simulation package, it uses all sorts of cool numerical integration schemes, trapezoidal is just one of them, and they solve highly non-linear equations with no feedback and have been doing so since mainframe days. Audio people seem to have their heads in the sand for some reason, and like making up non-sensical marketing terms for re-inventing only part of the wheel.
I think what is kind of new is application of the techniques to real-time audio DSP which just wasn't feasible with CPUs from a decade ago.

Post

karrikuh wrote: Even this statement I would not support in its full generality. Direct form structures may be just fine for strictly linear systems where no (fast) parameter modulation is required, like equalizers.
Direct form structures have poor noise performance and bad coefficient precision, and you can't modulate them: three strikes against them, and there are even more reasons not to use them. There are better ways, even for static equalizer applications.
karrikuh wrote:I think what is kind of new is application of the techniques to real-time audio DSP which just wasn't feasible with CPUs from a decade ago.
Then people should just say CPUs are more powerful and leave it at that; they don't need to bang on about something and pretend it's innovative when it's only re-inventing part of a much broader body of methods that has been around since before computers existed.
The Glue, The Drop - www.cytomic.com

Post

andy_cytomic wrote:All these algorithms have been around since before computers were invented; computers are now fast enough to compute them in realtime, but that doesn't make them new. Have a look at any circuit simulation package: it uses all sorts of cool numerical integration schemes, trapezoidal being just one of them, and solves highly non-linear equations with no delay in the feedback, and has been doing so since mainframe days
I doubt anyone uses such variable-step solvers though; they're not even guaranteed to work. It happens quite often that PSPICE gets stuck trying to simulate a circuit, which is not exactly what we want in audio processing ;)

So finding new ways to solve nonlinear systems in realtime is a perfectly valid research field to me.

Richard
Synapse Audio Software - www.synapse-audio.com

Post

Richard_Synapse wrote: I doubt anyone uses such variable-step solvers though; they're not even guaranteed to work. It happens quite often that PSPICE gets stuck trying to simulate a circuit, which is not exactly what we want in audio processing ;)

So finding new ways to solve nonlinear systems in realtime is a perfectly valid research field to me.

Richard
The variable step size is only for efficiency; I usually set it to a fixed step size in SPICE. I agree that new ways of solving non-linear systems are a perfectly valid research field, but I have seen nothing new in anything proposed by anyone in the audio field. I would love to see something that is actually new that tackles stiff non-linear equations efficiently, and I believe the person that does it will not only make loads of money but also receive prestigious prizes in physics.
The Glue, The Drop - www.cytomic.com

Post

andy_cytomic wrote:Direct form structures have poor noise performance and bad coefficient precision, and you can't modulate them: three strikes against them, and there are even more reasons not to use them. There are better ways, even for static equalizer applications.
I admit I never bothered to really look at alternative structures, as for me DF has always worked just fine with 64-bit FP processing (or even 32-bit FP). I usually use cascaded transposed DF2 biquads, which are supposed to combine high CPU efficiency with good numerical properties. But if there really are better structures (without higher CPU cost), I would be interested to learn about them!
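For reference, the transposed direct form II biquad under discussion is only a few lines. A minimal sketch (coefficients assumed already normalized so a0 = 1):

```python
def tdf2_biquad(samples, b0, b1, b2, a1, a2):
    """Transposed direct form II biquad:
        y[n] = b0*x[n] + s1
        s1   = b1*x[n] - a1*y[n] + s2
        s2   = b2*x[n] - a2*y[n]
    Two state variables per second-order section."""
    s1 = s2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + s1
        s1 = b1 * x - a1 * y + s2
        s2 = b2 * x - a2 * y
        out.append(y)
    return out
```

The states s1, s2 have no physical counterpart in the analog prototype, which is one reason coefficient modulation misbehaves here compared with integrator-based structures, even though the static response is fine.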

Post

Non-linear, time-variant systems are basically the real physical world, and the most desired thing in digital audio. If only it worked just like analog equipment.

Post

andy_cytomic wrote:Then people should just say CPUs are more powerful and leave it at that; they don't need to bang on about something and pretend it's innovative when it's only re-inventing part of a much broader body of methods that has been around since before computers existed.
I think what is innovative is the wide realization itself that these approaches can and should be used for realtime audio processing. Plus a number of additional details had to be figured out, such as:
- the realization that trapezoidal integration is the one corresponding to the BLT and as such should be preferred for analog modelling
- the realization that zero-delay feedback *should* be used in audio processing (e.g. Oppenheim and Schafer mention this possibility but immediately discard it as useless, IIRC)
- the usage of the concept of a bilinear integrator module in a block diagram. This automatically addresses the question of the choice of state variables (particularly raised in the Simulanalog article), and allows halving the number of state variables by using canonical forms, compared to straightforward trapezoidal integration
- etc.
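The bilinear integrator module in the last point can be sketched as a one-state block in its common transposed form. This is a generic textbook sketch: a naive trapezoidal implementation keeps both the previous input and the previous output, and folding them into the single state s is what halves the state count per integrator.

```python
import math

class BilinearIntegrator:
    """Trapezoidal (bilinear) integrator in one-state transposed form:
        y[n]   = g*x[n] + s[n]
        s[n+1] = y[n] + g*x[n]
    g is the pre-warped gain tan(pi*cutoff/fs), so the integrator's
    unity-gain frequency matches the analog prototype at `cutoff`."""

    def __init__(self, cutoff, fs):
        self.g = math.tan(math.pi * cutoff / fs)
        self.s = 0.0

    def process(self, x):
        v = self.g * x
        y = v + self.s
        self.s = y + v
        return y
```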
