
Post

The distinguishing factor between an audio-controlled signal and a human-controlled signal is the rate at which samples are received. An audio-rate control signal runs at the same rate as the audio itself and usually comes from the same place, so the interface between them works at the same rate. Other control signals may be generated at a lower rate. Sometimes, passing such a slower control signal as a function argument is all that is required; there may be no more efficient way to handle it at the lower rate. In between lies a big grey area where some mechanisms are better suited than others to produce the least overhead.
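As a sketch of the contrast (the function names here are my own, purely illustrative):

```cpp
// Hypothetical sketch: the same gain applied at control rate (one value
// per block, passed as a plain function argument) versus at audio rate
// (one value per sample, read from a sidechain/modulation buffer).
void process_control_rate(float* out, int numSamples, float gain) {
    for (int i = 0; i < numSamples; ++i)
        out[i] *= gain;                 // gain is constant for the whole block
}

void process_audio_rate(float* out, int numSamples, const float* gainBuf) {
    for (int i = 0; i < numSamples; ++i)
        out[i] *= gainBuf[i];           // gain varies per sample
}
```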

I have made a VST2 plugin that allows the user to change a parameter via the graphical interface (mouse), the standard host-provided parameter modulation methods (MIDI, etc.), or an external audio signal (a sidechained audio signal). The results are always the same, but the execution time is much longer when the audio signal is used. This is not caused by the plugin interface! Somewhere between audio-rate modulation and no modulation there is a breakover point in performance where some form of interpolation may outperform.

Modulation is also generally smoothed, because lots of processes simply do not work well, or do nothing of significance, when modulated at higher rates. Some parameters simply do not modulate at all.

So, maybe we can agree that parameter modulation is a creative concern. Something to think about.

Post

I see. And I'm in favor of whatever is faster and less heavyweight.

There is a problem topic in my musical world which I call determinism, or rather the lack thereof.

Take pitch bends, for example. Plugin A applies them with range X, plugin B with range Y. Say I swap plugins to get a different tone; I didn't ask for a different melody, but the melody will be different. Then I search for ways to set the pitch bend range, or try to see whether it's stored in the synth patches, or figure out what's going on.

Or vibrato. When I used an Integra 7, I got different interpretations of vibrato from one tone to another.

My impression is that note expressions, or pitch lines in Bitwig, aim to give me determinism: I would be able to say what's going on, and changing something, e.g. going "from a dark to a bright tone", would create no moment of surprise.

So, if something makes the plugin perform badly, don't do it; I don't want to kill the format. But if any kind of determinism that is light on CPU could be found, I'd love it.

That's why I asked whether it's defined that smoothFrames=0 results in clearing the smoothing filter, what filter it actually is, and how it tries to reach the target value. I see there's nothing useful to add to the automation structure currently, but if anyone had an idea for making it behave the same from plugin to plugin, that would be really cool.

Post

mystran wrote: Thu Jul 09, 2020 7:02 pm
wrl wrote: Thu Jul 09, 2020 6:38 pm I mean, if we're all just sharing ideas. I'm mostly just opposed to the "list of linear segments" approach. Too complicated, (almost) nobody implements it.
Right. I feel like a better approach would be just events with a smoothing interval (eg. number of samples). Even if you don't bother implementing the exact intervals, you can use the host provided value to guide your heuristics and most importantly it allows for the differentiation between "smooth" and "jump." Even if you use such a scheme to implement full "list of linear segments" it is usually easier to deal with a single list of events as opposed to a bunch of separate arrays for different parameters.

I should probably add that if the host generates an event for every 32 or 64 or whatever samples, then the actual nature of smoothing doesn't usually make a whole lot of difference either, but it is useful to explicitly know when the next event should be expected if the automation is supposed to be continuous.
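To make sure I read the proposal right, I picture something like this (the field names are my guesses, nothing official):

```cpp
#include <cstdint>

// My guess at the kind of event the quoted proposal describes: a target
// value plus a smoothing interval, where smoothFrames == 0 means "jump"
// and anything larger means "smooth over that many samples".
struct ParamEvent {
    uint32_t paramIndex;    // which parameter
    uint32_t sampleOffset;  // position within the current block
    double   targetValue;   // value to reach
    uint32_t smoothFrames;  // 0 = snap, N = reach targetValue within N samples
};
```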
So, if I understand it correctly, the real problem is that the format developers can't determine whether the plugin developer is going to implement an "unspoken" requirement.

By that I actually mean two corner cases, A) and B), of which the latter looks hard to require.

A) As I understand it, at a "snap point" event (smoothFrames==0) the plugin should really be capable of setting the parameter immediately to the snap value (targetValue).

B) But, maybe more importantly: if there is a snap point, a following event that contains a targetValue should be approached by the smoothing filter system in a straight line, correct?
Because why wouldn't it? The edge is already there because of the snapping in the previous event, so why should the system then continue with some curvature towards the next target?
But I do see how it could "curve" or "not reach the target value" if the plugin's smoothing system is not reset at the snap point.

And helping the plugin developer do the only sensible thing in this corner case is hard, I guess.

So, how could Opi help the plugin developer handle A) in the only right way? I guess that isn't so hard; snapping is already documented.
But B) is hard. How could Opi help the plugin developer handle B) in the (in my opinion) only sensible way?

Is it doable by documentation? Or would a requirement for each possible smoothing system be conceivable?

What smoothing systems (that a plugin developer would use to implement the current Opi event structure) exist?

I'm not well versed here. All I know off the top of my head are
1) linear, and
2) param = param*stay + (1-stay)*new_param
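As a sketch of the difference between the two (my own framing): 1) reaches the target exactly after the given number of frames, while 2) curves and only approaches the target asymptotically, which is exactly the behaviour I worry about in B).

```cpp
// Per-sample step of each smoother. The linear one hits the target
// exactly when framesLeft counts down to 1; the one-pole version only
// ever gets asymptotically close, which is how a smoother can "curve"
// or "not reach the target value".
float linear_step(float param, float target, int framesLeft) {
    return param + (target - param) / (float)framesLeft;   // framesLeft >= 1
}

float one_pole_step(float param, float target, float stay) {
    return param * stay + (1.0f - stay) * target;          // 0 <= stay < 1
}
```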

The point of writing all this is just that Opi actually allows for high precision without additional CPU usage, and maybe even encourages it.
But this one thing, B), does not really seem to be required, so from the host's perspective, the possible high-precision way is not guaranteed to be done by the plugin.

Post

Probably the easiest way to implement linear interpolation for parameters is to just store the target value, the target time-stamp, and a backwards per-sample delta (audio or modulation rate, doesn't matter). Then you compute (targetTime-currentTime). If this value is positive, you multiply it by the backwards delta and add the result to the target value. If it's negative, then we are already past the target time and we just use the target value as-is. If another event with a new target arrives, we first use this same method to figure out where we are with regard to any potential previous interpolation, then use that (rather than the previous target) when computing the new backwards delta. Note that you don't really need the previous value at all; you just need the slope, the target, and the point in time where you hit the target.

This approach can trivially deal with any sort of overlapping events on top of each other, always producing some sort of continuous piecewise-linear interpolation curve. It also allows you to choose a faster code path on a per-block basis for anything that doesn't need interpolation at all. If you want to snap, you just set the targetTime to the current time so that the interpolation interval is effectively zero.
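A rough sketch of that scheme (all names are illustrative):

```cpp
// Piecewise-linear parameter interpolation from just a target value, a
// target time-stamp, and a backwards per-sample slope.
struct LinearParam {
    double    targetValue = 0.0;
    long long targetTime  = 0;    // absolute sample index where the target is hit
    double    backDelta   = 0.0;  // per-sample slope, measured backwards from the target

    // Value at absolute sample time 'now'.
    double valueAt(long long now) const {
        long long remaining = targetTime - now;
        if (remaining <= 0)
            return targetValue;                        // past the target: hold it
        return targetValue + (double)remaining * backDelta;
    }

    // New event: reach 'value' at time 'now + smoothFrames'.
    void setTarget(double value, long long now, long long smoothFrames) {
        double current = valueAt(now);   // resolve any interpolation still in flight
        targetValue = value;
        if (smoothFrames <= 0) {
            targetTime = now;            // snap: zero-length interpolation interval
            backDelta  = 0.0;
        } else {
            targetTime = now + smoothFrames;
            // Slope such that valueAt(now) == current and valueAt(targetTime) == value.
            backDelta  = (current - value) / (double)smoothFrames;
        }
    }
};
```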

edit: The nice thing about this kind of scheme is that it can deal with inconsistent data. If the user is turning a MIDI knob or adjusting a GUI parameter on the fly, you don't really know when the next event is going to arrive, but you can try to make some sort of a heuristic estimate (eg. based on the size of the jump and/or the time-intervals between previously observed automation events) and it doesn't really matter much whether the estimate is correct or not, you just end up with slightly too little or (preferably) slightly too much smoothing.

Post

A) So, the total system should be able to handle planned, composed songs where I do things for 5 minutes, and then press play.

B) And on the other hand, the total system should make a live performance artist happy, where things are to be interpreted as fast as possible and as correct as possible.

And my personal wish is that the "composer" gets the same "automation interpretation" from every plugin (see A).

And my personal wish is also that the "live artist" gets the same "automation interpretation" from every plugin (see B).

-------

I would just find it cool to see something in Opi that forces everyone to do the best interpretation, for example the system you have just described.

But if this were enforced via unit tests, the hard problem would be that the plugin developer would have to see what "in effect full-rate automation" he is actually using in his plugin, and he would have to unit-test that against the required interpretation, using full-rate buffers.

Because there is no way for Opi to provide a predefined get_perfect_parameter(current_frame) for every plugin programmer, or is there? Such an inline function would not compile to fast code, or would it?
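For what it's worth, a sketch of what such a hypothetical function could look like, reusing the backwards-delta scheme from mystran's post above; it's small enough for a compiler to inline into the audio loop, so per-frame evaluation is just a compare, a multiply and an add:

```cpp
// Hypothetical get_perfect_parameter(current_frame): evaluate the
// piecewise-linear automation curve at an arbitrary frame.
inline double get_perfect_parameter(long long currentFrame,
                                    double targetValue,
                                    long long targetTime,
                                    double backDelta) {
    long long remaining = targetTime - currentFrame;
    return remaining > 0 ? targetValue + (double)remaining * backDelta
                         : targetValue;   // past the target: hold it
}
```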

Sorry, determinism really has me here. I was too frustrated by VST plugins in the past. ;-)

Post

I would just find it cool to see something in Opi that forces everyone to do the best interpretation
i think the plugin format should give the plugin the number of transition/smoothing samples - as mystran's proposal does - but not specify the transition curve/method. it should be up to the plugin implementor to choose the appropriate curve, because only the plugin implementor can possibly know what is most appropriate.

imagine a simple gain plugin that receives an automation event to transition from its current gain value of 0.25 (in normalized units) to 0.75 within 100 samples. if the format specifies that there should be a linear transition between *normalized* parameter values, that might not be the best thing to do. instead, it may be better to convert the current and target values to dB, do a linear transition in the dB domain, and convert back - the rationale being that this should give a "perceptually linear" transition. similar considerations apply to frequencies, where a linear transition in the pitch (i.e. log-frequency) domain may be more meaningful.

...i'm assuming here that the plugin interface uses normalized parameters in the range 0..1. of course, it could also use another scheme, supporting arbitrary ranges by reporting min/max values to the host and possibly even mapping functions - but i think that would be overly complicated.
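as a sketch, the dB version of that gain example might look like this (the mapping from normalized value to amplitude is an illustrative assumption here):

```cpp
#include <algorithm>
#include <cmath>

// interpolate linearly in the dB domain instead of between raw
// normalized values, for a "perceptually linear" gain transition.
float gain_transition(float startNorm, float endNorm, float t /* 0..1 */)
{
    // convert to dB, guarding against log(0)
    float startDb = 20.0f * std::log10(std::max(startNorm, 1.0e-6f));
    float endDb   = 20.0f * std::log10(std::max(endNorm,   1.0e-6f));
    float db      = startDb + (endDb - startDb) * t;   // linear in dB
    return std::pow(10.0f, db / 20.0f);                // back to amplitude
}
```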

Post

I understand. But I was coming from another direction, with a different picture in mind.

I thought: a new plugin format, let's give it full determinism. That was my single, overall wish.

1) Ok, simple approach: full rate automation. Then someone says: Too slow.

2) Ok, then linear segments. Then someone says: Too complicated.

3) Ok, what about adding definitions to the format? Then a possible answer is: of the definitions on offer, only some would be exactly right for a given plugin. (May I phrase it like this?)

Yes, the transition should be appropriate to the gain plugin. But in my mind this would all have a different layout, a different order:

1) Get user input.
2) Send the input data to the plugin with the best compromise between fast and accurate.
3) Only now does the plugin transform it according to its own wishes.

"Linear" would have been only a unwanted case of decompression error (approximation error), but not the plugin format forcing anyone to do something in a linear fashion that should not be linear.

Actually the word linear only landed in my thinking because of trying to reconstruct the best signal from a compressed version of it.

Post

i actually think, even within an instrument, you cannot expect all patches to behave the same way, say with regard to pitch-bend range. some patches may want an octave of bend range, others may want one or two semitones. some patches may be monophonic, others may be polyphonic. the sound designer does all of these things intentionally. different instruments can behave in different ways; it would be very restrictive and inflexible otherwise.

...although, in particular with regard to pitch bend, one could perhaps meaningfully just pass a float or double and say: it's always to be interpreted in semitones. the problem here could perhaps be backward compatibility with midi, which delivers pitch bend as values in the range -8192...8191 and leaves it to the instrument to decide whether to map it to an octave, a semitone or any other range...hmmm
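as a sketch, the mapping that midi currently leaves to the instrument (the range argument being the instrument's choice):

```cpp
// midi delivers bend as -8192..8191 and the instrument picks the range;
// passing semitones directly would move that decision upstream.
double midi_bend_to_semitones(int bendValue /* -8192..8191 */,
                              double rangeSemitones /* e.g. 2.0 or 12.0 */)
{
    return (double)bendValue / 8192.0 * rangeSemitones;
}
```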

Post

Yes. I once had an argument like this over a vibrato. I bought an Integra 7 and tried to make a funky tune that would slide and vibrate some tone. Then I switched the patch, and the vibrato speed completely changed.

So, my argument then was that I would be happy to have two parameters. The patch designer gives me the inspiration I admittedly asked for, but I would want a second parameter which, when used, overrides the original vibrato speed.

Control over little pitch things has become more popular (Cubase, Bitwig), and maybe I'm a bit of an intrusive user, intruding into the patches. And lazy, because I could correct my vibrato notation. But in fact, the vibrato of said patch couldn't even be set to the speed I liked; only with the other patch was that possible.

Post

Music Engineer wrote: Sat Jul 11, 2020 3:51 pm one could perhaps meaningfully just pass a float or double and say: it's always to be interpreted in semitones. the problem here could perhaps be backward compatibility with midi, which delivers pitch bend as values in the range -8192...8191 and leaves it to the instrument to decide whether to map it to an octave, a semitone or any other range...hmmm
To be honest, the plugin format I sketched yesterday had these opcodes:

set_voice_pitch_in_cents
set_voice_pitch_in_hertz
trigger_voice
release_voice

I'm not sure what Opi supports that could be like note expressions; I need to read the header file again. In my own attempt, I tried opaque voice handles and transparent voice indexes, also with an attempt at voice stealing performed by the host. I actually preferred opaque handles, like pointers, but I don't know. I couldn't find any really good idea.
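Roughly, the opaque-handle variant looked something like this; just a sketch from memory, nothing final (the velocity argument is an illustrative guess):

```cpp
// The handle is opaque to the host, like a pointer; the plugin owns
// whatever it points to.
typedef struct opi_voice* opi_voice_handle;

opi_voice_handle trigger_voice(double velocity);
void set_voice_pitch_in_cents(opi_voice_handle v, double cents);  // MIDI key * 100
void set_voice_pitch_in_hertz(opi_voice_handle v, double hertz);
void release_voice(opi_voice_handle v);
```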

And I have no idea about VST3. I should, but last time, using COM via Go was frustrating, to say the least. :hihi:

Post

beginzone wrote: Sat Jul 11, 2020 4:14 pm I'm not sure what Opi supports that could be like note expressions. I need to read the header file again.
It does not, because it was not supposed to be any kind of finished spec, but rather simply a starting point for discussion.

Post

beginzone wrote: Sat Jul 11, 2020 4:14 pm set_voice_pitch_in_cents
set_voice_pitch_in_hertz
These are the sorts of things that should be handled in optional convenience methods: converting between cents, hertz, dB, etc. VST 2 has a few of these, but I never cared for them and rolled my own since I had very specific needs. However, they are nice to have as a starting point if you do need to customize them.

Although I'm not sure if you'd set pitch in cents so much as adjust it. :lol:
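For instance, helpers along these lines (my own versions, not the actual VST 2 ones; cents here are measured from MIDI note 0, i.e. MIDI key * 100):

```cpp
#include <cmath>

// Optional convenience conversions between cents, hertz and dB.
// 8.1757989156 Hz is MIDI note 0 at A440 tuning.
inline double cents_to_hertz(double cents) {
    return 8.1757989156 * std::pow(2.0, cents / 1200.0);
}
inline double hertz_to_cents(double hz) {
    return 1200.0 * std::log2(hz / 8.1757989156);
}
inline double gain_to_db(double gain) { return 20.0 * std::log10(gain); }
inline double db_to_gain(double db)   { return std::pow(10.0, db / 20.0); }
```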

Post

mystran wrote: Sat Jul 11, 2020 5:08 pm It does not, because it was not supposed to be any kind of finished spec, but rather simply a starting point for discussion.
and a very good starting point it is, indeed. big thumbs up for your effort! i'd really love this to seriously take off. i'll have to leave now but will make some more detailed comments later...

Post

mystran wrote: Sat Jul 11, 2020 5:08 pm
beginzone wrote: Sat Jul 11, 2020 4:14 pm I'm not sure what Opi supports that could be like note expressions. I need to read the header file again.
It does not, because it was not supposed to be any kind of finished spec, but rather simply a starting point for discussion.
I must say, I'll stop trying to explain my unfinished thoughts and implicitly motivating, requiring or disturbing KVR readers with my thought-explanations, if that's what is happening. In other words, I should have contributed something that could be tested against the proposal, or helped, or just done something.

But... I can't stop myself from saying something like this... it's so hard! (defining a standard, I mean, because I have no broad experience)

I find Opi very good, because making a proposal that looks this clean isn't easy.

I don't know how I could contribute. And admittedly, regarding the "live situation" I have deep knowledge gaps. You don't have to explain it, but I was tempted to ask where in the automation anything overlaps at all. Like MIDI data that comes in live and could revert something, or a buffer-size performance issue, or so?

The point I'm trying to make: I know a lot about my wishes, but very little about the situations of you people on KVR. I don't really know anything about making plugins. I tried it once, and it did not work out as a motivated project. Actually, it's hidden somewhere in the big "old vsts" thread, from 2007, and now so much time has passed and I find I've only learned a little bit since, which feels bad or weird.

And when someone here talks about math, I'm immediately lost. I don't know many of the fundamental words you use.

So, I'm glad you made the proposal! I hope that maybe Opi goes on. And that Vertion is unbanned.

(Since I started watching Jonathan Blow's stream, I've been explaining myself like never before. He does that, so I kind of started doing it too and can't really stop.)

Post

syntonica wrote: Sat Jul 11, 2020 5:38 pm Although I'm not sure if you'd set pitch in cents so much as adjust it. :lol:
:D

I tried to use the units (of the different things) so that they match up somehow. And "key" clashes with keyboard keys, so much so that at some point I started saying "pc_keyboard_key".

Scala uses cents in some way. I believe when there is a dot in the pitch description, it's cents, like MIDI key * 100, with additional decimal digits after the point. And fractions (everything that has no dot in the string) are frequency ratios. So I just used that yesterday, somehow.
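Something like this is how I read the convention (a sketch, not actual Scala parsing code):

```cpp
#include <cmath>
#include <string>

// Scala pitch convention as I understand it: a value containing a dot
// is in cents, anything else is a frequency ratio like "3/2" or "2".
// Returns the pitch as a ratio relative to 1/1.
double scala_pitch_to_ratio(const std::string& s)
{
    if (s.find('.') != std::string::npos)
        return std::pow(2.0, std::stod(s) / 1200.0);  // cents -> ratio

    size_t slash = s.find('/');
    if (slash == std::string::npos)
        return std::stod(s);                          // plain integer ratio, e.g. "2"

    double num = std::stod(s.substr(0, slash));
    double den = std::stod(s.substr(slash + 1));
    return num / den;                                 // e.g. "3/2" -> 1.5
}
```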
