CLAP: The New Audio Plug-in Standard (by U-he, Bitwig and others)

DSP, Plugin and Host development discussion.

Post

Not a plugin, but I released an update to yabridge yesterday that lets you use unported Windows CLAP plugins on Linux as if they were regular native Linux CLAP plugins, with full CLAP 1.1 functionality.

(though I do have a couple of plugins here that have had CLAP support since February this year ;))

EDIT: Oops, sorry! I thought I was posting in the tracking thread. Can't remove posts on KVR.


Post

Hanz Meyzer wrote: Thu Nov 03, 2022 2:36 pm Surely this has already been asked, but I can't find it... What is the best approach to support CLAP, VST3 and AU if you start from scratch? I mean, can CLAP be extended to VST3 or so? Do you base your plugin on CLAP and then extend to VST3, or vice versa? Is there a VST3-to-CLAP wrapper? I couldn't find any hint on the CLAP GitHub page.
There are three strategies:

1. Write all three plugin formats by hand. Let's leave this aside as unacceptable for most folks.

2. Use a framework, which will give you either a common denominator or a common denominator with extensions. iPlug, JUCE and DISTRHO all support CLAP in various configurations. The JUCE extensions for CLAP are not part of JUCE core, but they do allow you to escape beyond the common denominator; I'm not sure about the others. This is the strategy we use in Surge XT.

3. Write a CLAP and use the emerging wrappers (see the minimal entry-point sketch below). We currently have a VST3 wrapper which consumes a CLAP and presents it as a VST3, adapting the APIs. Since CLAP has features VST3 does not, some of your CLAP code paths will never be called, but it works great. This is the approach Urs has mentioned for u-he, and one I think we might take for some future Surge-adjacent projects (although we will still use JUCE for our UI).
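For reference, option 3 starts from the plain C CLAP ABI: a .clap binary exports a single clap_entry symbol, and everything else (plugins, extensions) hangs off the factory it returns. A minimal sketch, assuming the official clap headers; the callbacks here are just stubs:

```c
#include <clap/clap.h>
#include <string.h>

/* Stub callbacks; a real plugin would return a clap_plugin_factory_t
   from get_factory() and create its clap_plugin_t instances there. */
static bool entry_init(const char *plugin_path) { (void)plugin_path; return true; }
static void entry_deinit(void) {}

static const void *entry_get_factory(const char *factory_id)
{
    if (!strcmp(factory_id, CLAP_PLUGIN_FACTORY_ID))
        return NULL; /* a real plugin returns &its_plugin_factory here */
    return NULL;
}

/* clap_entry is the one symbol a host (or a format wrapper) looks up
   in the .clap binary; everything else is reached through the factory. */
CLAP_EXPORT const clap_plugin_entry_t clap_entry = {
    .clap_version = CLAP_VERSION_INIT,
    .init         = entry_init,
    .deinit       = entry_deinit,
    .get_factory  = entry_get_factory,
};
```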

Post

Oh, and the CLAP wrapper is here: https://github.com/free-audio/clap-wrapper

Post

Reaper 6.71 with CLAP (first official release after several months of betas with CLAP support) :party: :party:

Post

lalo wrote: Mon Nov 28, 2022 7:22 pm Reaper 6.71 with CLAP (first official release after several months of betas with CLAP support) :party: :party:
I've had the pre-release installed for a bit but I'm glad to see that it's in the official distribution now.

Post

:tu:

Let's hope they will finally fix sample drag-and-drop from the DAW timeline to sampler plugins on Mac (other than the native ReaSamplomatic, where it still works), which broke around v6.65 or 6.66 :borg:

Window-drag resizing of VST2 plug-ins should be fixed on Mac too (but maybe it works in CLAP :lol: ), although that has never really worked flawlessly for me since I switched last December.

I posted both issues on their forum a good while ago, but haven't heard anything from the devs as to whether they acknowledge, or even care about, them (let's hope so, at least).
The GAS is always greener on the other side!

Post

Guys, I'm out all week at AWS re:Invent. I'll try to get to feedback and new posts as I have time.

EDIT: Duh. Wrong thread.

Post

Plugin actions being initiated by the host could result in race conditions.

Post

Regarding the thread-pool interface, it would probably help to know the number of concurrent threads and the thread ID itself in the callback. Otherwise I wonder how you handle the accumulation buffer, or whether you need to have one buffer per voice.

my 2 cents
Olivier Tristan
Developer - UVI Team
http://www.uvi.net

Post

Well, the plug-in says how many separate tasks it wants to be scheduled. Whether or not the host then distributes that over a number of cores is up to the host.

I think typically one would schedule one task for each voice, but it's deliberately left open to do whichever way the plug-in dev sees fit.
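For illustration, a rough sketch of what that looks like with the thread-pool extension (clap/ext/thread-pool.h). The struct and buffer names are made up and the factory/extension plumbing is omitted; the plugin asks for one task per voice from process(), and if the host declines it renders the voices serially:

```c
#include <clap/clap.h>
#include <string.h>

enum { MAX_VOICES = 64, MAX_BLOCK = 4096 };

/* Hypothetical synth state; these names are illustrative, not part of CLAP. */
typedef struct {
    const clap_host_t             *host;
    const clap_host_thread_pool_t *host_thread_pool; /* from host->get_extension(host, CLAP_EXT_THREAD_POOL) */
    uint32_t num_voices;
    uint32_t nframes;
    float    voice_out[MAX_VOICES][MAX_BLOCK];       /* one scratch buffer per voice */
} my_synth_t;

static void render_voice(my_synth_t *s, uint32_t v)
{
    /* per-voice DSP writing s->nframes samples into s->voice_out[v] */
    (void)s; (void)v;
}

/* clap_plugin_thread_pool.exec(): the host calls this once per requested
   task index, possibly from several worker threads at the same time. */
static void my_thread_pool_exec(const clap_plugin_t *plugin, uint32_t task_index)
{
    render_voice((my_synth_t *)plugin->plugin_data, task_index); /* one task per voice */
}

static const clap_plugin_thread_pool_t s_thread_pool_ext = {
    .exec = my_thread_pool_exec, /* returned from clap_plugin.get_extension() for CLAP_EXT_THREAD_POOL */
};

static clap_process_status my_process(const clap_plugin_t *plugin, const clap_process_t *p)
{
    my_synth_t *s = plugin->plugin_data;
    s->nframes = p->frames_count;

    /* Ask the host to schedule num_voices tasks; request_exec() blocks until
       every exec() call has returned, or returns false if the host declines. */
    bool pooled = s->host_thread_pool &&
                  s->host_thread_pool->request_exec(s->host, s->num_voices);
    if (!pooled)
        for (uint32_t v = 0; v < s->num_voices; ++v)
            render_voice(s, v);

    /* Merge the per-voice buffers into the main output bus (mono here for brevity). */
    float *out = p->audio_outputs[0].data32[0];
    memset(out, 0, sizeof(float) * p->frames_count);
    for (uint32_t v = 0; v < s->num_voices; ++v)
        for (uint32_t i = 0; i < p->frames_count; ++i)
            out[i] += s->voice_out[v][i];

    return CLAP_PROCESS_CONTINUE;
}
```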

Post

Reading that Epic adopted CLAP for Unreal is fantastic news!
That should open up completely new avenues for audio plugin developers.
https://cleveraudio.org/news/half-a-year-in/
Congrats to everybody involved!

Cheers,

Tom
"Out beyond the ideas of wrongdoing and rightdoing, there is a field. I’ll meet you there." - Rumi
ScreenDream Instagram Mastodon

Post

Urs wrote: Thu Dec 15, 2022 3:22 pm Well, the plug-in says how many separate tasks it wants to be scheduled. Whether or not the host then distributes that over a number of cores is up to the host.

I think typically one would schedule one task for each voice, but it's deliberately left open to do whichever way the plug-in dev sees fit.
Yes, one task per voice, so at the end you will need to take all the voice buffers and merge them into a single common bus, the main one. In my case, I try to reuse the voice buffers for better cache handling, so I really just need one voice buffer per thread.
This info is easy to give to the plugin.
Olivier Tristan
Developer - UVI Team
http://www.uvi.net

Post

otristan wrote: Fri Dec 16, 2022 8:33 am
Urs wrote: Thu Dec 15, 2022 3:22 pm Well, the plug-in says how many separate tasks it wants to be scheduled. Whether or not the host then distributes that over a number of cores is up to the host.

I think typically one would schedule one task for each voice, but it's deliberately left open to do whichever way the plug-in dev sees fit.
Yes, one task per voice, so at the end you will need to take all the voice buffers and merge them into a single common bus, the main one. In my case, I try to reuse the voice buffers for better cache handling, so I really just need one voice buffer per thread.
This info is easy to give to the plugin.
So allocate buffers on a per-thread basis, let voice tasks use the buffers of whichever thread happens to run them, then merge per-thread rather than per-voice at the end?

I don't think there's a sane way to do this with the current API. As you say, knowing the number of threads in the pool and having sequential indexes for them that are passed to the task function would enable these kinds of schemes where tasks "borrow" per-thread resources.

With the current API, the sane thing to do is just to allocate per voice, but I'd support a revision of the API for these extra bits of information; the implementation cost should be low, the performance benefits should be real even if not necessarily huge, and really the only downside is that it'd be a breaking change to the API.
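To make the proposed revision concrete, here's a sketch of the per-thread-buffer scheme under the assumption that exec() also received a dense worker-thread index. This is not the current CLAP 1.x API, and all names below are made up:

```c
#include <stdint.h>

enum { MAX_POOL_THREADS = 32, MAX_BLOCK = 4096 };

/* Made-up plugin state for the sketch. */
typedef struct {
    uint32_t nframes;
    float    thread_accum[MAX_POOL_THREADS][MAX_BLOCK]; /* one accumulation buffer per worker thread */
} my_synth_t;

static void render_voice_add(my_synth_t *s, uint32_t voice, float *accum)
{
    /* per-voice DSP that adds s->nframes samples into accum */
    (void)s; (void)voice; (void)accum;
}

/* HYPOTHETICAL exec(): in CLAP 1.x the callback is only
   exec(const clap_plugin_t *plugin, uint32_t task_index); the extra
   thread_index argument is the revision being discussed. */
static void exec_with_thread_index(my_synth_t *s, uint32_t task_index, uint32_t thread_index)
{
    /* Borrow the buffer that belongs to the thread running this task;
       no two concurrent tasks share it, so no locking is needed. */
    render_voice_add(s, task_index, s->thread_accum[thread_index]);
}

/* After request_exec() returns, merge per-thread rather than per-voice:
   thread_count buffers instead of one per voice. The buffers must have
   been zeroed before the tasks were scheduled. */
static void merge_threads(my_synth_t *s, float *out, uint32_t thread_count)
{
    for (uint32_t t = 0; t < thread_count; ++t)
        for (uint32_t i = 0; i < s->nframes; ++i)
            out[i] += s->thread_accum[t][i];
}
```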

Post

mystran wrote: Fri Dec 16, 2022 11:12 am
otristan wrote: Fri Dec 16, 2022 8:33 am
Urs wrote: Thu Dec 15, 2022 3:22 pm Well, the plug-in says how many separate tasks it wants to be scheduled. Whether or not the host then distributes that over a number of cores is up to the host.

I think typically one would schedule one task for each voice, but it's deliberately left open to do whichever way the plug-in dev sees fit.
Yes, one task per voice, so at the end you will need to take all the voice buffers and merge them into a single common bus, the main one. In my case, I try to reuse the voice buffers for better cache handling, so I really just need one voice buffer per thread.
This info is easy to give to the plugin.
So allocate buffers on a per-thread basis, let voice tasks use the buffers of whichever thread happens to run them, then merge per-thread rather than per-voice at the end?

I don't think there's a sane way to do this with the current API. As you say, knowing the number of threads in the pool and having sequential indexes for them that are passed to the task function would enable these kinds of schemes where tasks "borrow" per-thread resources.

With the current API, the sane thing to do is just to allocate per voice, but I'd support a revision of the API for these extra bits of information; the implementation cost should be low, the performance benefits should be real even if not necessarily huge, and really the only downside is that it'd be a breaking change to the API.
I can think of a number of (silly) models:
1.) When running at 48 kHz and receiving 480 samples per cycle, I could request n * 480 tasks, where n is the number of channels, so that each task does a single scale-this-sample-by-some-factor multiplication. This is kind of an implicit, unmapped, direct mapping of task number to work item.
2.) I could prepare an array/list of structures where each element represents the parameters of a single task, e.g. split the incoming buffer into 16 tasks of 480/16 samples each. This is how to map the tasks explicitly; I could even put a lambda with captured parameters into the array (see the sketch below).

Actually, I think the sane model is more the map/reduce one: in the mapping phase of map/reduce, the parallel tasks have cleanly separated work contexts, so there's no overlap or need for syncing.

I'm not saying this is sound practice, but it is valid, and that is actually a challenge with the approach. A single threading-amok plugin which says "okay, I need 1000 tasks per run" can spoil the fun... I bet DAWs will introduce a "don't allow the thread-pool extension" blacklist for plugins.

BTW, how is the thread layout in Bitwig's "per plugin" isolation mode, i.e. a process per plugin?
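For what it's worth, the explicit mapping in 2.) already works with the existing extension: build a task table before calling request_exec() and let each task index select a disjoint slice of the block, so the map phase needs no locking and there is nothing to reduce. A rough sketch with made-up names:

```c
#include <clap/clap.h>

enum { NUM_CHUNKS = 16 };

/* Made-up task table: each entry describes one disjoint slice of the block. */
typedef struct { const float *in; float *out; uint32_t offset; uint32_t count; } chunk_task_t;

typedef struct {
    const clap_host_t             *host;
    const clap_host_thread_pool_t *host_thread_pool; /* CLAP_EXT_THREAD_POOL */
    chunk_task_t                   tasks[NUM_CHUNKS];
} my_fx_t;

static void do_chunk(my_fx_t *s, uint32_t task_index)
{
    chunk_task_t *t = &s->tasks[task_index];
    for (uint32_t i = 0; i < t->count; ++i)
        t->out[t->offset + i] = 0.5f * t->in[t->offset + i]; /* trivial per-sample work */
}

/* clap_plugin_thread_pool.exec() callback: each task touches only its own
   slice, so the "map" phase needs no synchronization. */
static void my_exec(const clap_plugin_t *plugin, uint32_t task_index)
{
    do_chunk((my_fx_t *)plugin->plugin_data, task_index);
}

static void run_block(my_fx_t *s, const float *in, float *out, uint32_t frames)
{
    uint32_t chunk = (frames + NUM_CHUNKS - 1) / NUM_CHUNKS;
    for (uint32_t c = 0; c < NUM_CHUNKS; ++c) {
        uint32_t off = c * chunk < frames ? c * chunk : frames;
        uint32_t len = off + chunk <= frames ? chunk : frames - off;
        s->tasks[c] = (chunk_task_t){ in, out, off, len };
    }
    /* One task per chunk; fall back to a serial loop if the host declines. */
    if (!s->host_thread_pool || !s->host_thread_pool->request_exec(s->host, NUM_CHUNKS))
        for (uint32_t c = 0; c < NUM_CHUNKS; ++c)
            do_chunk(s, c);
}
```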

