Building a VSTi of an instrument based on WebAudio?

DSP, Plugin and Host development discussion.

Post

Hi everyone,

I am researching how I can build a VSTi for my synthesizer Viktor NV-1: http://nicroto.github.io/viktor/.
The NV-1 is built on top of the Web Audio API, so I will need to be able to embed a browser into the VSTi.
I am looking at the Chromium Embedded Framework (https://bitbucket.org/chromiumembedded/cef) and the Electron framework by Github (http://electron.atom.io/).
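
To make the question more concrete, here is roughly what I imagine on the CEF side: opening a browser as a child of the window the host hands to the plugin's editor. This is an untested sketch for Windows; EditorClient and openEditorBrowser are just illustrative names, and the exact SetAsChild/CreateBrowser signatures vary between CEF versions.

    #include <windows.h>
    #include "include/cef_app.h"
    #include "include/cef_client.h"

    // Minimal CefClient; CEF's default handler implementations are enough
    // for a first experiment.
    class EditorClient : public CefClient {
    public:
        IMPLEMENT_REFCOUNTING(EditorClient);
    };

    // Called when the host opens the plugin editor and passes us its window.
    void openEditorBrowser(HWND hostWindow) {
        CefWindowInfo info;
        RECT rect = {0, 0, 800, 600};      // size of the plugin UI
        info.SetAsChild(hostWindow, rect); // render into the host-supplied window

        CefBrowserSettings settings;
        CefBrowserHost::CreateBrowser(info, new EditorClient(),
                                      "http://nicroto.github.io/viktor/",
                                      settings, nullptr);
    }

Electron, as far as I can tell, would be a different story, since it expects to own the process rather than live inside a host.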

Has anyone done something like that? Do you guys think this is doable?

Any pointers and guidelines are much appreciated - I am open to all feedback.

Cheers!

Post

This is a great idea, and something I've been hoping to work on for some time now. Unfortunately, it'll probably be quite a difficult task.

If I were doing it, I'd copy and paste the native processing Web Audio nodes from the Chrome source code, and call their processing from the VST callbacks. This way, you get the lowest possible latency. Then I'd create JavaScript bindings for these nodes, in a manner very similar to the Web Audio API, and inject those bindings into the web context, whether you use CEF or Electron (there's a rough sketch of the binding side at the end of this post). If you're okay going Apple-only, WebKitView might be the easiest of all to embed (you can check out my audiounitjs framework if you're interested in going down that route: https://github.com/russellmcc/audiounitjs). If you're targeting Windows, CEF and Electron are the main choices right now.

After you have bindings to the audio processors in your web context, it's a "simple" matter of editing your web code to use these "VST web audio" nodes instead of the Web Audio nodes. You'll also, of course, need to make automation bindings for your web view so that the knobs etc. follow host automation. For tips on this you can check out audiounitjs, although of course the VST API is a bit different. If you made the bindings API-compatible with Web Audio, this should be the easy part.
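
On the first step (calling the extracted node code from the VST callbacks), this is roughly what I mean. Untested sketch against the VST2 SDK; NativeGraph stands in for whatever wrapper you end up putting around the node-processing code you lift out of Chromium.

    #include "audioeffectx.h"

    // Hypothetical wrapper around the node-processing code extracted from
    // Chromium; renders one block of the synth's graph into the given buffers.
    struct NativeGraph {
        void process(float* left, float* right, int frames);
    };

    class WebAudioVsti : public AudioEffectX {
    public:
        WebAudioVsti(audioMasterCallback master)
            : AudioEffectX(master, 1, 0) { // 1 program, 0 params for the sketch
            setNumInputs(0);   // it's an instrument: no audio input
            setNumOutputs(2);  // stereo output
            isSynth();
            canProcessReplacing();
        }

        // The host's audio callback: render the graph directly on the audio
        // thread, with no browser in the signal path, so latency stays minimal.
        void processReplacing(float** inputs, float** outputs,
                              VstInt32 sampleFrames) override {
            graph.process(outputs[0], outputs[1], sampleFrames);
        }

    private:
        NativeGraph graph;
    };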

Anyway, I've thought about this a fair bit, and it sounds like a fun project. That said, I think it's going to be really difficult. Let me know if you have any specific questions.

I think this will be way more valuable once Web Audio ships audio workers, at which point you could have custom nodes implemented by JavaScript on the web, and by either JavaScript or high-performance C++ on the desktop, depending on performance requirements. That would be great, because people could get the exact same sound on the web and the desktop, while keeping the ability to use optimized code on the desktop. So my audiounitjs project is on hold while the Web Audio team figures out audio workers. After that happens, I'll probably do a Mac-only, Audio Unit-only version of what you're talking about.
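
To make the binding-injection step a bit more concrete, here is the rough shape with CEF's V8 API. Untested; VstAudioV8Handler and setParam are made-up names, and this code runs in CEF's render process, so actually getting values over to the plugin's audio code is a separate IPC problem I'm glossing over.

    #include "include/cef_v8.h"

    // Native handler backing a JS function we expose to the page.
    class VstAudioV8Handler : public CefV8Handler {
    public:
        bool Execute(const CefString& name, CefRefPtr<CefV8Value> object,
                     const CefV8ValueList& arguments,
                     CefRefPtr<CefV8Value>& retval,
                     CefString& exception) override {
            if (name == "setParam" && arguments.size() == 2) {
                // Hypothetical: forward (paramIndex, value) to the plugin.
                retval = CefV8Value::CreateBool(true);
                return true;
            }
            return false; // unknown function
        }
        IMPLEMENT_REFCOUNTING(VstAudioV8Handler);
    };

    // In your CefRenderProcessHandler::OnContextCreated override:
    //   CefRefPtr<CefV8Value> fn =
    //       CefV8Value::CreateFunction("setParam", new VstAudioV8Handler());
    //   context->GetGlobal()->SetValue("setParam", fn,
    //                                  V8_PROPERTY_ATTRIBUTE_NONE);

The same pattern scales up to a whole web-audio-shaped object graph, which is what would let the page code stay unchanged.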

Post

I don't think it sounds like such a good idea! A web browser and a JavaScript engine per plug-in means it would be really heavy on resources, and I'd imagine all sorts of threading problems. Better to port a C++ plugin to the Web Audio API via Emscripten.
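
To give an idea of the Emscripten route, the core of it is exporting your C++ render function so the page's JavaScript can call it once per audio block. A minimal, untested sketch (render_block is an illustrative name):

    #include <emscripten/emscripten.h>
    #include <cmath>

    extern "C" {
    // EMSCRIPTEN_KEEPALIVE keeps the symbol exported so JS can reach it.
    EMSCRIPTEN_KEEPALIVE
    void render_block(float* out, int frames, float freq, float sampleRate) {
        static float phase = 0.0f;
        const float twoPi = 6.2831853f;
        for (int i = 0; i < frames; ++i) {
            out[i] = sinf(phase); // stand-in for the real plugin DSP
            phase += twoPi * freq / sampleRate;
            if (phase >= twoPi) phase -= twoPi;
        }
    }
    }

    // Build, roughly:
    //   emcc plugin.cpp -O2 -s EXPORTED_FUNCTIONS='["_render_block"]' -o plugin.js
    // On the page, allocate a buffer on the Emscripten heap and call
    // Module._render_block from a ScriptProcessorNode's onaudioprocess handler.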

We're not quite ready to go fully live with this yet, but check this out:

http://www.webaudiomodules.org/

SMC2015 paper:

http://www.webaudiomodules.org/docs/wam ... mc2015.pdf

Post

logicalhippo wrote:This is a great idea, and something I've been hoping to work on for some time now. [...]
Very helpful. Thank you very much. I will think about all this and if I get stuck, I'll write again.

Post

hibrasil wrote:I don't think it sounds like such a good idea! [...]
Thanks for the reply. It looks like a great project. The thing is, I am trying to do the opposite: I want to enable writing a plugin in JavaScript.
