AUv3 in Xcode: how do I set it all up?

DSP, Plugin and Host development discussion.
Post

Hi everyone,
Long-time lurker here, lots of fantastic resources around these parts.

I am an audio DSP engineer (mostly working in Max/MSP, and sadly Matlab, where I have to use it for research), and for a long time I've wanted to get into the plugin dev game. I LOVE working at the bit level (I've done lots of embedded projects too); getting as low-level as possible to manipulate and control samples is what I live/eat/sleep/breathe for, and gen in Max/MSP was a very close experience to what I've been wanting to do as an audio DSP guy. Since my native language is C/C++, the plugin world has been beckoning me for some time; I just never seem to be able to figure out how the CoreAudio API and AU sample code are set up. Apple's severe lack of documentation on the topic doesn't help much either.

I have a few specific questions, but I've also got a few broad questions. This all stems from the headache of figuring out how Core Audio and AUv3 packages/projects are set up in Xcode.

Firstly, a bit about me, so you know my level of understanding of these systems. I am an electrical engineer with extensive doctoral-level training in DSP and communications theory. Convolution and transform operators - as well as direct-form implementations and filter coefficient calculations - are second nature to me. I also have graduate-level training in synthesis, perception and psychoacoustics, composition, and room design/modeling from a filter-based perspective, plus graduate-level mathematics courses, so the math is not a problem for me. That's my biggest gripe with most how-tos and tutorials on building plugins: they present the audio DSP math as the hard part that demands far more study than the C++ frameworks and APIs, but I already have that side of things down, and then some. Even the sticky post on introduction to coding audio plugins focuses almost entirely on the theory and math, rather than the programming.

I want to learn how this system is put together, so that when I'm making modifications I know how the ENTIRE system is reacting. This is why I've avoided JUCE (besides still being a poor student who doesn't want to shell out for licensing fees, I avoid wrapper frameworks in general, having seen WAY too many lose development and become deprecated or outdated), AudioKit, and The Amazing Audio Engine. I also don't want to be restricted to default classes or built-in DSP functions; I want to write my own custom effects rather than use stock reverbs/delays/chorus/EQ/phasers/etc.

I've long wanted to use AUs (and possibly VSTs) to code plugins. Why not just do my processing in Max/MSP or Matlab? Well, that's my bread and butter; all of the hard work I put into developing algorithms is in HOW everything is laid out to produce sound. I don't want to give away all of my algorithm design and layout by sharing a Max patch. Sure, I know there are decompilers that could probably reverse-engineer my stuff with some effort, but if that's the hoop people are willing to jump through to get my ideas then I'd probably respect them rather than be irritated - and a decompiled binary doesn't expose my methodology in a clean, easy-to-understand layout either.

Recently, I picked up this wonderful book on CoreAudio: https://www.amazon.com/Learning-Core-Au ... 0321636848. It's done wonders for me, helping me understand the Core Audio API and how the low-level system architecture is put together. I've learned Core Audio is NOT intuitive, but it IS very elegant. The more I code, the better I understand the system. This led me to revisit Apple's example AUv3 code, which I've looked at a number of times and have NEVER been able to wrap my head around; there's just too much happening at once. On my last revisit of the example Xcode project I finally managed to at least figure out WHERE the processing/DSP happens and which functions get called at what point, but it still seems to be a tangled mess.

Which, finally (thanks for sticking with me) leads to my questions:

Firstly, it's my understanding that Core Audio functions return OSStatus, with data passed in and out through pass-by-reference structures and pointers. The data gets manipulated within the function (e.g. stuffing samples into a buffer, grabbing audio from the driver, or filtering data), so the memory that was passed by reference contains the new or manipulated data when the call returns. The return value indicates whether the call succeeded: noErr (0) is success, and anything else is an error code that frequently decodes to four ASCII characters. Is my understanding of this correct?
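For concreteness, here's roughly the error-checking idiom I have in mind - a minimal sketch modeled on the helper in the Learning Core Audio book (the function name and details here are my own paraphrase, not official Apple API):

#include <AudioToolbox/AudioToolbox.h>
#include <cctype>
#include <cstdio>
#include <cstdlib>

// Bail out with a readable message if a Core Audio call fails. Many
// OSStatus values are big-endian four-character codes, e.g. 'fmt?'.
static void CheckError(OSStatus error, const char *operation) {
    if (error == noErr) return;                        // 0 / noErr == success
    char str[20] = {};
    *(UInt32 *)(str + 1) = CFSwapInt32HostToBig((UInt32)error);
    if (isprint(str[1]) && isprint(str[2]) && isprint(str[3]) && isprint(str[4])) {
        str[0] = str[5] = '\'';                        // printable: show as 'code'
    } else {
        snprintf(str, sizeof(str), "%d", (int)error);  // otherwise a plain integer
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, str);
    exit(1);
}

// Usage: CheckError(AudioUnitInitialize(myUnit), "initializing the unit");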

Secondly, how exactly does memory allocation work? If I want to create a circular delay buffer and start stuffing audio into it (and then mixing it back into the raw incoming stream), is it as simple as setting up a float buf[bufferSize], creating a write index, grabbing individual samples from the input buffer in a for loop, storing them, and then reading them back at some later time via read/write indices? Or does Core Audio have a more elegant way of handling this?
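To make the question concrete, here's the sort of thing I mean - a bare-bones sketch of my own (the class and its names are mine; the only Core Audio-relevant point is that the buffer must be allocated at initialization time, never inside the render callback):

#include <cstddef>
#include <vector>

// A minimal circular delay line: allocate once up front, then
// read/write with wrapping indices on the render thread.
class DelayLine {
public:
    explicit DelayLine(size_t maxSamples) : buf(maxSamples, 0.0f) {}

    // Write one sample; read back the sample `delay` samples in the past.
    float process(float in, size_t delay) {
        buf[writeIdx] = in;
        size_t readIdx = (writeIdx + buf.size() - delay) % buf.size();
        float out = buf[readIdx];
        writeIdx = (writeIdx + 1) % buf.size();
        return out;
    }

private:
    std::vector<float> buf;
    size_t writeIdx = 0;
};

// In the render loop, adding the delayed signal back to the dry input
// might then look like:
//   out[n] = in[n] + 0.5f * delay.process(in[n], delaySamples);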

Thirdly, for AUv3s, it's my understanding that an AU is basically an app extension of type .appex, and this gets packaged into a bundle (similar to the v2 .component bundle), which then gets dropped into ~/Library/Audio/Plug-Ins/Components on macOS. Is this correct?

And lastly - and this is my biggest question - I have no idea how to set up AU projects in Xcode. The Apple sample documentation is old and outdated, and the v3 example gives ZERO road map as to what's happening, where, and when. So, if I were to open an empty Xcode project, what steps would I need (aside from linking AudioToolbox and AudioUnit into my project) in order to start programming AUs? Basically, how do I get from an empty project (or even a template) to the processing for loop?

Post

So I've answered most of these questions for myself by digging around, and since I've always hated digging up threads like this only to find them unanswered (or worse - ending with the dreaded "nvm, I figured it out"), I want to clarify these things for anyone else who finds themselves in my position in the future.

-Re OSStatus functions: Yes, CoreAudio functions return OSStatus, with data being acquired/manipulated through pass-by-reference and pointers. The CoreAudio documentation is VERY poor, and the convention is non-intuitive, but it is elegant and clean. I recommend the book I linked in my first post for anyone who wants to get into CoreAudio - it will not explain how to build AU plugins, but the reference material and bits of sample code WILL help you untangle how and where to process audio. It starts off slow but picks up quickly; chapters 7 and 8 hold the meat and potatoes of what you're looking for, but chapters 1-3 are really important for gaining familiarity with the CoreAudio API and (if, like me, you were new to Objective-C but know ANSI C/C++) Objective-C.

-Constructors: Still figuring out where to put the objects the plugin will construct and use, such as delay buffers or instances of classes I've written myself. For example, I have a biquad class that I coded, which calculates its coefficients from corner frequency and Q; I still don't know where my biquadFilter objects should be constructed. If anyone here could weigh in, that would be greatly appreciated.

-Bundles and Extensions: This one took me a LONG time to figure out. As of this posting, AUv3s are still not fully documented or supported. They CAN be built, but it seems Apple is still working out the kinks. So AUv2 it is for now. I found a few templates online; so far this one works best in combination with the AUv2 API (which, as of now, Apple still provides on their website): https://github.com/kbob/AudioUnitTemplates - it lets you follow Apple's v2 documentation on building a plugin with a generic view very cleanly.

Now is also a good time to mention that, as of right now, Apple recommends against coding AU plugins in Swift: the Swift ABI is not yet stable across compiler versions, so if a plugin was built with a different Swift compiler than the host, the plugin and host won't play nice with each other. So Objective-C it is, for now.

Eventually, once v3 AUs are fully supported, it seems the method will be as simple as overriding the input busses, output busses, and the render block the host calls into - which will be much cleaner... once it is implemented. But for the time being, v2 AUs are just significantly easier to code, and the documentation and tutorials are far more plentiful. In fact, I can't find ANY tutorials on how v3 AUs are coded, just sample code with little-to-no documentation.
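From the sample code, the v3 skeleton appears to boil down to something like this (the subclass name here is hypothetical, but inputBusses, outputBusses, and internalRenderBlock are the actual AUAudioUnit overrides in AudioToolbox; a real unit needs more setup, e.g. allocating render resources):

#import <AudioToolbox/AudioToolbox.h>

@interface MyEffectAudioUnit : AUAudioUnit
@end

@implementation MyEffectAudioUnit {
    AUAudioUnitBusArray *_inputBusArray;   // built in the initializer
    AUAudioUnitBusArray *_outputBusArray;
}

- (AUAudioUnitBusArray *)inputBusses  { return _inputBusArray; }
- (AUAudioUnitBusArray *)outputBusses { return _outputBusArray; }

- (AUInternalRenderBlock)internalRenderBlock {
    // Capture only plain C/C++ state here; no Obj-C calls on the render thread.
    return ^AUAudioUnitStatus(AudioUnitRenderActionFlags *actionFlags,
                              const AudioTimeStamp *timestamp,
                              AUAudioFrameCount frameCount,
                              NSInteger outputBusNumber,
                              AudioBufferList *outputData,
                              const AURenderEvent *realtimeEventListHead,
                              AURenderPullInputBlock pullInputBlock) {
        // Pull input through pullInputBlock, run the DSP into outputData here.
        return noErr;
    };
}
@end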

-How to set up AU projects: see the link above; it's legitimately the best way to get things working. Learn from my headache. On macOS Sierra 10.12.3 with Xcode 8.2.1 you need to put the template and the v2 CoreAudio API into ~/Library/Developer/Templates, but it works... for now.

Post

Yet another update, as I keep answering my own questions and want to document things for others (okay, and partially for myself):

Constant arrays and variables used in the processing loop get declared in the kernel class's header file as private member variables. Directly from Apple:

"To begin the DSP implementation for the tremolo unit, add some constants as private member variables to the [PluginNameUnitKernel] class declaration. Defining these as private member variables ensures that they are global to the [PluginNameUnitKernel] object and invisible elsewhere."

This also answers my earlier constructor question: filter objects and buffers get constructed in the kernel's constructor and live on as members. This should be MORE than enough for me to finally start writing my own plugins. I gotta say, this took quite some time, but the more I disentangle how the API is set up, the more I realize how to put it all together.

Post

Sorry to see you all alone on your own thread, so I'll add my 2 cents, which may or may not apply.

I too found the documentation for all the plugin types very daunting when I started developing my plugin about a year ago. After some research, I decided to use the JUCE framework, which basically abstracts away all of the details and differences between the plugin types and unifies them. That lets you write your plugin generically, and its framework has many, many useful classes as well.

I don't mean to shill for JUCE, but it's let me write my DSP code once and basically click checkboxes for AU, AUv3, VST, VST3 and AAX (although Avid definitely likes to complicate things as much as they can) on Mac and Windows.

Hope that's helpful.

Post

If it offers any consolation, we haven't figured out how AUv3 works either. Their programming guide (whatever) is like "there are too many details to let you know".

One of the guys has figured out that there's some kind of built-in wrapping technology from V2 -> V3, but of course there's no meaningful example code.

We're going to bite the sour apple and put a developer on this for as long as it takes, but my bet is that V2 will survive quite a while longer.

Post

I think most of the AUv3 developers have started here:
https://developer.apple.com/library/con ... TP40016185

A good overview is provided here too:
https://developer.apple.com/audio/

AUv3 is not easy, but once you get the point... :D You have to master the Extension/Framework/Host architecture split... I can't tell you how many times I've read the sample, but in the end it pays off! The sample code is now quite stable; Apple did a great job. I advise you to keep the sample code in Git, so you can easily monitor the differences between releases. It helps a lot ;)

If you need a template, forget it... Take the sample code, replace the plugin name with your own, and prepare to drink a lot of coffee, because the build will probably fail. I learned a lot during this phase :)

The AUv3 concept is wonderful, really! But I think it's not yet mature. Developers (like me) submit bugs, and most of them are taken into account (but you have to be patient). Furthermore, UIKit is iOS-only, so if you plan to write the whole GUI part once... you will be disappointed, because it's not yet supported on OSX...

I hope this gives you some help starting your project.

Post

Last but not least, prepare yourself to surf between C++, Swift, and Objective-C :D

Post

joshb wrote:Sorry to see you all alone on your own thread, so I'll add my 2 cents, which may or may not apply.

I too found the documentation for all the plugin types very daunting when I started developing my plugin about a year ago. After some research, I decided to use the JUCE framework, which basically abstracts away all of the details and differences between the plugin types and unifies them. That lets you write your plugin generically, and its framework has many, many useful classes as well.

I don't mean to shill for JUCE, but it's let me write my DSP code once and basically click checkboxes for AU, AUv3, VST, VST3 and AAX (although Avid definitely likes to complicate things as much as they can) on Mac and Windows.

Hope that's helpful.
Thanks for chiming in. Of course, this is probably the simplest solution to address all of the problems I've mentioned above.

I HAVE dug through JUCE, and even worked for a few companies that use it. I do have to admit, it is very nice and clean. But I have two problems with it, the first of which is easier to explain: for the time being, I want to avoid having to pay additional licensing fees, especially when they seem to be recurring and even more so given that I am on a student budget.

The second problem is a bit more complex: JUCE abstracts away too many of the lower-level functions for my tastes. I simply want to work at a lower level than what JUCE offers. Yes, JUCE IS very slick, and it avoids every single one of the headaches I've outlined above. But whenever I use JUCE, I find myself wanting to know the HOW of everything under the hood, so I can take full advantage of the low-level, efficient DSP code I write myself. I could probably argue that my best bet would be to go fully embedded and get a dedicated DSP kit; however, whenever I HAVE gone that route I've found it's more trouble than it's worth, and even then the notion of deploying code that I/others can use in a fully-built product is a headache and a half in its own right.

But yes, if I hadn't had these two hangups, I'd have probably gone the JUCE route.

Post

FredAnton wrote:I think most of the AUv3 developers have started here:
https://developer.apple.com/library/con ... TP40016185

A good overview is provided here too:
https://developer.apple.com/audio/

AUv3 is not easy, but once you get the point... :D You have to master the Extension/Framework/Host architecture split... I can't tell you how many times I've read the sample, but in the end it pays off! The sample code is now quite stable; Apple did a great job. I advise you to keep the sample code in Git, so you can easily monitor the differences between releases. It helps a lot ;)

If you need a template, forget it... Take the sample code, replace the plugin name with your own, and prepare to drink a lot of coffee, because the build will probably fail. I learned a lot during this phase :)

The AUv3 concept is wonderful, really! But I think it's not yet mature. Developers (like me) submit bugs, and most of them are taken into account (but you have to be patient). Furthermore, UIKit is iOS-only, so if you plan to write the whole GUI part once... you will be disappointed, because it's not yet supported on OSX...

I hope this gives you some help starting your project.
Thanks for your reply, it gives me hope. To first address your follow-up post: I've found myself becoming very comfortable in Objective-C, and I already know C++ and a decent chunk of the Swift basics pretty well. Some things like inheritance still throw me for a loop, but as I continue to dig into AU it's starting to make more sense; I think it's a matter of trial and error on my part and just continuing to chip away at it.

I do have to say, returning to the example code now that I've familiarized myself with the nitty-gritty of CoreAudio and its member functions is MUCH easier. I'm FINALLY at a point where I can pick apart all of the code and easily make my way to the process loop. Everything built on top of it still makes my head spin, as much of it seems to be about getting the UI to play nice and implementing both graphical input and parameter control - neither of which I want to involve myself with (I'd be happy with the generic AU view for the time being, to be entirely honest). But how it's put together is slowly making sense. I still don't understand how the whole package management fits together (is it a .appex wrapped in a bundle? Is the bundle really just a simple host that is controlled by the application, i.e. the DAW? And so on), but I'm hoping that will come in time.

The most frustrating thing is that at the end of the day I really just want to code the process() loop and have Apple (or another programmer) handle the UI for me, and that seems to be not so easy with AUv3. I'm going to try splitting my time between developing in AUv2 and learning the new AUv3 framework - if you're saying it can be done, I'll throw my time and effort at learning it.

If you wouldn't mind, I may PM you or post some highly specific questions here in the near future, so that hopefully I (and other raw-DSP folk) can get better access to something Apple claims is clean and easy, yet seems to be anything but in its implementation.

Post

Urs wrote:If it offers any consolation, we haven't figured out how AUv3 works either. Their programming guide (whatever) is like "there are too many details to let you know".

One of the guys has figured out that there's some kind of built-in wrapping technology from V2 -> V3, but of course there's no meaningful example code.

We're going to bite the sour apple and put a developer on this for as long as it takes, but my bet is that V2 will survive quite a while longer.
Agreed! It is VERY frustrating, to say the least. The WWDC video claims v3 should make everything easier and cleaner, but I've found the framework does the opposite. Even the header bridge that wraps v2 into v3 is claimed to be a simple header you add plus a single line of code you write, but Apple gives zero instruction on how to do this.

As an aside, thanks for making awesome products. Loads of my friends use your stuff and it always sounds nothing short of absolutely brilliant.

Post

wforwumbo wrote:I HAVE dug through JUCE, and even worked for a few companies that use it. I do have to admit, it is very nice and clean. But I have two problems with it, the first of which is easier to explain: for the time being, I want to avoid having to pay additional licensing fees, especially when they seem to be recurring and even more so given that I am on a student budget.
Then it's great, one less problem :p

I agree with the second one. JUCE feels a little like Qt, but it is better than WDL-OL in terms of support, so...

Post

wforwumbo wrote:Thanks for your reply, it gives me hope


:tu:

Never give up, be patient. You want extra motivation? Since my last post here I've released my application as a beta. So you see... I'm sure you will succeed too. And yes, I've used the generic view too, in Logic Pro (OSX); it helped me run a lot of tests. And on the other hand, I've made the UI for iOS with UIKit.
