I'd like to finish my synthesizer but it's hard

DSP, Plugin and Host development discussion.

Post

Java does have garbage collection that can cause a pause in operation. However, you can force it to run at non-critical moments, so it's not really an issue. If you run it often enough, e.g. during cut scenes or window opening/closing, it won't be noticed at all. Still not a good choice for an audio plugin. :wink:

I've looked at Go as a possible language of choice, but I think it was the garbage collection that put me off. The only other language out there that I liked was Rust, but it was lacking cross-platform graphics bindings at the time. Same with Nim, which has the added plus of transpiling to C, which can then be run through clang for awesome optimization. It had the same graphics issues. I think they both have SDL bindings now, but I only need about four drawing routines to draw my GUI; SDL was just overkill for my purposes.

Post

Oh, I just remembered how I fixed the finalizer issue. It was actually easy: one sends the texture handle to the thread that created it via a Go channel.
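Roughly like this, in case anyone is curious (a minimal sketch; the names are made up, and it assumes the go-gl bindings):

import gl "github.com/go-gl/gl/v3.3-core/gl"

// Texture handles queued for deletion; buffered so senders rarely block.
var pendingDeletes = make(chan uint32, 64)

// Safe to call from any goroutine.
func releaseTexture(handle uint32) {
    pendingDeletes <- handle
}

// Runs on the thread that owns the GL context, e.g. once per frame.
func drainPendingDeletes() {
    for {
        select {
        case h := <-pendingDeletes:
            gl.DeleteTextures(1, &h) // GL calls are only valid on this thread
        default:
            return
        }
    }
}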

Since you're mentioning languages: I think Jai could become good. It does not have any special memory management, and I believe the idea is that it has very good language ergonomics.

Post

beginzone wrote: Sat Jul 11, 2020 3:05 am
Since you're mentioning languages: I think Jai could become good. It does not have any special memory management, and I believe the idea is that it has very good language ergonomics.
Has Jai been released yet? It was still a work in progress when I read about it.

Forgot: the language I've been following is Zig. It's a C replacement with namespacing.

Post

Unfortunately no. The beta is currently still a closed beta, which - I guess - is a good thing, but I'm curious. The compiler seems to be really fast, and I agree on so many points Jonathan Blow makes in his stream. Little things. E.g. in Go I can do "nested functions" via the first-classness of functions, so I *can* get functions in an inner scope, but they have to be defined before they are used.
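For example (a toy illustration):

func outer() int {
    // A "nested function" in Go is just a function literal bound to a
    // variable, and it must be assigned before it can be called.
    double := func(x int) int { return x * 2 }
    return double(21) // using 'double' above its assignment would not compile
}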

With Jai you get real nested functions, and the "you are not forced to order your functions by hand" approach that some languages have extends from the global scope down to the innermost scope. It's really cool.

Or, in Go you get "please, no unused variables" errors from the compiler all the time. Which is kinda clean, and it especially (well, or only) saves you from one programming mistake: in an inner scope you shadow a variable of an outer scope when you meant to conditionally set the outer one. And it all depends on a little ":=" versus "=".

So, that error happens very often, and Go saves you from it, but only with a cheap measure. Jai currently does nothing about it, but when Jonathan Blow made the mistake a second time, he knew he had to do something about it, and decided to target *exactly* the error-prone case: if "unused variable *plus* shadowing" happens, the Jai compiler will complain.
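
The classic shape of the mistake, sketched in Go:

package main

import "fmt"

func main() {
    values, target := []int{1, 2, 3}, 3
    found := false
    for _, v := range values {
        if v == target {
            // BUG: ":=" declares a NEW 'found' that shadows the outer one;
            // the intended line is a plain assignment: found = true.
            // Go's "declared and not used" error is what catches this.
            found := true
            _ = found // silencing the error like this hides the bug again
        }
    }
    fmt.Println(found) // prints "false" with the shadowing bug left in
}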

I'm not sure why, but for some reason I would change from Go to Jai without hesitation. And it's getting SIMD intrinsics. It's a C++ replacement in my eyes. Well, it's made to be exactly that.

Post

syntonica wrote: Sat Jul 11, 2020 2:30 am Same with Nim, which has the added plus of transpiling to C, which can then be run through clang for awesome optimization.
Using C as a compilation target is fairly popular as it lets you get up and running without having to worry about the low-level compilation details. Even C++ started this way. It doesn't automatically mean much in terms of performance (one way or another) though, as this largely depends on how directly your language constructs can be mapped to those of C. The main problem with this approach is that you need to be careful not to let the feature-set of C guide your language design, as there are plenty of things you can do efficiently on the assembler level, but not necessarily on the C (or even LLVM bitcode, which has somewhat of a C flavor) level.

Post

beginzone wrote: Sat Jul 11, 2020 8:55 am Unfortunately no. The beta is currently still a closed beta, which - I guess - is a good thing, but I'm curious. The compiler seems to be really fast, and I agree on so many points Jonathan Blow makes in his stream.
The two main defining features of Jai seem to be that it's designed by Jonathan Blow and that twitch chat seems to think this must automatically make the language really great. The language does advertise some potentially interesting features, but interesting features alone don't necessarily make a great language once your projects start to get bigger. So we'll see.

As far as compilation speed goes, there is somewhat of an inverse relationship between fast compilation and fast resulting code. There are some aspects of C++ (eg. potentially compiling megabytes of headers for every file) that can slow compilation down, but in many cases much of the actual compilation time is spent on optimisations, and you can usually make your compiler run a whole lot faster simply by optimizing less; less mature compilers tend to be faster simply because they don't implement all the slow optimizations that let production compilers produce better code.

[edit: One can argue whether all of these optimizations are always desirable, but that's somewhat of a different thing. The point is that advanced optimizations often require expensive analysis that more or less necessarily makes the compilation slow.]
With Jai you get real nested functions, the "you are not forced to sort your functions by hand" approach like some languages have it goes from the global scope to the innermost scope, in Jai. It's really cool.
Nested functions are found in most languages (with functions) other than the C family, but they are only truly useful when you also get lexical closures. The problem with lexical closures, though, is that they are not really functions, but rather objects. Objects need to have their lifetime managed. This is something you can largely hide from the user in a managed language, but it is also why C++ explicitly separates functions and lambdas from each other and lets you specify how a lambda should perform its closure capture; lambdas are not functions, but rather objects with a single method.
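
To make that concrete, a small Go example: the closure below captures n, which therefore escapes to the heap and lives as long as the returned function value does, i.e. the "function" is really an object whose lifetime the GC has to manage.

// makeCounter returns a closure; the captured 'n' escapes to the heap
// and is kept alive by the GC for as long as the closure is reachable.
func makeCounter() func() int {
    n := 0
    return func() int {
        n++
        return n
    }
}

In C++ you would instead write a lambda and state the capture explicitly ([n] by value, [&n] by reference), precisely because nobody is managing that lifetime for you.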
Or, in Go you get "please, no unused variables" errors from the compiler all the time.
clang -Wall -Werror

This implies -Wunused-variable. I would actually suggest that you always compile with -Wall -Werror and then explicitly turn off any warnings that you specifically want to ignore, because clang is actually pretty decent at finding all kinds of problems where the code is theoretically valid, but probably isn't going to work the way you want.
Which is kinda clean, and it especially (well, or only) saves you from one programming mistake: in an inner scope you shadow a variable of an outer scope when you meant to conditionally set the outer one. And it all depends on a little ":=" versus "=".
clang -Wall -Werror -Wshadow-all

Post

beginzone wrote: Sat Jul 11, 2020 3:05 am Oh, I just remembered how I fixed the finalizer issue. It was actually easy: one sends the texture handle to the thread that created it via a Go channel.
You should really not be destroying textures from a finalizer in the first place.

Garbage collectors are designed to manage heap memory. They are not designed to manage other resources. You should never rely on a finalizer to release any type of resource that isn't heap memory. OpenGL textures are not heap memory, hence you should not rely on the garbage collector to manage them for you. The most you should do in a finalizer is to sanity check that the non-heap-memory resource was properly disposed of before the finalizer runs and then crash the whole program with an error message if this isn't the case.
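
In Go terms, that sanity-check-only pattern might look something like this (a sketch; Texture, NewTexture and Release are hypothetical names):

import (
    "log"
    "runtime"
)

type Texture struct {
    handle   uint32
    released bool
}

func NewTexture(handle uint32) *Texture {
    t := &Texture{handle: handle}
    runtime.SetFinalizer(t, func(t *Texture) {
        // The finalizer only checks; it never frees the GL object itself.
        if !t.released {
            log.Fatalf("texture %d leaked: Release was never called", t.handle)
        }
    })
    return t
}

func (t *Texture) Release() {
    // ...delete the GL texture on the thread that owns the context...
    t.released = true
    runtime.SetFinalizer(t, nil) // nothing left to check
}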

edit: In fact more often than not, you should probably not be using finalizers at all. There are some legitimate use-cases (eg. persistent object-IDs for serialization, maybe resource leak detection as well), but they are completely unsuitable for the kind of RAII-style resource management that people usually try to abuse them for.

Post

mystran wrote: Sat Jul 11, 2020 11:39 am You should really not be destroying textures from a finalizer in the first place.
I'm used to the following thinking: either I like to use a language where I create everything and destroy everything myself, assuring that it all stays "balanced", and maybe leak-checking with some tool.

Or... I like a language where I create all, destroy nothing, and don't ever think about "balanced" code.

But I guess you're right; that is not reality. It's just that I sometimes treat my PC as a black box, don't care what memory sits where in it, and want it to behave visually "like everyone would expect". I mean, I try to act as if it were easy, because the language should let me be... a fool. Or so.

May I ask, you use C++, right?

Post

beginzone wrote: Sat Jul 11, 2020 12:14 pm I'm used to the following thinking: either I like to use a language where I create everything and destroy everything myself, assuring that it all stays "balanced", and maybe leak-checking with some tool.
The problem is that GCs are only designed to manage memory, and they don't really provide any guarantees on when garbage is collected; just that they will try to make sure you won't run out of memory as long as your reachable objects fit.

Consider a typical generational collector that has a small "nursery" heap where newly created objects are placed. Whenever this heap fills, we figure out which objects are reachable and which are not. Then we copy the reachable objects into the next generation and reset the nursery. Only when the next generation also gets sufficiently full do we actually collect garbage there.

The problem now is that if you put some other finite resources (eg. file handles, OpenGL objects, whatever) into the GC heap and they end up surviving long enough to end up in an older generation (or worse yet, some GCs might eventually promote them to "permanent" objects that are normally not collected at all), it might take a very long time before the GC is even going to look at them. Since such objects are usually pretty small from the GC point of view, but might be fairly limited from the system point of view, it is entirely possible that you completely run out of such resources before the GC even bothers to look at them, and (importantly) from the heap-management point of view this is often the right thing for the GC to do.
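
A toy model of that promotion behavior (heavily simplified; all names made up):

package main

import "fmt"

type Object struct {
    refs     []*Object
    finalize func()
}

type Heap struct {
    nursery, oldGen []*Object
}

// minorGC promotes reachable nursery objects to the old generation and
// finalizes the rest. Crucially, it never inspects oldGen: a dead
// texture handle that was promoted earlier just sits there until some
// (rare) major collection.
func (h *Heap) minorGC(roots []*Object) {
    live := map[*Object]bool{}
    var mark func(*Object)
    mark = func(o *Object) {
        if o == nil || live[o] {
            return
        }
        live[o] = true
        for _, r := range o.refs {
            mark(r)
        }
    }
    for _, r := range roots {
        mark(r)
    }
    for _, o := range h.nursery {
        if live[o] {
            h.oldGen = append(h.oldGen, o) // survivor: promoted
        } else if o.finalize != nil {
            o.finalize() // only dead *nursery* objects get finalized here
        }
    }
    h.nursery = nil
}

func main() {
    h := &Heap{}
    tex := &Object{finalize: func() { fmt.Println("texture freed") }}
    h.nursery = []*Object{tex}
    h.minorGC([]*Object{tex}) // still referenced: promoted, not finalized
    h.minorGC(nil)            // unreachable now, but in oldGen: nothing happens
    fmt.Println(len(h.oldGen), "object(s) lingering in the old generation")
}
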
May I ask, you use C++, right?
Most of the time I write C++ for native applications (including plugins), sometimes C if I care about ABI-level interoperability. For various tools I typically use either Python or Lua these days, but I've written a non-trivial amount of code in a fairly large number of languages over the years, including some custom ones (eg. I've actually written a few garbage collectors, and back when I was playing around with osdev I wrote a half-decent malloc too).

Post

mystran wrote: Sat Jul 11, 2020 1:14 pm The problem now is that if you put some other finite resources (eg. file handles, OpenGL objects, whatever) into the GC heap and they end up surviving long enough to end up in an older generation (or worse yet, some GCs might eventually promote them to "permanent" objects that are normally not collected at all), it might take a very long time before the GC is even going to look at them.
I never knew this.

When I tried to write a game with Go and OpenGL, I knew that I wanted to call runtime.GC() in each level-finished scene, but I never got to that part at all. With Lua it was similar. But I'm not sure whether runtime.GC() is guaranteed to call all the finalizers that I would expect to be called.

In my synth app, the sample player streams huge plugin-grabbed orchestra multi-samples; the app doubles its RAM usage on demand as it reads from SSD into RAM, so I can potentially fill up all 64 GB of my RAM. So I added a "clear caches" button that calls runtime.GC(), but I'm not as sure about what I'm doing to my PC as you are, I believe. I hope it will just be a big rompler. :-)
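
For what it's worth, my understanding (no guarantee) is that runtime.GC() runs a collection but finalizers execute asynchronously on a separate goroutine, so a single call doesn't promise they have all run by the time it returns. Since the button mostly wants to hand memory back to the OS, runtime/debug.FreeOSMemory may be the more direct tool; a sketch, with a made-up cache function:

import (
    "runtime"
    "runtime/debug"
)

func clearCaches() {
    dropSampleCache()    // hypothetical: drop our own references first
    runtime.GC()         // collect the now-unreachable cache memory
    debug.FreeOSMemory() // forces a GC and returns freed pages to the OS
}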

But maybe Go is actually... well, it's like this: certain languages promise that you won't see a certain type of error anymore. And that triggers a feeling where you're willing to work with the language, or to work on some project at all. It was like this when I went from C to Lua, and this time I went from C++ to Go... and... well, I'll see.

Post

mystran wrote: Sat Jul 11, 2020 10:57 am Using C as a compilation target is fairly popular as it lets you get up and running without having to worry about the low-level compilation details.
Being able to use an extant compiler goes quite a long way. It gets you plenty of optimization that may not be available in less mature compilers.

I had a brief flirtation with FreePascal to see if there were any benefits to switching over. Unfortunately, even using the latest beta compiler, optimized Pascal was not even as fast as the unoptimized C++ version of my plugin; the finished product was slower by 50% on average. If it were only 10% slower, I'd be happy as a clam! That would be an okay tradeoff for having a single codebase on a single machine and the ability to cross-compile for any target. With C++, I have way more platform-specific code sprinkled about than I like dealing with. The Pascal version properly relegated all of that to single Mac/Windows files that, once complete, I'd never have to touch again.

Post

I've gone way beyond what I had planned for my first synthesizer, and I largely "blame" this on the fact that I had (and have) to learn a lot of things, which in turn gives you at least one extra reason to go for that extra feature. But right now I'm sort of saturated, and I think I'll have to retool my sound engine a bit (make it more modular, since right now it's more or less written as an instrument, a bit of a fixed design). So I've put all my GitHub feature issues at lower priority and try to spend more time fixing the things that really need fixing, so that it reaches a state of completion. And then I can hopefully move on. Lessons learned, synthesizer made: next!

I am also eager to do some proper optimization (though my C++ code is far from slow as it is, largely because outside of audio I'm a rather experienced developer), but I kept my promise not to go anywhere near the kind of optimizations that make it increasingly hard to retrofit or change the design and/or algorithms. That has helped me a lot too, though it's a bit off topic :)

To give you an idea of what "derailing into the land of feature creep and adding random features" means for me (including an archaic UI that is not meant to be released commercially, for obvious reasons):

[two screenshots of the synthesizer's work-in-progress UI]

Post

superplek wrote: Tue Aug 11, 2020 12:42 pm I am also eager to do some proper optimization (though my C++ code is far from slow as it is, largely because outside of audio I'm a rather experienced developer)
As long as you don't play "Beat the Compiler", both GCC and clang will do wonders with optimization. Just write clean code. These compilers know more about optimisation than we mere mortals ever will. Unless you have a better algorithm, just leave it and move on.

Frankly, the latest versions of these compilers impress me to no end. It's nice to just be able to code and reap the speed benefits of -Ofast / -O3. (They both auto-vectorize so you don't have to. Unless you enjoy the intrinsic pain. :hihi: )
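
For instance (file name and flags purely illustrative; measure before and after):

clang++ -O3 -march=native -c synth.cpp
clang++ -Ofast -march=native -c synth.cpp

The second form adds -ffast-math, so check that your code is happy with relaxed floating-point semantics (denormals, NaN handling) first.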

Post

Oh, I'm down with what you're saying; I've been programming for over 25 years, starting with low-level environments. I do, however, review what my compiler spits out (usually guided by profiler results) to see if I can avoid it generating certain instructions or constructs that aren't optimal for one reason or another. Just to name something.

All I tried to say is that I generally get good results as it is, and foremost, I don't really optimize while I'm in constant R&D unless it's harmless low-hanging fruit. But at some point I'll touch the SIMD intrinsics; it so happens I'm all too familiar with that type of programming on a plethora of platforms ;)
