Apple will switch to ARM processors: what does it mean for plugin developers?

DSP, Plugin and Host development discussion.

Post

After reading this article I'm wondering what it means for plugin developers. I'm not knowledgeable enough in programming to know if it will make a big difference, but I know that ARM-based Surface laptops can't run the same apps as the Intel ones. So my guess is that it means a big change, but I might be wrong?

Post

The article is from 8 months ago, and people have been saying this is imminent for almost 10 years now.

Post

This has been discussed "ad nauseam". Assuming it is true (which I doubt, but anything is possible coming from them), what does it mean for developers?

More work, another platform to support, and maybe some new sales. What does it mean for users? They will continue to be f*** by Apple :roll:
Fernando (FMR)

Post

Cross-platform code should be easy to recompile. But plugins contain lots of low-level optimizations and bit fiddling, and that means existing code may need a lot of adjustments or rewrites: new SIMD instructions, changes in atomics (Intel reads and writes are atomic for the most part), maybe changes in byte ordering, etc.
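To illustrate the kind of per-architecture work that implies, here is a minimal sketch of my own (the function name and structure are made up, not taken from any particular plugin), assuming the standard <xmmintrin.h> and <arm_neon.h> intrinsics:

```cpp
// Minimal sketch: adding two float buffers with SSE on x86 and NEON on ARM.
// Real plugin code usually hides this behind its own SIMD abstraction layer.
#if defined(__SSE2__)
  #include <xmmintrin.h>
#elif defined(__ARM_NEON)
  #include <arm_neon.h>
#endif

void addBuffers(float* dst, const float* a, const float* b, int n)
{
    int i = 0;
#if defined(__SSE2__)
    for (; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(dst + i, _mm_add_ps(va, vb));
    }
#elif defined(__ARM_NEON)
    for (; i + 4 <= n; i += 4) {
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(dst + i, vaddq_f32(va, vb));
    }
#endif
    for (; i < n; ++i)   // scalar tail, and fallback when no SIMD is available
        dst[i] = a[i] + b[i];
}
```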

Post

Vokbuz wrote: Sat Mar 21, 2020 8:49 am Cross-platform code should be easy to recompile. But plugins contain lots of low-level optimizations and bit fiddling, and that means existing code may need a lot of adjustments or rewrites: new SIMD instructions, changes in atomics (Intel reads and writes are atomic for the most part), maybe changes in byte ordering, etc.
With regards to atomics: while loads and stores are atomic at the ISA level on Intel, if you're writing in a high-level language you still need more or less the same fences anyway to prevent compiler optimisations from causing issues. This is not a theoretical issue; compilers really DO screw up your expectations. So if you're not already using the proper fences and atomics, your code is likely to have races already, and if it doesn't, it's probably just a matter of upgrading your compiler.
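As an illustration of the kind of compiler behaviour meant here (my own sketch, not code from the post): with a plain non-atomic flag the optimiser is allowed to read the variable once and spin forever, while an atomic load cannot be hoisted out of the loop.

```cpp
#include <atomic>

// Fragile: the compiler may read 'ready_plain' once and then spin forever,
// because nothing tells it the value can change on another thread.
bool ready_plain = false;
void waitPlain() { while (!ready_plain) { /* spin */ } }

// Correct: the atomic load must be performed each iteration, and the acquire
// ordering also makes writes done before the matching store(release) visible
// to this thread.
std::atomic<bool> ready_atomic{false};
void waitAtomic() { while (!ready_atomic.load(std::memory_order_acquire)) { /* spin */ } }
```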

Post

mystran wrote: Sat Mar 21, 2020 10:16 am if you're not already using the proper fences and atomics, your code is likely to have races already, and if it doesn't, it's probably just a matter of upgrading your compiler.
I don't have code without proper atomics and fences (or maybe I do, but unintentionally). But I'm pretty sure there are a bunch of plugins out there with such code.

Post

Vokbuz wrote: Sat Mar 21, 2020 10:46 am
mystran wrote: Sat Mar 21, 2020 10:16 am if you're not already using the proper fences and atomics, your code is likely to have races already, and if it doesn't, it's probably just a matter of upgrading your compiler.
I don't have code without proper atomics and fences (or maybe I do, but unintentionally). But I'm pretty sure there are a bunch of plugins out there with such code.
Synchronization primitives will generally issue a fence. So if you lock a mutex for example, then you already have a fence there. On the other hand, if you're simply writing memory and then reading it in another thread without any sort of synchronization in between, then you're already relying on the goodwill of the compiler and it might start failing randomly with another compiler or a new version of the same compiler or a change in optimisation settings or whatever.

ps. I've had this happen in practice: old code that worked fine with older compilers suddenly starts running into race conditions or weird glitches with a more recent compiler (or another compiler in my case), with a bunch of fences solving the issues. If you don't properly synchronize, then you have a time-bomb even on Intel.
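To make the mutex point concrete, here is a small sketch of my own (all names are made up) of a parameter block handed from a UI thread to the audio thread; the lock and unlock are what give you the fences, and the audio side uses try_lock so it never blocks inside the callback:

```cpp
#include <mutex>

// Hypothetical parameter block shared between the UI and audio threads.
struct Params { float cutoff = 1000.0f; float resonance = 0.5f; };

std::mutex paramLock;
Params     shared;   // written by the UI thread, read by the audio thread

// UI thread: the unlock at the end of the scope releases (fences) the writes.
void setParams(const Params& p)
{
    std::lock_guard<std::mutex> guard(paramLock);
    shared = p;
}

// Audio thread: try_lock so we never block in the audio callback; a
// successful lock acquires (fences) the UI thread's writes.
bool fetchParams(Params& out)
{
    if (!paramLock.try_lock())
        return false;          // keep using the previous values this block
    out = shared;
    paramLock.unlock();
    return true;
}
```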

Post

mystran wrote: Sat Mar 21, 2020 11:01 am Synchronization primitives will generally issue a fence. So if you lock a mutex for example, then you already have a fence there. On the other hand, if you're simply writing memory and then reading it in another thread without any sort of synchronization in between, then you're already relying on the goodwill of the compiler and it might start failing randomly with another compiler or a new version of the same compiler or a change in optimisation settings or whatever.
While I totally agree with this, and would neither do so myself nor urge anyone to, let's not forget about the "volatile" qualifier, which turns off optimizations on variable access. I have seen code many times that uses a volatile non-atomic variable accessed by different threads. Such code has worked so far and will probably keep working regardless of the compiler (unless volatile is deprecated), but it will become an issue (a very volatile issue) on ARM unless the variable is a simple flag.
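For illustration, the kind of code meant here looks roughly like this (my own sketch with made-up names, shown only to describe the pattern, not to recommend it):

```cpp
// The fragile pattern under discussion (do NOT copy this): a volatile,
// non-atomic flag used to publish data to another thread. Formally this
// is a data race, even though it often appears to work on x86.
volatile bool presetReady = false;
float presetData[128];

// Writer thread:
void publishPreset()
{
    // ... fill presetData ...
    presetReady = true;   // nothing orders the data writes before this store
}

// Reader thread:
void consumePreset()
{
    if (presetReady) {
        // may still observe stale presetData: neither the compiler nor the
        // CPU is required to order the non-volatile writes before the flag
    }
}
```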

Post

Vokbuz wrote: Sat Mar 21, 2020 12:54 pm
mystran wrote: Sat Mar 21, 2020 11:01 am Synchronization primitives will generally issue a fence. So if you lock a mutex for example, then you already have a fence there. On the other hand, if you're simply writing memory and then reading it in another thread without any sort of synchronization in between, then you're already relying on the goodwill of the compiler and it might start failing randomly with another compiler or a new version of the same compiler or a change in optimisation settings or whatever.
While I totally agree with this, and would neither do so myself nor urge anyone to, let's not forget about the "volatile" qualifier, which turns off optimizations on variable access. I have seen code many times that uses a volatile non-atomic variable accessed by different threads. Such code has worked so far and will probably keep working regardless of the compiler (unless volatile is deprecated), but it will become an issue (a very volatile issue) on ARM unless the variable is a simple flag.
You should not use "volatile" qualifier for synchronization. The only thing "volatile" does is to make sure that (1) volatile reads and writes are done exactly once and (2) no volatile memory access is reordered with respect to other volatile memory access. The problem here is that it says absolutely nothing about reordering other things with respect to volatile access.

With modern compilers, using "volatile" is good for one thing and one thing only: memory mapped IO. That's what it was designed for, because with MMIO the reads and writes are not really reads and writes into cacheable memory (which compilers treat as "implementation detail"), but rather hardware access that can change the state of the device (so the compiler must treat every read and write as an "action").
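A minimal sketch of that use case, with a made-up register address, just to show what volatile is actually for:

```cpp
#include <cstdint>

// Hypothetical memory-mapped status register at a made-up address: every
// read really must hit the hardware, which is exactly what volatile
// guarantees (and all it guarantees).
volatile std::uint32_t* const statusReg =
    reinterpret_cast<volatile std::uint32_t*>(0x40000000u);

bool deviceBusy() { return (*statusReg & 0x1u) != 0; }
```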

It used to be the case that compilers were conservative enough with "volatile" that they also didn't reorder other things around such access, but those days are gone: using "volatile" for synchronization is no longer safe, if you care about ordering of anything else with respect to the volatile access (which is usually the case). If you really wanted to build your inter-thread communication around "volatile" access, then you would have to declare all access to shared memory as volatile, and that's usually just pointless (ie. using proper fences is much faster).
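For completeness, here is a sketch of my own (names made up to match the volatile example earlier in the thread) of the usual replacement: a release store on an atomic flag after the data writes, and an acquire load before the data reads.

```cpp
#include <atomic>

std::atomic<bool> presetReady{false};
float presetData[128];

// Writer thread: the release store guarantees the data writes are visible
// to any thread that later sees presetReady == true via an acquire load.
void publishPreset()
{
    // ... fill presetData ...
    presetReady.store(true, std::memory_order_release);
}

// Reader thread:
void consumePreset()
{
    if (presetReady.load(std::memory_order_acquire)) {
        // presetData is now guaranteed to be fully written
    }
}
```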

Post

mystran wrote: Sat Mar 21, 2020 3:12 pm You should not use "volatile" qualifier for synchronization.
I know that. The problem is that many devs don't.

Post

Vokbuz wrote: Mon Mar 23, 2020 1:22 pm
mystran wrote: Sat Mar 21, 2020 3:12 pm You should not use "volatile" qualifier for synchronization.
I know that. The problem is that many devs don't.
That's good enough reason to spell the details out. :)

That said, my number one piece of advice for all multi-threading is to avoid mutable data sharing at all cost. It's much easier to just send a copy through a message queue, and even if you need to design a new type of queue to do it efficiently, it's probably worth it. Not only does this avoid most synchronization outside said facilities, it also tends to boost your performance, because you won't accidentally end up playing cache ping-pong when each thread has its own clearly specified chunk of data to work with.
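By way of illustration, a minimal single-producer/single-consumer queue sketch of my own (not code from any particular plugin), assuming exactly one writer thread and one reader thread, which is usually all a UI-to-audio message path needs:

```cpp
#include <atomic>
#include <cstddef>
#include <optional>

// Minimal single-producer / single-consumer ring buffer for passing copies
// of messages between two threads. Capacity must be a power of two.
template <typename T, size_t Capacity>
class SpscQueue {
    static_assert((Capacity & (Capacity - 1)) == 0, "Capacity must be a power of two");
public:
    bool push(const T& msg)                  // producer thread only
    {
        size_t w = writePos.load(std::memory_order_relaxed);
        size_t r = readPos.load(std::memory_order_acquire);
        if (w - r == Capacity) return false;             // full
        slots[w & (Capacity - 1)] = msg;
        writePos.store(w + 1, std::memory_order_release);
        return true;
    }
    std::optional<T> pop()                   // consumer thread only
    {
        size_t r = readPos.load(std::memory_order_relaxed);
        size_t w = writePos.load(std::memory_order_acquire);
        if (r == w) return std::nullopt;                 // empty
        T msg = slots[r & (Capacity - 1)];
        readPos.store(r + 1, std::memory_order_release);
        return msg;
    }
private:
    T slots[Capacity];
    std::atomic<size_t> writePos{0}, readPos{0};
};
```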

Post

First, you throw your hands in the air while screaming in horror! ARM!!!!

Second, you recompile with the latest Xcode. If you have any assembler bits, you may need to tweak the opcodes.

The reality is, if you're developing for iOS, you're already developing for ARM. If you're using OpenGL, make the switch over to Metal before it's too late!

Considering how fast the latest iPads are, I'm sure we'll see ARM chips in the consumer laptops and desktops soon enough. Not sure about the Pro models, whether they'll stay on Intel, but I'm sure Apple wants to divest themselves of dependence upon any chip maker, since they were let down by Motorola.
I started on Logic 5 with a PowerBook G4 550MHz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

syntonica wrote: Mon Mar 23, 2020 10:27 pm Considering how fast the latest iPads are, I'm sure we'll see ARM chips in the consumer laptops and desktops soon enough. Not sure about the Pro models, whether they'll stay on Intel, but I'm sure Apple wants to divest themselves of dependence upon any chip maker, since they were let down by Motorola.
AFAIK, ARM is not owned by Apple :shrug:
Fernando (FMR)

Post

fmr wrote: Tue Mar 24, 2020 10:29 am
syntonica wrote: Mon Mar 23, 2020 10:27 pm Considering how fast the latest iPads are, I'm sure we'll see ARM chips in the consumer laptops and desktops soon enough. Not sure about the Pro models, whether they'll stay on Intel, but I'm sure Apple wants to divest themselves of dependence upon any chip maker, since they were let down by Motorola.
AFAIK, ARM is not owned by Apple :shrug:
No, but their A-series ARM processors are of their own design. I think they still farm out the actual manufacturing, but they'll probably have their own fab at some point.
I started on Logic 5 with a PowerBook G4 550MHz. I now have a MacBook Air M1 and it's ~165x faster! So, why is my music not proportionally better? :(

Post

The desktop market for plugins is saturated; in a market where you can already buy 20 great compressors and 30 great equalizers, it is difficult to break through with your own product.

>Apple will switch to ARM processors: what does it mean for plugin developers?

It means that (almost) all your competition disappears and you have the once-in-a-lifetime opportunity to be "in on the ground floor" and make a ton of sales.
