Fathom Synth Development Thread

VST, AU, AAX, CLAP, etc. Plugin Virtual Instruments Discussion

Post

The difference between live and dead music is the dynamics of the envelopes!
They would improve Fathom enormously.

Thanks for the great job.

Post

Well, the modulation issue I was having with my faux channel pressure (an IR light beam on a Roland A49) turned out to be caused by operating in a room with low light... go figure. It was a hardware issue.
We jumped the fence because it was a fence, not because the grass was greener.
https://scrubbingmonkeys.bandcamp.com/
https://sites.google.com/view/scrubbing-monkeys

Post

OK, that's good to hear.

2.24 will be out Monday.

Just a few bug fixes.
Plus I moved the spectrum graph processing into the UI thread to improve CPU efficiency.
I also added a spectrum graph smoothing setting.

I'm also looking into CUDA GPU processing on the video card.
It's possible to use an NVIDIA GPU as a massively parallel math processor.
Originally I was planning to use vector SIMD instructions on the main CPU like most synths,
but that only yields a speed gain of x4 or x8 depending on the processor.
CUDA processing would give a speedup of x32 or x64 or more, depending on the video card.

The main advantage of CUDA GPU processing is that it would enable real-time rendering of the individual partials, which would open up true spectral morphing.
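
Roughly, the rendering step would look something like this (just an illustrative sketch, not Fathom's actual code; the Partial struct and kernel name are made up):

Code:

#include <cuda_runtime.h>
#include <math.h>

// One hypothetical sine partial: frequency, amplitude, phase.
struct Partial { float freq; float amp; float phase; };

// One thread per output sample; each thread sums every partial for its sample.
__global__ void renderPartialsKernel(const Partial* partials, int numPartials,
                                      float* out, int blockSize,
                                      float sampleRate, double startTime)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s >= blockSize) return;

    double t = startTime + s / (double)sampleRate;   // time of this sample
    float sum = 0.0f;
    for (int p = 0; p < numPartials; ++p)
        sum += partials[p].amp *
               sinf(2.0f * 3.14159265f * partials[p].freq * (float)t + partials[p].phase);
    out[s] = sum;
}

The same work could instead be split one thread per partial with a reduction at the end, but one thread per sample keeps the sketch simple.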

I'll be doing this first thing in January 2019.

Post

With some of the quick modulations and sequenced types of sounds, there's a lot of fun to be had with the DAW at :o 20 bpm :ud: changing the harmonic settings to be more melodic, adding some glide, setting suitable delays. Yada yada.

From the Techno folder, I've used Additive Wah modded for cheeriness, sustaining a low octave and playing along a few octaves higher... Rave Crystals Cmin with some glide, for moodiness, and finding good syncopations throughout the octaves. Some nice grunge at low pitches. Happy etc :hyper:

Post

Sounds like you're making good use of Fathom.

Post

2018 - Best Developer - Seaweed Audio and Fathom
viewtopic.php?f=1&t=516894

A new Fathom appreciation thread/topic on KVR

Post

Oh my goodness, that is awesome!

Post

FathomSynth wrote: Thu Dec 20, 2018 3:38 pm Oh my goodness, that is awesome!
Well deserved!!! :clap: :clap: :clap:
Windows 10 and too many plugins

Post

FathomSynth wrote: Sat Dec 15, 2018 8:02 pm I'm also looking into CUDA GPU processing on the video card.
It's possible to use an NVIDIA GPU as a massively parallel math processor.
Originally I was planning to use vector SIMD instructions on the main CPU like most synths,
but that only yields a speed gain of x4 or x8 depending on the processor.
CUDA processing would give a speedup of x32 or x64 or more, depending on the video card.
Is CUDA widespread enough to really rely on it? Would all that complexity still work without CPU/GPU "outsourcing"? I know my workstation would benefit from CUDA ;-)

FathomSynth wrote: Sat Dec 15, 2018 8:02 pm The main advantage of CUDA GPU processing is that it would enable real-time rendering of the individual partials, which would open up true spectral morphing.

I'll be doing this first thing in January 2019.
How do you blend and manage the world of individual (merged/split/intermodulating/chained) signal processors (signal chains) and the underlying level of all the partials, amplitudes and phases that make up a signal chain, in some sort of manageable framework and workable user interface? Things may become very complex very soon, not just from a technical perspective but also from a usage/user perspective.

You seem to have arrived at a crossroads. Going modular "analog style" is really a good idea; there's fast-growing interest in modular (VCV Rack, Eurorack), together with all sorts of retro emulations (hardware and software).

Going spectral would really distinguish Fathom from many competitors. But maybe going Kyma is too much.

Maybe first create the features/components to build/emulate our own "favorite" synths/configurations. The usual analog classics of course, like the Roland Juno, SCI Prophet and many others. And digital (VA) classics like the Clavia Nord Lead. Or, dare I say, the Nord Modular "in the box".

But also include some classics you cannot easily (re)create in a "normal" synthesizer or even a (semi)modular environment, like the Kawai K5/5000, Casio CZ and Yamaha DX/TX series (there are stunningly few emulations of the legendary TX81Z).

Maybe add some electro-mechanical features (Fender Rhodes, Wurlitzer)....

You will need a bigger (free) matrix....

Quite some challenges. Quite curious where all this may lead....

Post

CUDA GPU processing will work with any NVIDIA card.

For users on a Mac or using AMD cards, there are equivalent technologies such as Microsoft DirectCompute or OpenCL.

The only tricky part of using the CUDA GPU is the latency of moving blocks from CPU RAM to the GPU. The problem would arise if the latency were greater than one block period: 512 samples / 44100 Hz ≈ 11.6 milliseconds per block. Even in that case the problem can be solved by buffering the RAM transfers and running the GPU functions in parallel.
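
To put rough numbers on it (a back-of-envelope sketch, not Fathom code; the 20 ms figure is just a placeholder for whatever the measured transfer latency turns out to be):

Code:

#include <cmath>
#include <cstdio>

int main()
{
    const double sampleRate = 44100.0;  // Hz
    const int    blockSize  = 512;      // samples per block
    const double blockMs    = blockSize / sampleRate * 1000.0;      // ~11.6 ms

    const double latencyMs  = 20.0;     // hypothetical measured CPU<->GPU round trip
    const int    depth      = (int)std::ceil(latencyMs / blockMs);  // blocks of buffering needed

    std::printf("block period %.1f ms, pipeline depth %d blocks\n", blockMs, depth);
    return 0;
}

In other words, as long as a few blocks of buffering are acceptable, a transfer latency longer than one block period is not fatal.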

The cool thing is that I would not have to GPU-ize all of Fathom's processing code, only the part that renders the waveform buffer from the sine partials, which is a perfect case for parallel instructions.

The difference in speed would be so drastic that Fathom's entire architecture could be changed to rendering the partials in real time. This opens up a universe of spectral processing such as adding noise functions to individual partials or ranges of partials.

The reason I'm moving to CUDA instead of SIMD is that CPU SIMD (vector instructions), which most fast synths use, provides a speedup of at most x8 and in most cases x4. GPU processing, on the other hand, would yield at least x32 and, depending on the video card, x128 or more, which would basically make Fathom the fastest synth on the planet by a wide margin.

In that case the difference in speed between Fathom and any other synth would be like the difference between playing Battlefield 5 on a computer with a GeForce RTX 2070 and playing it on one with no video card.

Post

Hopefully, it will just work "as is" for those of us without NVIDIA cards ;)
John Braner
http://johnbraner.bandcamp.com
http://www.soundclick.com/johnbraner
and all the major streaming/download sites.

Post

I'm not quite sure, but only a small number of plugin makers have switched to CUDA. Let me try to remember, I knew of this free LE reverb, ah, got it: https://www.liquidsonics.com/software/reverberate-le/
But that's about it. I think I've read somewhere that though the idea is intriguing, it does not really work out. Could be wrong, programming CUDA is not my business... But I think all other synths only offload the GUI rendering to the GPU, which is equally important with all those fancy animations. Check out, for instance, what Melda is saying: https://www.meldaproduction.com/technol ... gin-kernel
Quote: "Although the GPU itself can rarely be used for actual audio calculations due to its realtime nature and need for minimum latency, visualisation is a huge part of professional audio software"

Post

I don't plan to ever add any GPU to my DAW. Extra point of failure, expense, heat, noise, and power consumption, for very little gain.

Intel HD graphics 4 ever!!!

I do have a kicking GTX in my gaming rig, however... :D
Windows 10 and too many plugins

Post

Hey Peter, thanks so much for the Best of 2018 thread, that is really cool.

Yes, you're right, CUDA will depend completely on the latency.

The only other factors are setting up CUDA in Visual Studio C++ for a plugin DLL, which is super easy, and converting the code that translates partials into waves into parallel instructions, which is not hard: repetitive tasks such as looping through an array of sine buffers and adding the results to a waveform buffer are a perfect example of code that should be parallelized.

So it all boils down to the CPU RAM to GPU latency. It is easy for me to imagine that some plugin developers have tried it but given up when the latency was greater than the time for one sample block at 44.1 kHz. But this could be because they are not considering a multi-threaded approach. The problem can be solved with a buffering approach, with the transfers and GPU work running in parallel each sample block. If the latency is L = N*T, where T is the time for one sample block, then N buffered blocks (threads) would be needed.

Equally effective would be simply transferring four sample blocks' worth of data every four sample blocks, thus cutting the latency problem by a factor of four.
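
For what it's worth, the buffering could look roughly like this with CUDA streams (an illustrative sketch only, not Fathom's actual code; the pipeline depth, buffer names, and the commented-out kernel call are placeholders):

Code:

#include <cuda_runtime.h>

const int kDepth     = 4;     // pipeline depth in audio blocks
const int kBlockSize = 512;   // samples per block

cudaStream_t streams[kDepth];
float* hostBuf[kDepth];       // pinned (page-locked) host RAM
float* devBuf[kDepth];        // GPU buffers

void initPipeline()
{
    for (int i = 0; i < kDepth; ++i) {
        cudaStreamCreate(&streams[i]);
        cudaMallocHost((void**)&hostBuf[i], kBlockSize * sizeof(float));
        cudaMalloc((void**)&devBuf[i], kBlockSize * sizeof(float));
    }
}

void submitBlock(int blockIndex)
{
    int s = blockIndex % kDepth;           // round-robin slot
    cudaStreamSynchronize(streams[s]);     // wait until this slot's previous block is done

    // queue asynchronously on this slot's stream: copy partial data in,
    // render the waveform, copy the result back
    cudaMemcpyAsync(devBuf[s], hostBuf[s], kBlockSize * sizeof(float),
                    cudaMemcpyHostToDevice, streams[s]);
    // renderPartialsKernel<<<grid, threads, 0, streams[s]>>>(...);  // hypothetical kernel
    cudaMemcpyAsync(hostBuf[s], devBuf[s], kBlockSize * sizeof(float),
                    cudaMemcpyDeviceToHost, streams[s]);
}

Because each slot has its own stream and pinned buffer, the copy in, the kernel, and the copy out of different blocks overlap, so the bus latency is paid per slot instead of stalling every block.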

Also, we are talking about a hardware bus on the motherboard between CPU RAM and the GPU, so the latency can't be that bad.

The reason I strongly suspect the latency problem can be solved is that GPU processing has to work for games, and the processing rate for games is at least 60 frames per second, which is very close to the roughly 86 blocks per second needed for audio processing at 44.1 kHz with 512 samples per block.

Audio processing is roughly twice as demanding, since the data needs to go from CPU to GPU and back to the CPU, whereas in games it only has to go one way. But a factor of 2 is not a big deal, especially since the transfer does not need to happen every sample block; it could, for instance, be done every 4 sample blocks and would still be fine for modulation of partials, etc.

Post

@FathomSynth

For the "2018 Thread" - It's simply what I feel. Fathom inspired one of my best/innovative tracks in 2018 and in my approach the Synth is "responsible" for the direction things take at least 50%. So I have to say thank you to you!

About CUDA: I didn't want to put you off the road to CUDA. It's just that many Google hits explain why it's not the best option. On the other hand, those links describe past situations, and technology evolves. Anyway, I read that you have a plan, which sounds like double buffering for screen animations... One last remark: I usually work on my projects at 96 kHz. Just mentioning it because you mentioned 44.1 kHz in your posts twice.
