Improvements!

Post

The problem is that they can only do it for very specific algorithms, and those algorithms are exactly the ones that u-he doesn't use. Note that I have no bad intentions, I'm just trying to inform you.

Post

Yeah, I hear a lot about convolution on the GPU, but I haven't heard much, if anything, about other applications. Machine learning is a different matter, but even there it isn't obvious to me how to tap into those new processors for realtime applications.

Post

DSP is by and large a serial process (you need one or more inputs, do an operation on them, then send the result to another block's input(s), which does its own thing, etc.). This does not scale well to GPUs, which are highly parallel processors (built to calculate a whole bunch of pixels simultaneously). This is why things like FFT/convolution are good for GPU, but not much else, not without huge penalties.
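
(A minimal sketch of that serial dependency, using a generic textbook one-pole lowpass rather than anything u-he actually ships: each output sample needs the previous one, so the time loop can't be split across GPU threads.)

```cpp
#include <cstddef>
#include <vector>

// One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
// Every output sample depends on the previous output, so the time
// loop is a strict serial dependency chain -- sample n+1 cannot be
// handed to another GPU thread before sample n is finished.
std::vector<float> onePoleLowpass(const std::vector<float>& x, float a)
{
    std::vector<float> y(x.size());
    float state = 0.0f;
    for (std::size_t n = 0; n < x.size(); ++n)
    {
        state += a * (x[n] - state); // feedback: needs the last result
        y[n] = state;
    }
    return y;
}
```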

Post

EvilDragon wrote: Wed Mar 30, 2022 7:37 am DSP is by and large a serial process (you need one or more inputs, do an operation on them, then send the result to another block's input(s), which does its own thing, etc.). This does not scale well to GPUs, which are highly parallel processors (built to calculate a whole bunch of pixels simultaneously). This is why things like FFT/convolution are good for GPU, but not much else, not without huge penalties.
I suppose GPU processing would then be interesting for the plugins by Acustica, and for convolution reverbs in general?

Since there are EQs based on convolution, would convolution therefore be applicable to synthesizer filters?

Either way, if a certain percentage of plugins run on the GPU in the future, then there's more CPU power for those plugins that aren't suitable. That's OK. It wouldn't be an improvement if 100% were now calculated on the GPU and the CPU sat unused. That's hypothetical, of course; the point is that you wouldn't want all plugins on the GPU anyway. But it would be nice to get a significant performance leap from the computer you have anyway, or the option to buy one with more GPU cores.

Post

Convolution is a rather "static" process; it would be extremely intensive if one tried to automate a convolution-based filter (like your basic filter envelope...). Not to mention that what Acustica is doing is a way, way, way more intensive version of convolution (Volterra integral equations/kernels). I don't see it happening.
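
(To make the cost concrete -- a rough, generic sketch, not Acustica's actual method: direct convolution already costs one multiply per kernel tap per output sample, and a modulated filter would need a fresh kernel, typically crossfaded against the old one, every time the envelope moves.)

```cpp
#include <cstddef>
#include <vector>

// Direct time-domain convolution: kernel.size() multiplies per output
// sample. A static reverb/EQ can amortize this with FFT partitioning,
// but a filter whose response changes every block (say, under an
// envelope) needs a new kernel -- and usually a crossfade between old
// and new kernels to avoid zipper noise -- which multiplies the work.
float convolveSample(const std::vector<float>& history, // last K inputs, newest first
                     const std::vector<float>& kernel)  // current impulse response
{
    float acc = 0.0f;
    for (std::size_t k = 0; k < kernel.size(); ++k)
        acc += history[k] * kernel[k];
    return acc;
}
```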

Post

EvilDragon wrote: Wed Mar 30, 2022 7:37 am DSP is by and large a serial process
Multichannel DSP is a very parallel process, though.
Murderous duck!
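
(Illustrating the point with a generic sketch: when channels don't interact, each one is an independent job, so the work is parallel across channels even though each channel's time loop stays serial.)

```cpp
#include <algorithm>
#include <execution>
#include <vector>

// Independent channels parallelize trivially: parallel across
// channels, still strictly serial within each channel's time loop.
void processChannels(std::vector<std::vector<float>>& channels, float a)
{
    std::for_each(std::execution::par, channels.begin(), channels.end(),
                  [a](std::vector<float>& ch)
                  {
                      float state = 0.0f;
                      for (float& s : ch) // serial per channel
                      {
                          state += a * (s - state);
                          s = state;
                      }
                  });
}
```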

Post

Unless you want to emulate channel crosstalk and so on. :)
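
(For illustration, a generic sketch: feedforward crosstalk is just a small mixing matrix per sample, which still parallelizes fine; it's crosstalk inside feedback paths that ties the channels back into one serial loop.)

```cpp
#include <array>

// Stereo crosstalk as a 2x2 mixing matrix: each output channel picks
// up a little of the other channel. As a feedforward step this is one
// tiny matrix multiply per sample; only crosstalk inside a feedback
// path forces the channels to be computed together, sample by sample.
std::array<float, 2> crosstalk(const std::array<float, 2>& in, float bleed)
{
    return { in[0] + bleed * in[1],
             in[1] + bleed * in[0] };
}
```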

Post

EvilDragon wrote: Mon Oct 03, 2022 9:13 am Unless you want to emulate channel crosstalk and so on. :)
Computing = limitations.
Crosstalk is not the emulation you're looking for.
Murderous duck!

Post

I think I might have asked for this already, but no harm in requesting again :). I would like to see Mid EQ Band Q and Distortion Output Gain in Hive, available in the Mod Matrix. Thanks!
Always Read the Manual!

Post

PieBerger wrote: Tue Jan 10, 2023 10:56 am I think I might have asked for this already, but no harm in requesting again :). I would like to see Mid EQ Band Q and Distortion Output Gain in Hive, available in the Mod Matrix. Thanks!
Isn't there an EQ section in the effects?
all the best

Post

niterateaudio wrote: Tue Jan 10, 2023 11:18 am
PieBerger wrote: Tue Jan 10, 2023 10:56 am I think I might have asked for this already, but no harm in requesting again :). I would like to see Mid EQ Band Q and Distortion Output Gain in Hive, available in the Mod Matrix. Thanks!
Isn't there an EQ section in the effects?
There is, but the Q factor for each band is fixed. I would like the Q for the Mid Band to be made variable and available as a so-called 'Hidden Parameter' that can be accessed via the Mod Matrix.

Always Read the Manual!

Post

Presswerk: could you make it possible to set HQ by default, please? It would be great to open the VST without having to check/click it every time. And/or maybe, as in Diva, add offline rendering in HQ.
(Totally in love with your instruments and FX, btw!)

Post

Just read the last few posts. Concerning GPU for audio processing, I'm asking myself whether somebody has already tackled the topic of physical modelling using FDTD methods. This is a good candidate for parallel computation. It also has the potential to allow for different (and more complex) sonic results than 'traditional' modal synthesis can offer.

Also, even for crosstalk between voices, the GPU might be used to quickly apply some weight matrix, couldn't it?

Please correct me if I'm wrong! ;-)
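
(A heavily simplified sketch of the FDTD idea -- the basic leapfrog update for an ideal 1D string, nothing like the full nonlinear schemes NESS works with: every grid point at the next time step reads only the two previous time steps, so all points can update in parallel, which is exactly the shape GPUs like.)

```cpp
#include <cstddef>
#include <vector>

// One time step of a 1D FDTD scheme for the ideal wave equation
// (a lossless string with fixed ends). lambda = c*dt/dx must be
// <= 1 for stability. Each interior point depends only on the two
// *previous* time steps, so all points can be updated in parallel.
void fdtdStep(const std::vector<float>& uPrev, // displacement at t-1
              const std::vector<float>& u,     // displacement at t
              std::vector<float>& uNext,       // displacement at t+1 (output)
              float lambda)
{
    const float l2 = lambda * lambda;
    for (std::size_t i = 1; i + 1 < u.size(); ++i) // endpoints stay fixed
        uNext[i] = 2.0f * u[i] - uPrev[i]
                 + l2 * (u[i + 1] - 2.0f * u[i] + u[i - 1]);
}
```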

Post

u-u-u wrote: Tue May 09, 2023 7:04 pm Just read the last few posts. Concerning GPU for audio processing, I'm asking myself whether somebody has already tackled the topic of physical modelling using FDTD methods. This is a good candidate for parallel computation. It also has the potential to allow for different (and more complex) sonic results than 'traditional' modal synthesis can offer.

Also, even for crosstalk between voices, the GPU might be used to quickly apply some weight matrix, couldn't it?

Please correct me if I'm wrong! ;-)
Did you have a look at the NESS project? http://www.ness.music.ed.ac.uk/

Though I would argue modal synthesis has not yet reached its full potential.
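
(For context, a bare-bones generic sketch of modal synthesis -- a sum of exponentially decaying sinusoids, one per resonant mode; the untapped potential would live in things this sketch leaves out, like time-varying mode frequencies or nonlinear coupling between modes.)

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal modal synthesis: the output is a sum of exponentially
// decaying sinusoids, one per resonant mode of the virtual object.
struct Mode { float freqHz; float decayPerSec; float amp; };

std::vector<float> renderModes(const std::vector<Mode>& modes,
                               float sampleRate, std::size_t numSamples)
{
    std::vector<float> out(numSamples, 0.0f);
    for (const Mode& m : modes)
        for (std::size_t n = 0; n < numSamples; ++n)
        {
            const float t = static_cast<float>(n) / sampleRate;
            out[n] += m.amp * std::exp(-m.decayPerSec * t)
                            * std::sin(6.2831853f * m.freqHz * t);
        }
    return out;
}
```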

Post

v1md wrote: Tue May 09, 2023 7:44 pm
u-u-u wrote: Tue May 09, 2023 7:04 pm Just read the last few posts. Concerning GPU for audio processing, I'm asking myself whether somebody has already tackled the topic of physical modelling using FDTD methods. This is a good candidate for parallel computation. It also has the potential to allow for different (and more complex) sonic results than 'traditional' modal synthesis can offer.
[...]
Did you have a look at the NESS project? http://www.ness.music.ed.ac.uk/
Yeah, I've thought about NESS and Physical Audio (which appears to be a spin-off of NESS). Sometimes when hearing the sound demos and/or playing around with those Physical Audio plugins, I get the feeling that it is hard enough to get anything working in real time at all that the sonic fine-tuning of the algorithms falls short. I mean people breaking out of the pure mathematical rule set of FDTD and applying additional DSP trickery to achieve more pleasing sonic results. I could imagine that a framework like GPU Audio could be a good basis for some sort of FDTD sound-synthesis playground, for further exploring and modifying existing FDTD methods without having to think too much about 'getting it to work at all'.
v1md wrote: Tue May 09, 2023 7:44 pm Though I would argue modal synthesis has not yet reached its full potential.
In which direction are you thinking here?
