logifuzz-vst-plugins wrote: ↑Sat Mar 02, 2019 12:19 am Beyond audio and the usual graphics/video/crypto mining, is anyone putting their GPU cores or extra memory to some other use?
THE INTRANCER wrote: ↑Sat Mar 02, 2019 2:51 am They were utilised in 2004 for the Spirit Rover on Mars and NASA's GPU cluster at the NASA Center for Climate Simulation (NCCS)
Looks like the first application is just our traditional graphics usage; the second one is a very interesting project for parallel computing!
https://www.geek.com/games/nvidia-techn ... rs-555177/
https://sciences.gsfc.nasa.gov/600/high ... /gpus.html
GPU Driven | The Future Of DAW Processing Or A Dream?
-
logifuzz-vst-plugins https://www.kvraudio.com/forum/memberlist.php?mode=viewprofile&u=434377
- KVRist
- 160 posts since 25 Jan, 2019 from Brazil
"I used to think I was indecisive, but now I'm not too sure."
Checkout my blog: VST Plugins Free Download.
- Banned
- 2288 posts since 24 Mar, 2015 from Toronto, Canada
logifuzz-vst-plugins wrote: ↑Sat Mar 02, 2019 12:19 am Beyond audio and the usual graphics/video/crypto mining, is anyone putting their GPU cores or extra memory to some other use?
telecode wrote: ↑Sat Mar 02, 2019 12:49 pm At my work we have a POC for trying to use GPU computing for statistical data analysis. Basically, some of the jobs take 2 or 3 days to complete, so we are trying to see if GPU computing can reduce that time. It hasn't really taken off yet due to too much other stuff that needs to get done and not enough human resources available to devote a person to getting it to work properly.
logifuzz-vst-plugins wrote: ↑Sat Mar 02, 2019 10:41 pm Interesting project, mind me asking what programming language you guys are using to do the GPU parallelism? This would be a proper academic project if the statistical task can benefit from multiprocessing.
I don't personally program it. They use R and Matlab. I just work on the infrastructure.
http://www.r-tutor.com/gpu-computing
https://www.mathworks.com/solutions/gpu-computing.html
Personally, I really don't know where GPU computing will go. It's all being hyped by companies that want to sell hardware, but the days of buying big honking expensive hardware and then throwing it into recycling after 7 years are coming to an end with the likes of AWS and Google cloud solutions.
Spotify Soundcloud Soundclick
Gear & Setup: Windows 10, Dual Xeon, 32GB RAM, Cubase 10.5/9.5, NI Komplete Audio 6, NI Maschine, NI Jam, NI Kontakt
-
- KVRist
- 404 posts since 12 Jan, 2008 from Sweden
PeterP_swe wrote: ↑Fri Mar 01, 2019 9:37 am GPU processing isn't that different from what UAD is doing with their hardware and plugins, so I can see it work.
It is different. UAD cards use dedicated DSP chips for audio processing.
-
- KVRAF
- Topic Starter
- 3251 posts since 30 Dec, 2014
Around 10 years ago, Nvidia posted this PDF explaining it... it's quite colourful.
https://www.nvidia.com/content/GTC/docu ... _GTC09.pdf
KVR S1-Thread | The Intrancersonic-Design Source > Program Resource | Studio One Resource | Music Gallery | 2D / 3D Sci-fi Art | GUI Projects | Animations | Photography | Film Docs | 80's Cartoons | Games | Music Hardware |
-
- KVRAF
- 3979 posts since 20 Feb, 2004
THE INTRANCER wrote: ↑Sun Mar 03, 2019 4:47 pm Around 10 years ago, Nvidia posted this PDF explaining it... it's quite colourful.
https://www.nvidia.com/content/GTC/docu ... _GTC09.pdf
Yeah, whatever happened to CUDA? It seemed like a huge deal with a lot of potential (and Nebula was using it, IIRC), and now, nothing.
Edit: After reading the thread, I'm guessing it had to do with latency.
A well-behaved signature.
-
- KVRist
- 126 posts since 13 Aug, 2017 from Gothenburg
PeterP_swe wrote: ↑Fri Mar 01, 2019 9:37 am GPU processing isn't that different from what UAD is doing with their hardware and plugins, so I can see it work.
tenshin111 wrote: ↑Sun Mar 03, 2019 3:24 pm It is different. UAD cards use dedicated dsp chips for audio processing.
In what way is it different?
There's nothing on their website* about specific audio extensions and I can't think of what special asm instructions you'd add just for audio. I'm genuinely interested if you have more information and I would love to read about it.
*) https://www.analog.com/en/products/proc ... sharc.html#
-
- KVRian
- 1265 posts since 9 Sep, 2005 from Oulu, Finland
UAD can design their whole hardware specifically for audio needs. The DSP chips themselves don't necessarily have to be audio specific; it suffices that they have good enough floating point and memory access performance. GPUs are designed foremost for video/graphics output to a display device directly connected to the GPU. They are not great at heavy two-way I/O between the GPU and the host system in real time, which audio typically needs.
-
- KVRist
- 126 posts since 13 Aug, 2017 from Gothenburg
Xenakios wrote: ↑Mon Mar 04, 2019 12:15 pm UAD can design their whole hardware specifically for audio needs. The DSP chips themselves don't necessarily have to be audio specific, it suffices that they have good enough floating point and memory access performance. GPUs are designed foremost for doing video/graphics that will be outputted into a display device directly connected to the GPU. They are not great for doing heavy 2-way I/O between the GPU and the host system in real time, which audio will typically need.
Sure, every time you want data back from the GPU we're talking about 10-20 ms* in my experience, which is significant for real-time use, but if you're in the mixing/mastering stage then it's usable.
And it's not per plugin, so you could in theory max out the GPU with a ton of plugins in the same batch without increasing the latency.
GPUs have been used to do physics, encoding and all sorts of calculations for a while now. I definitely think it's useful outside of graphics.
*) Those numbers are a few years old and may not have been optimal then, but I don't expect that to have changed drastically.
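To put those round-trip figures in perspective, here is a quick back-of-the-envelope conversion into samples (a minimal sketch; 48 kHz is just the rate discussed in this thread):

```python
def ms_to_samples(ms, sample_rate=48_000):
    """Convert a latency in milliseconds to a sample count at the given rate."""
    return ms * sample_rate / 1000

# The 10-20 ms GPU round-trip figure quoted above, expressed at 48 kHz:
print(ms_to_samples(10))  # 480.0 samples
print(ms_to_samples(20))  # 960.0 samples
```

So a single round trip at those figures would already cost several times a typical 128 or 256 sample audio buffer.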
-
- KVRer
- 2 posts since 24 Mar, 2019
Hanz Meyzer wrote: ↑Sat Mar 02, 2019 9:05 am It is possible with OpenCL. There are working examples on github. But now Apple deprecated OpenCL, because they suck. It was not old at all.
OpenCL is basically deprecated across the board in favor of more modern graphics APIs that include compute shaders: in the case of Apple you would use Metal compute shaders, and on Linux or Windows you could use Vulkan instead of OpenCL. So it's not like GPGPU is going away; it's been rolled into unified GPU APIs (other than CUDA, I suppose).
-
- KVRer
- 2 posts since 24 Mar, 2019
PeterP_swe wrote: ↑Mon Mar 04, 2019 1:07 pm Sure, every time you want data back from the GPU we're talking about 10-20 ms * in my experience, which is significant for real-time usage, but if you're in the mixing/mastering stage then it's usable.
In my testing with images on new hardware (RTX 2070), I am seeing upload and download times for 1080p YUV 4:2:0 buffers (roughly 3 MB) of around 0.7-0.8 ms. Actually, looking at the numbers, I may have something weird going on in my measurement, so I will come back and edit this post after I investigate.
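For context on whether 0.7-0.8 ms is plausible for a ~3 MB buffer, here is a rough ideal transfer-time estimate (a sketch; the ~12 GB/s figure is an assumed effective PCIe 3.0 x16 bandwidth, not a measured value):

```python
def transfer_time_ms(num_bytes, bandwidth_gb_per_s):
    """Ideal one-way transfer time over a link of the given bandwidth."""
    return num_bytes / (bandwidth_gb_per_s * 1e9) * 1000

frame = 3 * 1024 * 1024  # ~3 MB, roughly a 1080p YUV 4:2:0 frame
print(transfer_time_ms(frame, 12))  # ~0.26 ms
```

On that assumption the raw copy is a fraction of a millisecond, so most of a measured 0.7-0.8 ms would be driver and scheduling overhead rather than bandwidth.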
-
- KVRAF
- 4051 posts since 2 Jul, 2005
EvilDragon wrote: ↑Thu Feb 28, 2019 1:10 pm So 1 ms latency at 96k. That's 2 ms at 48k. I wouldn't find that acceptable
mcbpete wrote: ↑Thu Feb 28, 2019 2:30 pm Sounds OK to me - Apparently that's the same time it takes from a drummer hitting their snare to the sound hitting their eardrums
EvilDragon wrote: ↑Thu Feb 28, 2019 3:01 pm Not OK if you want to monitor with FX while recording live.
I've found that up to 256 samples is damn near unnoticeable for most people through their headphones at 48k. I've tested lots of different setups through the years with folks in and out of my studio, just to see how far it can be pushed before anyone says anything or I notice a change in the performance. I can't deal with much more than 128 myself for playing any keys that need accuracy, but I know it's there. If you don't tell people, they don't tend to notice much. I'd be amazed if a 2 ms difference mattered to anyone used to digital equipment if you didn't mention it.
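The buffer sizes being traded off here translate to milliseconds as follows (a trivial sketch of the arithmetic behind the 128/256-sample figures at 48 kHz):

```python
def buffer_latency_ms(samples, sample_rate):
    """Latency contributed by one audio buffer of the given size."""
    return samples / sample_rate * 1000

print(round(buffer_latency_ms(128, 48_000), 2))  # 2.67 ms
print(round(buffer_latency_ms(256, 48_000), 2))  # 5.33 ms
```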
Don't F**K with Mr. Zero.
- KVRian
- 665 posts since 1 Jan, 2018
Liquidsonics made a couple of free ones using CUDA, but it doesn't look like that made its way into any of their commercial products:
https://www.liquidsonics.com/software/reverberate-le/
https://www.liquidsonics.com/software/filtrate-le/
- KVRAF
- 2183 posts since 10 Jul, 2006 from Tampa
cthonophonic wrote: ↑Sat Mar 30, 2019 1:49 am Liquidsonics made a couple free ones using CUDA, but it doesn't look like that made its way into any of their commercial products:
https://www.liquidsonics.com/software/reverberate-le/
https://www.liquidsonics.com/software/filtrate-le/
I was just about to mention them when I saw your post. I've been following this topic for about six years now, and other than their free stuff, LiquidSonics doesn't seem to have used CUDA anywhere else. (And neither has anyone else, from what I've seen.) Reverberate LE has a built-in latency of at least 512 samples, so a few more milliseconds of latency isn't really going to matter much.
According to their website, the zero-latency CPU edition can be delayed by up to 8192 samples for lower CPU usage, but otherwise offers "zero" latency. The NVIDIA CUDA GPU edition, however, can't do less than 512 samples of delay (up to 8192):
"Zero latency CPU edition (or up to 8192 samples for lower CPU usage)
Low latency GPU edition (512 – 8192 samples delay)"
I guess they determined that implementing CUDA processing in their commercial products wasn't worth the performance hit. They didn't even upgrade the CUDA-based, "LE" versions of their plug-ins to 64-bit, which is another sign that they don't think there's a future in CUDA processing for plug-ins.
It's an interesting concept that keeps rearing its head on forums every so often. But from everything I've read about it, it will apparently never be useful for plug-in processing or anything else that's time-sensitive. While some may moan about that, I'm just glad I don't have to worry about getting a powerful graphics card now just to run more plug-ins.
Steve
Here's some of my stuff: https://soundcloud.com/shadowsoflife. If you hear something you like, I'm looking for collaborators.
- KVRAF
- 23101 posts since 7 Jan, 2009 from Croatia
EvilDragon wrote: ↑Thu Feb 28, 2019 1:10 pm So 1 ms latency at 96k. That's 2 ms at 48k. I wouldn't find that acceptable
mcbpete wrote: ↑Thu Feb 28, 2019 2:30 pm Sounds OK to me - Apparently that's the same time it takes from a drummer hitting their snare to the sound hitting their eardrums
EvilDragon wrote: ↑Thu Feb 28, 2019 3:01 pm Not OK if you want to monitor with FX while recording live.
Ah_Dziz wrote: ↑Sat Mar 30, 2019 1:04 am I’ve found that up to 256 samples is damn near unnoticeable for most people through their headphones at 48k. ...
It's not enough when multitracking live vocals or guitars, for example. You need the tightest possible latency then.