I'll have to do a couple-day session with a SmartGPT connected to a present-day multi-API search engine on this, to bullseye SOTA audio processing. SmartGPT may even offer "nucleus sampling" instances where we discover that a major corp is applying signal processing in other... unlikely... applications that could then be effectively assimilated into our future Web 3.0, Wi-Fi 7 (whatever) online AI music DAW.

BertKoor wrote: ↑Wed May 10, 2023 6:48 am
Some random thoughts...
Serverless / FaaS needs some proper thinking about the overall architecture.
I guess it's OK if the AI aspect sits in a serverless environment. Give it a prompt, it thinks for a while and it gives back an answer.
But the data flow of a traditional DAW? It's bonkers to put that in the cloud as nano-services.
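Agreed, and the split is easy to make concrete. A rough sketch of what I mean (all names here are hypothetical, not anyone's actual API): per-block DSP has a hard real-time budget measured in milliseconds, so it stays local, while the AI side can happily live in a serverless function whose answer gets applied at the next block boundary.

```typescript
// Sketch, assuming a hypothetical architecture: per-block DSP runs locally,
// AI requests go to a serverless endpoint out of band.
const SAMPLE_RATE = 48_000;
const BLOCK_SIZE = 128; // typical Web Audio render quantum

// Hard real-time budget for one audio block, in milliseconds.
function blockBudgetMs(blockSize: number, sampleRate: number): number {
  return (blockSize / sampleRate) * 1000;
}

// Local DSP: must finish inside the block budget (a simple gain stage here).
function processBlock(input: Float32Array, gain: number): Float32Array {
  const out = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) out[i] = input[i] * gain;
  return out;
}

// AI side: fine as serverless. It runs off the audio path, and its answer
// (e.g. new parameter values) is applied at a later block boundary.
async function askAI(prompt: string): Promise<{ gain: number }> {
  // Placeholder for a fetch() to a hypothetical FaaS endpoint.
  return { gain: 0.5 };
}

console.log(blockBudgetMs(BLOCK_SIZE, SAMPLE_RATE).toFixed(2)); // ~2.67 ms
```

At 48 kHz a 128-sample block gives you about 2.67 ms; a serverless cold start is one to two orders of magnitude more than that, which is exactly why the audio path can never wait on it.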
I have a theory that we can apply an existing prediction AI to our audio buffering scenario, so that latency and audio interpolation are effectively solved. Something about orthogonal signals in microwave links and Fourier theory. Macro ---> micro.
Adopting Ross Perot's maxim, we can look to existing online music services, albeit not as AI-driven as we will be: Audiotool, SoundCloud's stellar buffering/distribution practices (both practically located in Berlin), and other more songwriting-centric online audio services, not solely techno ones. (Human vocal producers.)
An aside: WebGPU could help in the latency dept. (or not), but beyond that I want unprecedented audio interpolation for all, well within a budget model.
We'd have to be the WebGPU poster child for that, if so.