This is the era we're heading into. AI is starting to be used for these kinds of tasks. It's early days yet, though; there will be much more to come.

motomotomoto wrote: ↑Sat Feb 03, 2024 11:32 am
I'm not aware of any plugins like this using any "AI" curve creation. As far as I know, they are all pink-noise-curve based. iZotope and smart:EQ have profiles that are instrument-trained, but those are static-EQ based, AFAIK.

simon.a.billington wrote: ↑Sat Feb 03, 2024 12:04 am
This is pretty much as I understand it. But an AI would have studied thousands of examples of what an ideal, good-sounding instrument should sound like, and it would use that as a baseline for knowing what to adjust.

motomotomoto wrote: ↑Tue Jan 30, 2024 4:21 am
I'm willing to bet that "context aware" just means it's dynamically adjusting to input frequencies. Watching the video, it appears to work just like Gullfoss (or Stabilizer, TEOTE, or Wavesfactory Equalizer, take your pick of these plugins), BUT you can adjust the curve yourself with those sliders, which none of the others can do. That allows it to be a powerful tone-shaping tool and, IMO, a worthy evolution of the concept if executed well.
But the user retains the ability to intervene, or to veto the results entirely, shaping them to fit the context of the whole mix and their own personal needs.
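To make the "pink-noise-curve based" idea concrete, here is a minimal sketch of how such a dynamic EQ might work: measure the signal's energy in log-spaced bands, compare against a pink-noise target (roughly -3 dB per octave), and nudge each band toward it. Everything here (function names, band count, the `strength` parameter standing in for user intervention) is an illustrative assumption, not any plugin's actual algorithm.

```python
import numpy as np

def pink_target_db(freq):
    """Pink noise power falls off as ~1/f, i.e. about -3 dB per octave."""
    return -10.0 * np.log10(max(freq, 1.0))  # dB, arbitrary reference

def eq_gains_toward_pink(signal, sr, n_bands=8, strength=0.5):
    """Compare band energies against a pink-noise target curve and
    return per-band corrective gains in dB, scaled by `strength`
    (0 = no correction, 1 = pull fully onto the target)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    # Log-spaced band edges from 40 Hz up to Nyquist
    edges = np.geomspace(40.0, sr / 2, n_bands + 1)
    gains = np.zeros(n_bands)
    for i in range(n_bands):
        mask = (freqs >= edges[i]) & (freqs < edges[i + 1])
        if not mask.any():
            continue
        measured_db = 10.0 * np.log10(spectrum[mask].mean() + 1e-12)
        center = float(np.sqrt(edges[i] * edges[i + 1]))
        gains[i] = strength * (pink_target_db(center) - measured_db)
    return gains - gains.mean()  # remove the overall level offset
```

Feeding it white noise (flat spectrum) would produce gains that cut the highs and boost the lows relative to each other, which is exactly the tilt a pink-noise matcher is supposed to impose. A real plugin would do this continuously on short windows rather than on a whole file, which is what makes it "dynamic".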
My guess is they are also leveraging the machine-learning accelerators built into modern hardware. If a chip can recognise a face, it can recognise an instrument.
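The instrument-recognition idea can be sketched in a few lines: extract a spectral feature from the audio and match it against per-instrument profiles. A shipping product would use a trained neural network on far richer features; this toy nearest-profile classifier, with made-up reference values, is purely illustrative of the concept.

```python
import numpy as np

def spectral_centroid(signal, sr):
    """A simple 'brightness' feature: the amplitude-weighted mean frequency."""
    mags = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    return float((freqs * mags).sum() / (mags.sum() + 1e-12))

# Hypothetical reference centroids (Hz) for two instrument classes.
PROFILES = {"bass": 150.0, "cymbal": 6000.0}

def classify(signal, sr):
    """Assign the signal to whichever profile its centroid sits closest to."""
    c = spectral_centroid(signal, sr)
    return min(PROFILES, key=lambda name: abs(PROFILES[name] - c))
```

A 100 Hz tone lands on the "bass" profile and an 8 kHz tone on "cymbal". Swap the centroid for a learned embedding and the profile table for thousands of trained examples, and you have the shape of what an "AI EQ" would plausibly be doing.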