I think the issue here is the difference between what AI can do and what an amp will do. Looking at the "amps VS tones" counts in the listed versions, they seem to give 5-10 settings per amp. Your point is a good one, but what exactly happens when the AI decides the in-between stages of an EQ? How accurate will it be under the various kinds of distortion and simulated cabinets/reverb (which is 100% what this is all about, from what I can tell), where even a minor tweak can really change things? IMO a good solution would be to do exactly what you want, but literally mark on every dial where the original captures are. That would be cool.

jamcat wrote: ↑Thu Aug 11, 2022 5:40 am
So whether I'm playing a '50s Fender or a '70s Marshall model, I want it to be 100% accurate AND 100% tweakable so I can dial in MY OWN tone, not just be limited to calling up a preset of someone else's tone.
AmpliTube 5 gives us lots of gear that's fully tweakable, while TONEX promises to deliver lots of tones that are fully accurate. But what we need is models that do both.
I think it's cool that soon we'll be able to capture really accurate snapshots of our own gear, but that's only the tip of the iceberg of what machine learning can do. I'm just hoping IK doesn't stop there with TONEX, because the old AmpliTube models are not going to be able to stand up against the AI Tone Models when it comes to realism.
Mainly, a list of cabinets would be great, and even if you're modeling a Ratt it's going to take 100 tones on its own at least. Seriously.
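The earlier worry about what the AI does "in between" captured dial positions can be sketched with a toy example. This is just a hypothetical illustration (the gain values, the tanh clipping stage, and the `amp` function are all my own assumptions, not how TONEX actually works): because a real amp applies a nonlinearity after the gain stage, simply blending two captured outputs is not the same as capturing the amp with the knob set halfway.

```python
# Hypothetical sketch: why "in-between" dial settings aren't a simple blend.
# A toy amp applies linear gain followed by tanh soft clipping, so blending
# the outputs of two captures differs from capturing at the halfway gain.
import math

def amp(signal, gain):
    """Toy amp: linear gain stage followed by tanh soft clipping."""
    return [math.tanh(gain * s) for s in signal]

signal = [0.1, 0.5, 0.9]

low  = amp(signal, 2.0)   # capture with the gain knob at 2
high = amp(signal, 8.0)   # capture with the gain knob at 8

# Naive interpolation: average the two captured outputs sample by sample
blended = [(l + h) / 2 for l, h in zip(low, high)]

# Ground truth: the amp actually set to the halfway gain of 5
actual = amp(signal, 5.0)

for b, a in zip(blended, actual):
    print(f"blended={b:.3f}  actual={a:.3f}")
```

At an input of 0.5, the blend gives roughly 0.88 while the amp at halfway gain gives roughly 0.99, so a modeler that learns the whole knob range has to do something smarter than crossfading captures, which is exactly why marking the original capture points on each dial would be useful.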