Neural Amp Modeler (NAM) amp capture/profile player, FREE (December 14, 2023 update: 0.7.7)
- KVRer
- 2 posts since 29 Mar, 2023
Can someone shoot me an updated Discord invite? The one in the pinned post is expired already. Thanks!
- KVRer
- 7 posts since 24 Mar, 2023
grav3serker wrote: ↑Wed Mar 29, 2023 6:21 pm
Can someone shoot me an updated Discord invite? The one in the pinned post is expired already. Thanks!

It won't let me post a discord link here, but it's https://pastebin.com/yGCRmKsr
- KVRer
- 2 posts since 29 Mar, 2023
northern_fox wrote: ↑Wed Mar 29, 2023 6:42 pm
It won't let me post a discord link here, but it's https://pastebin.com/yGCRmKsr

Thanks a bunch!
- KVRer
- 28 posts since 6 Nov, 2020
It depends on the profile. Most people are prioritizing quality over playback speed in their captures, though, so yes, they are generally CPU-intensive.
Mike Oliphant
github.com/mikeoliphant
- KVRAF
- Topic Starter
- 2608 posts since 23 Mar, 2005 from Detroit
Training a profile with more epochs can give higher quality/accuracy up to a certain point, which can be gauged from the ESR value/graph generated during and after training. The lower the ESR value, the more accurate the model is to the original signal. There comes a point of diminishing returns in training where the ESR won't get any lower with a higher epoch count. Usually, the more harmonic content an amp has, i.e. a higher-gain amp or pedal/amp combination, the more it can benefit from a higher number of epochs to be accurate.
*EDIT* See Ladron's post below. More epochs means higher CPU usage in the captured model. Options have been added into the current trainer software to give "feather" versions that train with fewer epochs for lighter/lower CPU usage, and a number of users making profiles are offering captures in both feather versions (slightly lower accuracy/quality) and higher epoch counts for higher accuracy/quality, *EDIT* at the expense of using more CPU.
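For anyone curious what ESR actually measures: it's the energy of the error signal divided by the energy of the target signal, so lower is better and 0 is a perfect match. A minimal sketch (the NAM trainer may apply pre-emphasis or other weighting on top of this, so treat it as an illustration, not the exact implementation):

```python
import numpy as np

def esr(target: np.ndarray, prediction: np.ndarray) -> float:
    """Error-to-signal ratio: residual energy divided by target energy.
    Lower is better; 0.0 means the prediction matches the target exactly."""
    residual = target - prediction
    return float(np.sum(residual ** 2) / np.sum(target ** 2))

# One second of a test tone at 48 kHz; a prediction at 99% of the target
# amplitude leaves a 1% residual, which is 0.01^2 = 0.0001 in energy terms.
y = np.sin(np.linspace(0, 2 * np.pi, 48000))
print(esr(y, 0.99 * y))  # -> 0.0001 (approximately)
```

This is why the ESR curve flattens out during training: once the residual energy stops shrinking, extra epochs can't push the ratio any lower.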
Last edited by metalifuxx on Thu Apr 06, 2023 7:49 pm, edited 1 time in total.
- KVRer
- 28 posts since 6 Nov, 2020
metalifuxx wrote: ↑Sat Apr 01, 2023 2:31 am
The more epochs means higher cpu usage in the captured model. There have been options added into the current trainer software version to give "feather" versions that train with less epochs for lighter/lower cpu usage

Increasing the number of epochs makes training take longer, but has no impact on how CPU-intensive the model is - that is determined by the network architecture. "Feather" models are less complex, requiring less CPU to run, but also generally resulting in higher error rates.
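A quick back-of-the-envelope sketch of why that is (the layer and channel numbers below are made up for illustration, not the real NAM configs): the per-sample inference cost is a function of the network's shape only, and the number of training epochs never appears in it.

```python
def macs_per_sample(layers: int, channels: int, kernel_size: int = 3) -> int:
    """Rough multiply-accumulate count per audio sample for a stack of
    dilated 1-D convolutions (WaveNet-style). Dilation changes *which* past
    samples each layer reads, not *how many* multiplies it does per sample."""
    return layers * channels * channels * kernel_size

# Hypothetical "feather" vs "standard" channel counts, purely illustrative:
feather = macs_per_sample(layers=10, channels=8)    # 1920 MACs/sample
standard = macs_per_sample(layers=10, channels=16)  # 7680 MACs/sample
print(feather, standard)  # cost scales with channels^2, so 4x the work
```

Training for 100 epochs instead of 10 changes the weight values, not the number of weights, so playback cost is identical either way.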
Mike Oliphant
github.com/mikeoliphant
- KVRAF
- 14991 posts since 26 Jun, 2006 from San Francisco Bay Area
I’m looking for a second story man and a demolition expert to break into Sweetwater after hours. We can capture the entire sales floor before they open.
Zerocrossing Media
4th Law of Robotics: When turning evil, display a red indicator light. ~[ ●_● ]~
- KVRAF
- Topic Starter
- 2608 posts since 23 Mar, 2005 from Detroit
From Scott Corgan:
Hey everyone!
It's been an awesome couple of weeks building and beta testing ToneHunt. We've got about 80 models/packs already up there (and that's not even half of what we plan to have up there)!
For those that haven't heard of the ToneHunt initiative, it started as and continues to be an (unofficial) free, open, secure, and collaborative effort (by many people/mods from this FB group) to make NAM models easy to find and easy to share. It will always be free, always be open, always be secure, and always be collaborative (sharing values with NAM itself!).
We're looking for some more beta testers who love NAM and have models they want to make easily available for everyone. Shoot me a message if you're interested in participating with your models!
For others just looking to find models and share them around, you can sign up to be notified when it officially launches at
https://tonehunt.org
- KVRist
- 245 posts since 22 Jun, 2020
MJACau wrote: ↑Tue Apr 04, 2023 8:36 am
Is it normal that when I load a .nam profile in Presonus Studio One, NAM needs 35% of CPU?

It depends on the profile. Most people are prioritizing quality over playback speed in their captures, though, so yes, they are generally CPU-intensive.

That's dumb, most of the 'Tone' is in the IRs. I'm getting stupidly high CPU usage with it. Overkill on the epochs just for amp colourisation.
The smallest minority on earth is the individual.
~A.Rand
- KVRer
- 28 posts since 6 Nov, 2020
As I said a few messages up, the number of epochs has nothing to do with CPU utilization at playback.
Mike Oliphant
github.com/mikeoliphant
- KVRer
- 7 posts since 24 Mar, 2023
MJACau wrote: ↑Tue Apr 04, 2023 8:36 am
Is it normal that when I load a .nam profile in Presonus Studio One, NAM needs 35% of CPU?

It depends on the profile. Most people are prioritizing quality over playback speed in their captures, though, so yes, they are generally CPU-intensive.

That's dumb, most of the 'Tone' is in the IRs. I'm getting stupidly high CPU usage with it. Overkill on the epochs just for amp colourisation.
Like Mike said, epochs and CPU are unrelated - you can prove this to yourself pretty easily (see attached). What will affect the CPU is the model settings - feather, lite, standard, etc. The IRs provide the EQ curve for the tone, but have nothing to do with gain / response / dynamics / frequency response of the amp.
You do not have the required permissions to view the files attached to this post.
- KVRian
- 511 posts since 13 Jul, 2006
It's probably more related to the size of the neural network. Not sure if that size is fixed, but it seems that way - the .nam files are all the same size. If there were different sizes, I'd expect NAM models with smaller file sizes to also take less CPU.
Find my (music) related software projects here: github.com/Fannon
- KVRer
- 7 posts since 24 Mar, 2023
It's not fixed - and you're 100% correct. If you look at feather, lite, and standard models you'll see the sizes vary from 80 kB to 400 kB, and the CPU usage scales accordingly. You can change these parameters if you use the code a bit more deeply than what's offered in the Colab or GUI versions, which are designed to make things simpler for most users.
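Since a .nam file is just JSON, you can check this yourself by inspecting a model's architecture and weight count; a rough sketch (the field names here are assumptions based on current NAM exports and may differ between versions):

```python
import json

def describe_nam(path: str) -> None:
    """Print the architecture name and weight count of a NAM model file.
    The weight count is what drives file size and playback CPU cost;
    a "feather" model should show noticeably fewer weights than "standard"."""
    with open(path) as f:
        model = json.load(f)
    print("architecture:", model.get("architecture"))
    print("weights:", len(model.get("weights", [])))
```

For example, calling describe_nam() on a feather capture and a standard capture of the same amp should show the same architecture family but a much smaller weight list for the feather version.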
- KVRist
- 245 posts since 22 Jun, 2020
The IRs provide the EQ curve for the tone, but have nothing to do with gain / response / dynamics / frequency response of the amp.

Colourisation isn't a monolithic thing; it is "gain / response / dynamics / frequency response of the amp". Anything that deviates from the original signal is distortion, aka colourisation.
The smallest minority on earth is the individual.
~A.Rand