camsr wrote: Another thing that accelerates that breakdown is using a naive method inside of another.
Hmmm, if possible, could you give an example of that? Off the top of my head I can think of only two reasons: quantization error, or the method not purely discretizing time (which in some cases could even be a bug in the algorithm). Otherwise this would mean that the algorithm handles low-frequency signals worse than high-frequency signals, since raising the sample rate just shifts the same signal content lower relative to Nyquist.
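Just to make that concrete (a toy of my own, not anything from your post): a one-pole smoother discretized naively with forward Euler versus the exact exponential mapping. The coefficient error depends only on the ratio of cutoff to sample rate, so it shrinks monotonically as the rate goes up, which is the behaviour I'd expect from any method that purely discretizes time.

```python
import numpy as np

# One-pole lowpass y[n] = y[n-1] + g * (x[n] - y[n-1]).
# Naive (forward Euler) coefficient: g = wc/fs.
# Exact mapping for a step-held input:  g = 1 - exp(-wc/fs).
# The mismatch is a function of wc/fs only, so higher fs always helps.

fc = 1000.0                                  # cutoff in Hz
for fs in (44100.0, 88200.0, 176400.0, 352800.0):
    wc = 2.0 * np.pi * fc
    g_naive = wc / fs
    g_exact = 1.0 - np.exp(-wc / fs)
    err = abs(g_naive - g_exact) / g_exact   # relative coefficient error
    print(f"fs = {fs/1000:6.1f} kHz   naive g = {g_naive:.6f}   "
          f"exact g = {g_exact:.6f}   relative error = {100*err:.3f} %")
```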
aciddose wrote: I'd say it does matter to the end user when the end user is asking about sample rates. This is because the hardware is using over-sampling ("higher sample rates") and specialty hardware to produce the result it does as efficiently as it does.
Okay, you could probably see it that way: a "cheap" hardware ADC/DAC uses oversampling in the digital domain to improve the quality. But then that's one more demonstration of a process whose quality increases with the sample rate.
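To illustrate (my own toy numbers, nothing to do with real converter hardware): dithered quantization at the base rate versus the same quantizer run at 16x the rate followed by a decimation lowpass. The in-band quantization noise drops roughly in proportion to the oversampling ratio (about 3 dB per doubling, before any noise shaping), which is exactly the "quality increases with sample rate" behaviour.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

fs = 48000
os_factor = 16
bits = 6                              # deliberately coarse so the effect is obvious
step = 2.0 / (2 ** bits)              # quantizer step for a full-scale range of [-1, 1]

def quantize(x):
    # TPDF dither so the quantization error behaves like broadband noise
    dither = (rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)) * step
    return np.round((x + dither) / step) * step

def inband_noise_db(y, x):
    # noise power below 20 kHz, measured at the base rate
    sos = signal.butter(8, 20000, fs=fs, output='sos')
    e = signal.sosfilt(sos, y - x)
    return 10 * np.log10(np.mean(e ** 2))

n = 1 << 16
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * 1000 * t)

# quantize directly at the base rate
plain_db = inband_noise_db(quantize(x), x)

# quantize at the oversampled rate, then lowpass + decimate back down
kaiser = ('kaiser', 12.0)             # sharp resampling filters so they don't dominate the error
x_os = signal.resample_poly(x, os_factor, 1, window=kaiser)
y = signal.resample_poly(quantize(x_os), 1, os_factor, window=kaiser)
os_db = inband_noise_db(y, x)

print(f"in-band quantization noise, 1x:  {plain_db:6.1f} dB")
print(f"in-band quantization noise, {os_factor}x: {os_db:6.1f} dB")
```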
mystran wrote: For something like an EQ in pure LTI operation, it's computationally cheaper really to just deal with the Nyquist problems directly, rather than oversample, since a fairly short (as in: less than one branch of an oversampling kernel) special purpose FIR fit can do wonders (not to mention you can get nice "analog" phase with less latency than what you'd need for linear-phase oversampling). The downsides are that coefficient updates now become more expensive as you have to redo the FIR fitting (although if you do it right, you won't need a particularly expensive fitting process) and the results in time-varying operation won't be very meaningful.
Regardless of how close to perfect the 44kHz version might be, I'd guess that the quality will still increase, if only marginally, at higher sample rates?
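Out of curiosity I tried to sketch what I think is meant by the short FIR fit (this is purely my reading of it, so the peaking-EQ prototype, the 8-tap length and the plain complex least-squares fit are all my own assumptions, not mystran's actual method): design a peaking EQ near Nyquist with the bilinear transform, then fit a short FIR so that the cascade matches the analog prototype's complex response instead of cramping towards Nyquist.

```python
import numpy as np
from scipy import signal

fs = 44100.0
f0 = 12000.0          # peak centre, deliberately close to Nyquist
gain_db = 6.0
Q = 1.0
n_taps = 8            # "less than one branch of an oversampling kernel"

# RBJ-style analog peaking prototype, normalised to w0 = 1
A = 10.0 ** (gain_db / 40.0)
b_s = np.array([1.0, A / Q, 1.0])
a_s = np.array([1.0, 1.0 / (A * Q), 1.0])

# scale to the prewarped centre frequency (s -> s/wo), then bilinear-transform
w0_warp = 2.0 * fs * np.tan(2.0 * np.pi * f0 / (2.0 * fs))
b_a, a_a = signal.lp2lp(b_s, a_s, wo=w0_warp)
b_z, a_z = signal.bilinear(b_a, a_a, fs=fs)

# frequency grid in rad/sample, avoiding exactly DC and Nyquist
w = np.linspace(0.001, 0.999 * np.pi, 512)
_, h_analog = signal.freqs(b_a, a_a, worN=w * fs)   # target: analog response
_, h_digital = signal.freqz(b_z, a_z, worN=w)       # cramped bilinear response

# fit FIR coefficients so FIR(e^jw) ~= h_analog / h_digital
# (complex least squares with real taps: stack real and imaginary parts)
d = h_analog / h_digital
E = np.exp(-1j * np.outer(w, np.arange(n_taps)))
h_fir, *_ = np.linalg.lstsq(
    np.vstack([E.real, E.imag]),
    np.concatenate([d.real, d.imag]),
    rcond=None,
)

err_before = np.max(np.abs(20 * np.log10(np.abs(h_digital / h_analog))))
err_after = np.max(np.abs(20 * np.log10(np.abs(h_digital * (E @ h_fir) / h_analog))))
print(f"max magnitude error vs analog: {err_before:.2f} dB -> {err_after:.2f} dB")
```

The coefficient-update cost mentioned in the quote shows up here too: every parameter change means redoing the least-squares solve on top of recomputing the biquad, and the actual fitting in a real EQ is presumably smarter about weighting and phase than this blunt version.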