Ah yeah should have been 2^n. Thanks for the catch.
Smooth approximations of 2^x

mtytel
 KVRist
 85 posts since 28 Jan, 2013 from Oakland

Re: Smooth approximations of 2^x
Smoothness of a function is about the continuity of its derivatives.
This approximation method is piecewise, with a seam at every integer x. If you just linearly interpolate between the powers of two, you end up with discontinuities in the first derivative at each integer. The 2nd-degree polynomial in the PDF makes the first derivative continuous, but the second derivative is then discontinuous.
Your approximation has a (very slight) discontinuity in the first derivative (and subsequent derivatives) when you wrap around from the end of the window to the beginning (multiplied by two).
In Desmos you can just add a prime (') after your function name to evaluate derivatives and see this.

juha_p
 KVRian
 572 posts since 21 Feb, 2006 from FI
Re: Smooth approximations of 2^x
Does this mean the error behavior at each end matters (is equal error at both ends the best option)? I prepared another approximation (same technique as earlier) which has about equal error at -0.5 and 0.5, but I don't know if this is the way to go.
mtytel wrote: ↑Tue Feb 19, 2019 10:15 pm
Smoothness of a function is about the continuity of derivatives of a function.
...
Your approximation has a (very slight) discontinuity in the first derivative (and subsequent derivatives) when you convert from the end of the window to the beginning (multiplied by two).
I think you can just add a tick after your function name to evaluate derivatives in desmos to see.
Code: Select all
double my_pow2x(double x) {
    // p approximates 2^(x/2); squaring it gives 2^x
    double p = ((((0.00004190627326439316 * x
                 + 0.0006011232071797441) * x
                 + 0.006938013800124066) * x
                 + 0.06005662673708935) * x
                 + 0.3465735902799904) * x
                 + 1.0;
    return p * p;
}
Code: Select all
absolute error
 x        std::pow(2,x)   my_pow2x       z1202_pow2x
-0.500    0.7071067812    7.535280e-08   3.893469e-06
-0.475    0.7194667900    5.650777e-08   3.690597e-06
...
+0.450    1.3660402568    3.837408e-08   8.234943e-06
+0.475    1.3899182198    5.425599e-08   8.073019e-06
+0.500    1.4142135624    7.535280e-08   7.786937e-06

mtytel
 KVRist
 85 posts since 28 Jan, 2013 from Oakland
Re: Smooth approximations of 2^x
Here's an illustration on the smoothness difference:
https://www.desmos.com/calculator/3p8glvtjjk
Notice how the PDF's polynomial function and derivatives stay continuous through the piecewise transition.
Your function has a discontinuity in it when you piece it together. Even if you shift your polynomial to remove that discontinuity, you'll still have discontinuities in the first derivative.
Edit:
Though if you only want to evaluate on the -0.5 to 0.5 window, your polynomial is better.

juha_p
 KVRian
 572 posts since 21 Feb, 2006 from FI
Re: Smooth approximations of 2^x
Thanks, especially the desmos example.
This is a new subject for me, so I wonder(ed) why less accuracy works better here.

andycytomic
 KVRAF
 2271 posts since 3 Dec, 2008
Re: Smooth approximations of 2^x
Thanks for posting about this, Vadim. I really like the endpoint derivative-matching criterion, without specifying explicitly what the derivative has to be. Great idea, and I hope I can apply this to other situations.
Z1202 wrote: ↑Fri Feb 08, 2019 5:39 am
Following the neighbor thread viewtopic.php?f=33&t=519728 I thought I'd post some approximations of 2^x which I did a couple of years ago. The key feature of these approximations is that they are smooth at the seams, which makes them particularly suitable for quite a few music DSP needs even at relatively low precision, which means saving some CPU.
I post the results in the form that I already have them (PDF). My apologies for the small inconvenience of having to download and unpack the file.
I agree that matching the derivative at the endpoints is important, but in most listening tests I've done, matching beyond C2 (2nd derivative) didn't make much difference. With this in mind, up to 3rd order the approximations you've done are essentially optimal, apart from perhaps some DC offset to make the error more symmetric (which you pointed out). But for 4th order and above I think it's good to make a few changes.
in the following g(x) is the nth order poly, and f(x) = 2^x
for 4th order I've found this system of equations keeps the error down a little, and is more symmetric in both positive and negative relative errors, but still not too much in it from what you've presented:
g(17/1000) == f(17/1000)
g(1/2) == f(1/2)
g(1/2) == 2 g(-1/2)
g'(1/2) == 2 g'(-1/2)
g''(1/2) == 2 g''(-1/2)
error ±0.064 cents (vs ±0.08 cents, so not much better)
g(x) = 0.999996375 + 0.692932426 x + 0.24015855 x^2 + 0.05669742 x^3 + 0.00998657 x^4
but for 5th order you have two degrees of freedom so if you forgo the 3rd and 4th derivative matches you can get the error down a lot:
g(194/1000) == f(194/1000)
g(129/1000) == f(129/1000)
g(1/2) == f(1/2)
g(1/2) == 2 g(-1/2)
g'(1/2) == 2 g'(-1/2)
g''(1/2) == 2 g''(-1/2)
error ±0.00043 cents (vs ±0.01 cents, so 23 times better!)
g(x) = 1.000000237 + 0.69314655 x + 0.24021519 x^2 + 0.05550965 x^3 + 0.00969821 x^4 + 0.00132508 x^5
Thanks again for sharing your ideas, really appreciated.
The Glue, The Drop  www.cytomic.com

Z1202
 KVRian
 1147 posts since 12 Apr, 2002
Re: Smooth approximations of 2^x
Hmmm, IIRC I was comparing my approximations to minimax and the error was always just marginally worse. But I don't really remember; maybe I didn't compare up to the 5th order, or maybe I made some mistakes. Thanks for digging further!
andycytomic wrote: ↑Tue Feb 25, 2020 4:50 am
but for 5th order you have two degrees of freedom so if you forgo the 3rd and 4th derivative matches you can get the error down a lot:
...........
error ±0.00043 cents (vs ±0.01 cents, so 23 times better!)

andycytomic
 KVRAF
 2271 posts since 3 Dec, 2008
Re: Smooth approximations of 2^x
It only really starts kicking in at 5th order or above; your method of matching derivatives still applies with great results, just only up to the 2nd derivative. Using Estrin's scheme instead of Horner evaluation pretty much means that powers of 2 win in terms of most efficient evaluation, and 6 isn't a power of two but 8 is, so I end up using 7th-order polynomials and keep the error down to ±0.0000007 cents with almost unnoticeable CPU difference.
https://en.wikipedia.org/wiki/Estrin%27s_scheme
The Glue, The Drop  www.cytomic.com

KBonzo
 KVRer
 16 posts since 21 Feb, 2019
Re: Smooth approximations of 2^x
Bit-shifting tricks are interesting, but with memory being cheap, isn't it better to use a table with interpolation and avoid undefined behavior?

andycytomic
 KVRAF
 2271 posts since 3 Dec, 2008
Re: Smooth approximations of 2^x
Sure, memory is cheap, but cache misses are expensive. There are intrinsics for all the "tricks", so there is no undefined behaviour, and you can do them on an f32x4 or wider SIMD vector as well, so you get at least four evaluations of 2^x for the price of one, using the same small set of coefficients for all of them, which stays in cache easily.
The best way to know for sure which is quicker is to test both methods. In my testing, polynomial or even rational approximations end up being quicker than table-based approaches, but I still use tables for precomputing complicated shapes.
The Glue, The Drop  www.cytomic.com

Z1202
 KVRian
 1147 posts since 12 Apr, 2002
Re: Smooth approximations of 2^x
The question is though, whether at an error of 7e-7 cents the continuity of derivatives is audible at all (I'd guess not). Also it seems that it's beyond 32-bit FP precision anyway. Therefore I'd consider simply using minimax in that range.
andycytomic wrote: ↑Tue Feb 25, 2020 7:32 am
It only really starts kicking in at 5th order or above; the method of matching derivatives still applies with great results, just only up to the 2nd derivative. Using Estrin's scheme instead of Horner evaluation pretty much means that powers of 2 win in terms of most efficient evaluation, and 6 isn't a power of two but 8 is, so I end up using 7th-order polynomials and keep the error down to ±0.0000007 cents with almost unnoticeable CPU difference.

mystran
 KVRAF
 5607 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Smooth approximations of 2^x
Yeah, for a 1/octave single-precision control signal, with the zero point somewhere around the middle of the hearing range, the precision 4-5 octaves up or down (which is about the audible range) gets you about 0.0005 cents worth of accuracy before you even exponentiate. That said, I would argue you probably don't need anywhere NEAR that good accuracy for typical tuning purposes. I used to have a table-based approximation with about 0.02 cents of error and never noticed any issues (even before I eventually added lerps to the thing).
Z1202 wrote: ↑Tue Feb 25, 2020 10:41 pm
The question is though, whether at an error of 7e-7 cents the continuity of derivatives is audible at all (I'd guess not). Also it seems that it's beyond 32-bit FP precision anyway. Therefore I'd consider simply using minimax in that range.
andycytomic wrote: ↑Tue Feb 25, 2020 7:32 am
It only really starts kicking in at 5th order or above; the method of matching derivatives still applies with great results, just only up to the 2nd derivative. Using Estrin's scheme instead of Horner evaluation pretty much means that powers of 2 win in terms of most efficient evaluation, and 6 isn't a power of two but 8 is, so I end up using 7th-order polynomials and keep the error down to ±0.0000007 cents with almost unnoticeable CPU difference.
That said, the approximation I've been using lately is 2DaT's exp_mp (this one I think: viewtopic.php?p=7161124#p7161124) which is about 3(?) ulps for single precision.
Please just let me die.

Z1202
 KVRian
 1147 posts since 12 Apr, 2002
Re: Smooth approximations of 2^x
The potentially sensitive areas would be detuned-oscillator beating and converting audio-rate modulation signals, where subtle spectral differences can become audible. I'm not sure off the top of my head what the required precision range for that is.

mystran
 KVRAF
 5607 posts since 12 Feb, 2006 from Helsinki, Finland
Re: Smooth approximations of 2^x
Beating of detuned oscillators (at higher frequencies), sure. Whether spectral differences would be noticeable is a trickier question, because it probably depends on how clean the rest of the signal path is.
Either way, in practice it's usually cheap enough to come within a few ulps of single precision that there probably isn't any good reason not to do so. I'm somewhat skeptical about whether continuity really matters at that point though (and for something like Newton-Raphson, which requires continuity, it's probably faster to just improve the approximation until you can assume dexp(x)/dx = exp(x) and skip a separate evaluation for the derivative).
Please just let me die.

andycytomic
 KVRAF
 2271 posts since 3 Dec, 2008
Re: Smooth approximations of 2^x
Well, these are the coefficients for the linked approximation of 2^x in the range -1/2 to 1/2 (shown orange):
mystran wrote: ↑Wed Feb 26, 2020 2:24 am
Yeah, for a 1/octave single-precision control signal, with the zero point somewhere around the middle of the hearing range, the precision 4-5 octaves up or down (which is about the audible range) gets you about 0.0005 cents worth of accuracy before you even exponentiate. That said, I would argue you probably don't need anywhere NEAR that good accuracy for typical tuning purposes. I used to have a table-based approximation with about 0.02 cents of error and never noticed any issues (even before I eventually added lerps to the thing).
That said, the approximation I've been using lately is 2DaT's exp_mp (this one I think: viewtopic.php?p=7161124#p7161124) which is about 3(?) ulps for single precision.
{1.000000119000000, 0.6931469440000000, 0.2402212024000000, 0.05550713092000000, 0.009675540961000000, 0.001327647245000000}
The above isn't even C0. If you want something with a lower error, try this one, which is at least C0 continuous and has a more symmetric error (shown blue):
{1.000000088673463, 0.69314693211407, 0.24022037362574, 0.0555072548370, 0.0096798351988, 0.0013285658116}
relative error plots of the two:
Edit: and here are the equations so anyone can tweak them
Code: Select all
g[x_] = a0 + a1 x + a2 x^2 + a3 x^3 + a4 x^4 + a5 x^5;
f[x_] = 2^x;
Solve[{g[1/2] == f[1/2], g[1/2] == 2 g[-1/2],
g[378/1000] == f[378/1000], g[157/1000] == f[157/1000],
g[110/1000] == f[110/1000], g[354/1000] == f[354/1000]},
{a0, a1, a2, a3, a4, a5}];
Last edited by andycytomic on Wed Feb 26, 2020 11:13 pm, edited 3 times in total.
The Glue, The Drop  www.cytomic.com