Smooth approximations of 2^x

DSP, Plugin and Host development discussion.

Post

Z1202 wrote: Tue Feb 19, 2019 9:19 am
mtytel wrote: Mon Feb 18, 2019 8:08 pm This time you keep the derivatives of the function evaluated at 2 at half of the derivatives evaluated at 1:
f'(1) = 2f'(2)
I got f^(n)(1) = 2^n f^(n)(2), probably that's the reason
Ah yeah, should have been 2^n. Thanks for the catch.

Post

juha_p wrote: Tue Feb 19, 2019 8:41 pm ... but how do I see/know whether it's "smooth", or "smoother" than the one found in your linked PDF (an equal-degree approximation is plotted for comparison)?
Smoothness of a function is about the continuity of its derivatives.

This approximation method is piecewise at every integer x. If you just linearly interpolate between the powers of two, you'll end up with discontinuities in the first derivative at each integer. The 2nd-degree polynomial in the PDF makes the first derivative continuous, but the second derivative is then discontinuous.

Your approximation has a (very slight) discontinuity in the first derivative (and subsequent derivatives) when you wrap from the end of the window back to the beginning (multiplied by two).
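
To spell out the seam conditions (notation assumed here, but consistent with the equations posted later in the thread): if g(x) approximates 2^x on [-1/2, 1/2] and the full function is built piecewise as 2^n g(x - n) on [n - 1/2, n + 1/2], then matching the pieces at each seam up to the k-th derivative requires

2^n g^(k)(1/2) == 2^(n+1) g^(k)(-1/2), i.e. g^(k)(1/2) == 2 g^(k)(-1/2), for k = 0, 1, 2, ...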

I think you can just add a prime (tick) after your function name in Desmos to plot the derivatives and see.

Post

mtytel wrote: Wed Feb 20, 2019 6:15 am
Smoothness of a function is about the continuity of its derivatives.
...
Your approximation has a (very slight) discontinuity in the first derivative (and subsequent derivatives) when you wrap from the end of the window back to the beginning (multiplied by two).

I think you can just add a prime (tick) after your function name in Desmos to plot the derivatives and see.
Does this mean the error situation at each end (is equal error the best option)? I prepared another approximation (same technique as earlier) which has about equal error at -0.5 and 0.5, but dunno if this is the way to go.

Code: Select all

double my_pow2x(double x){
	// 5th-degree polynomial approximation of 2^(x/2) on [-0.5, 0.5],
	// evaluated in Horner form (the linear coefficient is ln(2)/2)
	double p = ((((0.00004190627326439316*x +
	               0.0006011232071797441)*x +
	               0.006938013800124066)*x +
	               0.06005662673708935)*x +
	               0.3465735902799904)*x +
	               1.0;
	// squaring gives 2^(x/2) * 2^(x/2) = 2^x
	return p*p;
}
Error comparison:

Code: Select all

   x     std::pow(2,x)     my_pow2x abs err  z1202_pow2x abs err
-0.500   0.7071067812      7.535280e-08      3.893469e-06
-0.475   0.7194667900      5.650777e-08      3.690597e-06
...
+0.450   1.3660402568      3.837408e-08      8.234943e-06
+0.475   1.3899182198      5.425599e-08      8.073019e-06
+0.500   1.4142135624      7.535280e-08      7.786937e-06
Regarding Desmos, I couldn't find derivative-related examples other than this: g(x) = (d/dx) f(x) ... did you mean this?

Post

Here's an illustration of the smoothness difference:
https://www.desmos.com/calculator/3p8glvtjjk

Notice how the PDF's polynomial and its derivatives stay continuous through the piecewise transition.
Your function has a discontinuity when you piece it together. Even if you shift your polynomial to remove that discontinuity, you'll still have discontinuities in the first derivative.

Edit:
Though if you only want to evaluate on the -0.5 to 0.5 window your polynomial is better.
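
To see the seam mismatch concretely, here's a quick numerical check (a sketch, assuming the my_pow2x function posted above is linked in): for a C0-continuous piecewise extension, g(1/2) must equal exactly 2 g(-1/2).

Code: Select all

#include <stdio.h>

double my_pow2x(double x);  /* the function posted above */

int main(void) {
    /* C0 seam condition: g(1/2) == 2 g(-1/2). A nonzero difference
       here is the (very slight) jump at every integer. */
    double mismatch = my_pow2x(0.5) - 2.0 * my_pow2x(-0.5);
    printf("g(1/2) - 2*g(-1/2) = %.3e\n", mismatch);
    return 0;
}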

Post

Thanks, especially for the Desmos example.
This is a new subject for me, so I wonder(ed) why less accuracy works better here.

Post

Z1202 wrote: Fri Feb 08, 2019 1:39 pm Following the neighbor thread viewtopic.php?f=33&t=519728
I thought I'd post some approximations of 2^x which I did a couple of years ago. The key feature of these approximations is that they are smooth at the seams, which makes them particularly suitable for quite a few music DSP needs even at relatively low precision, which means saving some CPU.

I'm posting the results in the form I already have them (PDF). My apologies for the small inconvenience of having to download and unpack the file.
Thanks for posting about this, Vadim. I really like the endpoint derivative-matching criterion, matching the derivatives without specifying explicitly what they have to be: great idea, and I hope I can apply it to other situations :)

I agree that matching the derivatives at the endpoints is important, but in most listening tests I've done, matching beyond C2 (the 2nd derivative) didn't make much difference. With this in mind, the approximations you've done up to 3rd order are close to optimal, apart from perhaps some DC offset to make the error more symmetric (which you pointed out). But for 4th order and above I think it's good to make a few changes.

In the following, g(x) is the nth-order polynomial and f(x) = 2^x.

For 4th order, I've found this system of equations keeps the error down a little and makes the positive and negative relative errors more symmetric, but there's still not much in it compared with what you've presented:
g(-17/1000) == f(-17/1000)
g(-1/2) == f(-1/2)
g(1/2) == 2 g(-1/2)
g'(1/2) == 2 g'(-1/2)
g''(1/2) == 2 g''(-1/2)

error +- 0.064 cents (vs 0.08 cents, so not much better)
g(x) = 0.999996375 + 0.692932426 x + 0.24015855 x^2 + 0.05669742 x^3 + 0.00998657 x^4

But for 5th order you have two extra degrees of freedom, so if you forgo the 3rd and 4th derivative matches you can get the error down a lot:
g(-194/1000) == f(-194/1000)
g(129/1000) == f(129/1000)
g(-1/2) == f(-1/2)
g(1/2) == 2 g(-1/2)
g'(1/2) == 2 g'(-1/2)
g''(1/2) == 2 g''(-1/2)

error +- 0.00043 cents (vs 0.01 cents, so 23 times better!)
g(x) = 1.000000237 + 0.69314655 x + 0.24021519 x^2 + 0.05550965 x^3 + 0.00969821 x^4 + 0.00132508 x^5
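
For reference, here's that 5th-order fit as a C function in Horner form (a sketch; the function name is invented, and it's valid for x in [-1/2, 1/2], extended by octave shifts as discussed above):

Code: Select all

// andy-cytomic's 5th-order coefficients from above, Horner form
double pow2_frac5(double x) {
    return 1.000000237 + x*(0.69314655 + x*(0.24021519
         + x*(0.05550965 + x*(0.00969821 + x*0.00132508))));
}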

Thanks again for sharing your ideas, really appreciated.
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote: Tue Feb 25, 2020 12:50 pm But for 5th order you have two extra degrees of freedom, so if you forgo the 3rd and 4th derivative matches you can get the error down a lot:
...........
error +- 0.00043 cents (vs 0.01 cents, so 23 times better!)
Hmmm, IIRC I was comparing my approximations to minimax and the error was always just marginally worse. But I don't really remember; maybe I didn't compare up to the 5th order, or maybe I made some mistakes. Thanks for digging further!

Post

Z1202 wrote: Tue Feb 25, 2020 1:10 pm Hmmm, IIRC I was comparing my approximations to minimax and the error was always just marginally worse. But I don't really remember; maybe I didn't compare up to the 5th order, or maybe I made some mistakes. Thanks for digging further!
It only really starts kicking in at 5th order or above, where your method of matching derivatives still applies with great results, but just up to the 2nd derivative. Using Estrin's scheme instead of Horner evaluation pretty much means that a power-of-two term count wins in terms of most efficient evaluation; 6 isn't a power of two, but 8 is, so I end up using 7th-order polynomials and keep the error down to +-0.0000007 cents with an almost unnoticeable CPU difference.

https://en.wikipedia.org/wiki/Estrin%27s_scheme
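
For illustration, a minimal scalar sketch of Estrin evaluation for a degree-7 polynomial (the coefficient array c[] is a placeholder, not the production values; SIMD versions do the same with intrinsics):

Code: Select all

// Estrin's scheme for p(x) = c[0] + c[1] x + ... + c[7] x^7.
// The four pair terms below are mutually independent, so they can be
// computed in parallel (e.g. as fused multiply-adds), unlike the
// serial dependency chain of Horner's rule.
double estrin7(double x, const double c[8]) {
    double x2 = x * x;
    double x4 = x2 * x2;
    double p01 = c[0] + c[1] * x;
    double p23 = c[2] + c[3] * x;
    double p45 = c[4] + c[5] * x;
    double p67 = c[6] + c[7] * x;
    return (p01 + p23 * x2) + (p45 + p67 * x2) * x4;
}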
The Glue, The Drop - www.cytomic.com

Post

Bit-shifting tricks are interesting, but with memory being cheap isn't it better to use a table/interpolation and avoid undefined behavior?

Post

KBonzo wrote: Tue Feb 25, 2020 6:26 pm Bit-shifting tricks are interesting, but with memory being cheap isn't it better to use a table/interpolation and avoid undefined behavior?
Sure, memory is cheap, but cache misses are expensive. There are intrinsics to do all the "tricks", so there is no undefined behaviour, and you can do them on f32x4 or wider SIMD as well, so you get at least four evaluations of 2^x for the price of one, using the same small set of coefficients for all of them, which stays in the cache easily.

The best way to know for sure which is quicker is to test both methods. In my testing, polynomial approximations, or even rational approximations, end up being quicker than table-based approaches, but I still use tables for pre-computing complicated shapes.
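
As a concrete, well-defined version of the range-splitting trick (a sketch, assuming the my_pow2x polynomial posted earlier in the thread for the fractional part; memcpy avoids type-punning UB):

Code: Select all

#include <math.h>
#include <stdint.h>
#include <string.h>

double my_pow2x(double x);  /* fractional-part polynomial from earlier */

/* 2^x = 2^round(x) * 2^frac with frac in [-1/2, 1/2]. The integer
   power is built directly in the double's exponent field; valid while
   round(x) stays in the normal exponent range (-1022..1023). */
double pow2_full(double x) {
    double xi = floor(x + 0.5);           /* nearest integer */
    double xf = x - xi;                   /* in [-1/2, 1/2] */
    uint64_t bits = (uint64_t)((int64_t)xi + 1023) << 52;
    double scale;
    memcpy(&scale, &bits, sizeof scale);  /* scale = 2^xi */
    return scale * my_pow2x(xf);
}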
The Glue, The Drop - www.cytomic.com

Post

andy-cytomic wrote: Tue Feb 25, 2020 3:32 pm It only really starts kicking in at 5th order or above, where your method of matching derivatives still applies with great results, but just up to the 2nd derivative. Using Estrin's scheme instead of Horner evaluation pretty much means that a power-of-two term count wins in terms of most efficient evaluation; 6 isn't a power of two, but 8 is, so I end up using 7th-order polynomials and keep the error down to +-0.0000007 cents with an almost unnoticeable CPU difference.
The question is, though, whether at an error of 7e-7 cents the continuity of derivatives is audible at all (I'd guess not). It also seems to be beyond 32-bit FP precision anyway. Therefore I'd consider simply using minimax in that range ;)

Post

Z1202 wrote: Wed Feb 26, 2020 6:41 am
andy-cytomic wrote: Tue Feb 25, 2020 3:32 pm It only really starts kicking in at 5th order or above, where your method of matching derivatives still applies with great results, but just up to the 2nd derivative. Using Estrin's scheme instead of Horner evaluation pretty much means that a power-of-two term count wins in terms of most efficient evaluation; 6 isn't a power of two, but 8 is, so I end up using 7th-order polynomials and keep the error down to +-0.0000007 cents with an almost unnoticeable CPU difference.
The question is, though, whether at an error of 7e-7 cents the continuity of derivatives is audible at all (I'd guess not). It also seems to be beyond 32-bit FP precision anyway. Therefore I'd consider simply using minimax in that range ;)
Yeah, for a 1/octave single-precision control signal, with the zero point somewhere around the middle of the hearing range, the precision 4-5 octaves up or down (which is about the audible range) gets you about 0.0005 cents worth of accuracy before you even exponentiate. That said, I would argue you probably don't need anywhere NEAR that good accuracy for typical tuning purposes. I used to have a table-based approximation with about 0.02 cents and never noticed any issues (even before I eventually added lerps to the thing).
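
(A quick back-of-envelope check of that figure, not from the original post: a single-precision value near 5 octaves lies in [4, 8), so its ulp is 2^(2-23) = 2^-21 ≈ 4.8e-7 octaves; at 1200 cents per octave that's roughly 0.0006 cents, in line with the 0.0005-cent estimate.)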

That said, the approximation I've been using lately is 2DaT's exp_mp (this one I think: viewtopic.php?p=7161124#p7161124) which is about 3(?) ulps for single precision.

Post

mystran wrote: Wed Feb 26, 2020 10:24 am That said, I would argue you probably don't need anywhere NEAR that good accuracy for typical tuning purposes. I used to have a table-based approximation with about 0.02 cents and never noticed any issues (even before I eventually added lerps to the thing).
The potentially sensitive areas would be detuned oscillator beating and converting audio-rate modulation signals, where subtle spectral differences can become audible. I'm not sure off the top of my head what precision range that requires.

Post

Z1202 wrote: Wed Feb 26, 2020 1:07 pm
mystran wrote: Wed Feb 26, 2020 10:24 am That said, I would argue you probably don't need anywhere NEAR that good accuracy for typical tuning purposes. I used to have a table-based approximation with about 0.02 cents and never noticed any issues (even before I eventually added lerps to the thing).
The potentially sensitive areas would be detuned oscillator beating and converting audio-rate modulation signals, where subtle spectral differences can become audible. I'm not sure off the top of my head what precision range that requires.
Beating of detuned oscillators (at higher frequencies), sure. Whether spectral differences would be noticeable is a trickier question, because it probably depends on how clean the rest of the signal path is.

Either way, in practice it's usually cheap enough to come within a few ulps of single precision that there probably isn't any good reason not to do so. I'm somewhat skeptical about whether continuity really matters at that point, though (and for something like Newton-Raphson, which requires continuity, it's probably faster to just improve the approximation until you can assume dexp(x)/dx = exp(x) and skip a separate evaluation of the derivative).

Post

mystran wrote: Wed Feb 26, 2020 10:24 am Yeah, for a 1/octave single-precision control signal, with the zero point somewhere around the middle of the hearing range, the precision 4-5 octaves up or down (which is about the audible range) gets you about 0.0005 cents worth of accuracy before you even exponentiate. That said, I would argue you probably don't need anywhere NEAR that good accuracy for typical tuning purposes. I used to have a table-based approximation with about 0.02 cents and never noticed any issues (even before I eventually added lerps to the thing).

That said, the approximation I've been using lately is 2DaT's exp_mp (this one I think: viewtopic.php?p=7161124#p7161124) which is about 3(?) ulps for single precision.
Well these are the coefficients for the linked approximation for 2^x in the range -1/2 to 1/2 (shown orange):
{1.000000119000000, 0.6931469440000000, 0.2402212024000000, 0.05550713092000000, 0.009675540961000000, 0.001327647245000000}

The above isn't even C0 continuous at the seam. If you want something with lower error, try this one, which is at least C0 continuous and has a more symmetric error (shown blue):
{1.000000088673463, 0.69314693211407, 0.24022037362574, 0.0555072548370, 0.0096798351988, 0.0013285658116}

relative error plots of the two:
[attachment: fit.png]

Edit: and here are the equations so anyone can tweak them:

Code: Select all

(* solve for the 5th-order coefficients; N[..., 16] displays them as decimals *)
g[x_] = a0 + a1 x + a2 x^2 + a3 x^3 + a4 x^4 + a5 x^5;
f[x_] = 2^x;
N[Solve[{g[-1/2] == f[-1/2], g[1/2] == 2 g[-1/2],
    g[-378/1000] == f[-378/1000], g[-157/1000] == f[-157/1000],
    g[110/1000] == f[110/1000], g[354/1000] == f[354/1000]},
    {a0, a1, a2, a3, a4, a5}], 16]
The Glue, The Drop - www.cytomic.com
