which kind of scale/base is that?

DSP, Plugin and Host development discussion.

Post

hi all,

i'd like to model my lfo rate param like:
0.0f (min value) => 0.01 hz
0.5f (default value) => 5.01 hz
1.0f (max value) => 20.0 hz

but i'm not able to get those values.
here's my attempt:

Code:

    // needs #include <cmath> and #include <iostream>

    // assumed helper (not shown in the post): linear remap of v from [inMin, inMax] to [outMin, outMax]
    auto rescale = [](float v, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (outMax - outMin) * (v - inMin) / (inMax - inMin);
    };

    const float base = 20.0f;

    float min = 0.01f;
    float vMin = std::log(min) / std::log(base);
    std::cout << vMin << " | " << std::pow(base, vMin) << std::endl;

    float max = 20.0f;
    float vMax = std::log(max) / std::log(base);
    std::cout << vMax << " | " << std::pow(base, vMax) << std::endl;

    float middle = rescale(0.5f, 0.0f, 1.0f, vMin, vMax); // expected 5.01 Hz
    std::cout << middle << " | " << std::pow(base, middle) << std::endl;
but:
1. the value at the middle position is 0.447214 (not 5.01)
2. the scale is not [0.0, 1.0] anymore

i think i'm overcomplicating such an easy thing...

EDIT: the same scale should be applied to this kind of range, which looks pretty much the same:
0 => 0
0.5 => 5.0
1 => 20

or

0 => 0.01
0.5 => 2.51
1 => 10

Post

You could use a rational mapper function
Rate = (a + b*x)/(1 + c* x)
Where x = 0…1 is your knob position and a, b, c are coefficients which are determined by your min/default/max values at x = 0/0.5/1.
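
For example, a minimal sketch of solving for a, b, c from those three constraints (the struct and function names below are mine, just for illustration):

Code:

    // solve r(x) = (a + b*x) / (1 + c*x) so that
    // r(0) = fMin, r(0.5) = fMid, r(1) = fMax
    #include <iostream>

    struct RationalMap { float a, b, c; };

    RationalMap solveRational(float fMin, float fMid, float fMax)
    {
        float a = fMin;                                       // from r(0) = a
        float c = (2.0f * fMid - a - fMax) / (fMax - fMid);   // from r(0.5) and r(1)
        float b = (fMax - a) + fMax * c;                      // from r(1) = (a + b)/(1 + c)
        return { a, b, c };
    }

    int main()
    {
        RationalMap m = solveRational(0.01f, 5.01f, 20.0f);
        float xs[] = { 0.0f, 0.5f, 1.0f };
        for (float x : xs)
            std::cout << x << " => " << (m.a + m.b * x) / (1.0f + m.c * x) << "\n";
        // prints roughly 0.01, 5.01, 20
    }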

Post

martinvicanek wrote: Fri Mar 11, 2022 11:49 am You could use a rational mapper function
Rate = (a + b*x)/(1 + c* x)
Where x = 0…1 is your knob position and a, b, c are coefficients which are determined by your min/default/max values at x = 0/0.5/1.
not sure about this function: http://fooplot.com/#W3sidHlwZSI6MCwiZXE ... IzMCJdfV0-

it seems totally different, doesn't it?

anyway, i simply tried with pow2:

Code:

    // take the square roots of the endpoints, interpolate linearly, then square again
    float rMin = std::pow(0.01f, 1.0f / 2.0f);
    std::cout << std::pow(rMin, 2.0f) << std::endl;

    float rMax = std::pow(20.0f, 1.0f / 2.0f);
    std::cout << std::pow(rMax, 2.0f) << std::endl;

    float rMed = rMin + (rMax - rMin) / 2.0f;
    std::cout << std::pow(rMed, 2.0f) << std::endl;
it prints:

0.01
20
5.22611

so I think the trick is to "blend" the x^2 function to get 5.01 instead of 5.22611? some multiplication factor?

with this scale (i.e. basic x ^ 2) it seems that this range is satisfied:
0 => 0
0.5 => 5.0
1 => 20

not sure why it isn't when the min value changes...

Post

Derozer wrote: Fri Mar 11, 2022 1:09 pm not sure about this function: http://fooplot.com/#W3sidHlwZSI6MCwiZXE ... IzMCJdfV0-

it seems totally different, doesn't it?
Indeed, but that is not the function I suggested. You have to use the right coefficients for your data:
Rate = (0.01 + 6.47*x)/(1 - 0.676*x)

Edit: This is not quite correct, as juha_p points out below, sorry. It should read
Rate = (0.01 + 6.66*x)/(1 - 0.666*x)
Or use MrBeagleton's convenient form below.
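As a quick sanity check: at x = 0.5 this gives (0.01 + 3.33)/(1 - 0.333) ≈ 5.01, and at x = 1 it gives (0.01 + 6.66)/(1 - 0.666) ≈ 20.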

Post

This is what I use. The main function is undefined for the linear case, so just switch to F2 when d==1.

https://www.desmos.com/calculator/z8xsvw2qub


Post

Here's the rational function above in a different form to accept the 0/0.5/1 values directly.

https://www.desmos.com/calculator/zradh1lk8q


Post

:tu: for the above answer.

I like to solve this type of equation with LibreOffice Calc's trend line feature (in this case, an xy scatter chart with a polynomial trend line):

(attached chart: LFO.png)

which comes out as 0.01 + x * (0.01 + 19.98 * x) (Horner form), and which is essentially k*x^2 + 0.01, where k is your max frequency (20 in this case) and the mid value is max/4 + 0.01 (5.01 in this case), if you don't worry too much about going a little above the max value (i.e. + 0.01).

Range:

0 => 0.01
0.5 => 2.51
1 => 10

should work with the same kind of polynomial, but you have to take into account the change in max value (i.e. 19.98 - 10 = 9.98 .... 0.01 + x * (0.01 + 9.98 * x)), and the ratio between the max and mid values stays the same.

https://www.desmos.com/calculator/m9aiobfejd
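
A minimal sketch of that mapping in code (the function name and default argument are mine):

Code:

    // k*x^2 + min mapping described above; x is the knob position in [0, 1],
    // k is the max frequency (the result overshoots the max by +0.01 as noted)
    float lfoRateHz(float x, float k, float minHz = 0.01f)
    {
        return k * x * x + minHz;
    }

    // lfoRateHz(0.5f, 20.0f) -> 5.01   lfoRateHz(1.0f, 20.0f) -> 20.01
    // lfoRateHz(0.5f, 10.0f) -> 2.51   lfoRateHz(1.0f, 10.0f) -> 10.01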
martinvicanek wrote: Fri Mar 11, 2022 7:29 pm ...
You have to use the right coefficients for your data:
Rate = (0.01 + 6.47*x)/(1 - 0.676*x)
This function gives the wrong answer for x = 0.5, so it's not what the OP wanted:

https://www.desmos.com/calculator/py1ummlqyy

Post

thanks for the above answers, it seems you gave me many ways to do the job :)

it seems they "emulate" some kind of exponential formula within the 0/1 range.

will 2 muls + 2 adds + 1 div (in the case of MrBeagleton's or martin's form) be faster than a pow calculation on the CPU?

Post

Derozer wrote: Sat Mar 12, 2022 1:42 pm ...
will 2 muls + 2 adds + 1 div (in the case of MrBeagleton's or martin's form) be faster than a pow calculation on the CPU?
If it's x^2 then you can use x*x instead, since pow(x, 2) would be a function call and therefore a little bit slower than a mul.

In the case of polynomials, you can use a formulation like Horner form to get the powers down, as in my polynomial equations (0.01 + x * (0.01 + 19.98 * x) and k*x*x + 0.01).

https://godbolt.org/z/c5Khhjbo5
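
For example, the same polynomial written both ways (a sketch; the function names are mine):

Code:

    #include <cmath>

    // pow()-based evaluation of 0.01 + 0.01*x + 19.98*x^2
    float withPow(float x)    { return 0.01f + 0.01f * x + 19.98f * std::pow(x, 2.0f); }

    // Horner form: two multiplies and two adds, no pow() call
    float withHorner(float x) { return 0.01f + x * (0.01f + 19.98f * x); }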

Post

Derozer wrote: Sat Mar 12, 2022 1:42 pm will 2 muls + 2 adds + 1 div (in the case of MrBeagleton's or martin's form) be faster than a pow calculation on the CPU?
As a rule of thumb, division is typically on the order of 10-20 cycles (single precision is faster, but it can also vary depending on actual values I think), whereas log(), exp() and pow() are all library functions, so it varies by library, but they could be up to an order of magnitude slower (though much faster approximations are possible if one is willing to ignore all corner cases; the standard library generally can't do this).

Post

Derozer wrote: Sat Mar 12, 2022 1:42 pm will 2 muls + 2 adds + 1 div (in the case of MrBeagleton's or martin's form) be faster than a pow calculation on the CPU?
Library pow is exceptionally slow on every platform. Avoid if possible. Even using exp(log(a)*b) instead of pow(a,b) is faster and accuracy difference is negligible.
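
A minimal sketch of that rewrite (the wrapper name is mine; it is valid for a > 0, which lets it skip the corner cases library pow() has to handle):

Code:

    #include <cmath>

    // exp(log(a)*b) == pow(a, b) for a > 0
    float fastPow(float a, float b)
    {
        return std::exp(std::log(a) * b);
    }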

Post

2DaT wrote: Sun Mar 13, 2022 6:22 pm Library pow is exceptionally slow on every platform. Avoid if possible. Even using exp(log(a)*b) instead of pow(a,b) is faster and accuracy difference is negligible.
Hmm... according to Compiler Explorer (link in my earlier post), std::pow(base, 2) isn't too slow IMO (or is there something not working well again? though I get similar results here).

EDIT: So, by a benchmark done in the Compiler Explorer environment, it looks like k*std::pow(x, 2.0f) + 0.01f is a bit faster than k*x*x + 0.01f.

Post

juha_p wrote: Sun Mar 13, 2022 6:55 pm
2DaT wrote: Sun Mar 13, 2022 6:22 pm Library pow is exceptionally slow on every platform. Avoid if possible. Even using exp(log(a)*b) instead of pow(a,b) is faster and accuracy difference is negligible.
Hmm... by Compiler Explorer (link in my earlier post) std::pow(base,2) isn't too slow IMO (or is there something not working well again (though, I get similar results here)).
Looks like a special case optimization where the compilers (at least clang and gcc) rewrite as a multiply. It is possible (no idea) that there are other similar special cases (eg. other small integer and/or power-of-two exponents), but this is very much not representative (at all) of the general case, which ... is not great (to put it mildly).

Usually in performance-sensitive code, given pow(a,b), if b is a small integer I'd just expand it to multiplications manually, and otherwise rewrite it as exp(log(a)*b), which allows the constant log(a) to be evaluated at compile time (quite common in practice) and/or exp() and log() to be replaced by faster approximations. Note that the actual exponential base is not important (as long as the logarithm matches), so rewriting as exp2(log2(a)*b) is equally valid (it often saves two multiplications when using approximations that work in base 2 internally).
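
A sketch of the two rewrites described above (names are mine):

Code:

    #include <cmath>

    // small integer exponent: expand to multiplies instead of calling pow()
    inline float pow4(float a) { float a2 = a * a; return a2 * a2; }

    // general case in base 2: log2(a) folds to a constant when a is known at
    // compile time, and exp2()/log2() can later be swapped for fast approximations
    inline float powViaExp2(float a, float b)
    {
        return std::exp2(std::log2(a) * b);
    }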

Post

mystran wrote: Sun Mar 13, 2022 7:31 pm
Looks like a special case optimization where the compilers (at least clang and gcc) rewrite as a multiply. It is possible (no idea) that there are other similar special cases (eg. other small integer and/or power-of-two exponents), but this is very much not representative (at all) of the general case, which ... is not great (to put it mildly).

...
In the case of std::pow(base, int exponent), it looks like the GCC compiler with -O3 -ffast-math enabled gives quite good performance even for large exponent values. It also looks like if the integer exponent is written as a float literal (i.e. pow(base, 2.0f)) then it gives excellent performance (because there's no need for a data type conversion?).

Post

juha_p wrote: Sun Mar 13, 2022 9:50 pm In the case of std::pow(base, int exponent), it looks like the GCC compiler with -O3 -ffast-math enabled gives quite good performance even for large exponent values. It also looks like if the integer exponent is written as a float literal (i.e. pow(base, 2.0f)) then it gives excellent performance (because there's no need for a data type conversion?).
Data type conversions are not that expensive; they are not "free", but we're talking about something like 4 cycles, which is completely negligible when it comes to a function like pow(). However, in this case there would not be any need to convert any data types anyway.

Something that people often seem to completely miss is that a modern compiler does not compile or optimize source code. Rather it parses the source code and breaks it down into elementary operations, then optimizes those elementary operations.

The most elementary optimization in a modern compiler is constant folding/propagation. If you have an integer literal (=constant) in an expression (including as an argument to a function) where the language rules state that this integer will be converted to a float, then the conversion is a constant operation. Such a constant operation is basically guaranteed to be evaluated at compilation time, so the integer value never makes it to the binary, rather it's already converted to float before code is generated.

Now, I'd imagine the compiler might still special-case optimize small integers even if the data type itself is float. As long as it's an exact small integer, it can be represented and compared exactly and at least with fast math I don't see any obvious reasons you couldn't special case this. You could try using 2.0001f or something and see what happens with performance.
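
A rough way to try that (a sketch, not a rigorous benchmark; the exponent is kept as a literal so any special-case handling of pow(x, 2.0f) can still kick in):

Code:

    #include <chrono>
    #include <cmath>
    #include <cstdio>

    template <typename F>
    long long runUs(F f)
    {
        auto t0 = std::chrono::steady_clock::now();
        f();
        auto t1 = std::chrono::steady_clock::now();
        return std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    }

    int main()
    {
        volatile float sink = 0.0f;
        long long tExact = runUs([&] { for (int i = 1; i <= 10000000; ++i) sink = std::pow((float)i, 2.0f); });
        long long tNear  = runUs([&] { for (int i = 1; i <= 10000000; ++i) sink = std::pow((float)i, 2.0001f); });
        std::printf("pow(x, 2.0f): %lld us, pow(x, 2.0001f): %lld us\n", tExact, tNear);
    }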

The bottom line though is that unless you really understand how compilers work, then trying to reason about why something is or isn't expensive is kinda futile. The general case of pow(a,b) is typically very expensive, some special cases might not be, but the reason for the latter is almost certainly an explicit optimization in the compiler (or library) rather than anything with a simple "natural" reason.
