To do a linear interpolation between two variables a and b, given a fraction f, I'm currently using this code:
float lerp(float a, float b, float f) { return (a * (1.0 - f)) + (b * f); }
I think there's probably a more efficient way of doing it. I'm using a microcontroller without an FPU, so floating point operations are done in software. They are reasonably fast, but it's still something like 100 cycles to add or multiply.
Any suggestions?
n.b. For the sake of clarity in the equation in the code above, we can omit specifying 1.0 as an explicit floating-point literal.
Linear interpolation is a method of curve fitting that uses linear polynomials to construct new data points within the range of a discrete set of known data points. Conceptually, a straight line is drawn between two adjacent known points, and an interpolated value is any point along that line.
The formula is y = y1 + ((x - x1) / (x2 - x1)) * (y2 - y1), where x is the known value, y is the unknown value, x1 and y1 are the coordinates of the known point below x, and x2 and y2 are the coordinates of the known point above x.
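As a minimal sketch applying that formula (the sample points and the helper name interpolate are arbitrary, chosen only for illustration):

#include <stdio.h>

/* Interpolate y at x, given two known points (x1, y1) and (x2, y2). */
float interpolate(float x, float x1, float y1, float x2, float y2)
{
    return y1 + ((x - x1) / (x2 - x1)) * (y2 - y1);
}

int main(void)
{
    /* Known points (2, 10) and (6, 30); estimate y at x = 3. */
    printf("%f\n", interpolate(3.0f, 2.0f, 10.0f, 6.0f, 30.0f));  /* prints 15.000000 */
    return 0;
}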
Disregarding differences in precision, that expression is equivalent to
float lerp(float a, float b, float f) { return a + f * (b - a); }
Algebraically, a * (1.0 - f) + b * f = a - a * f + b * f = a + f * (b - a). That's 2 additions/subtractions and 1 multiplication instead of 2 additions/subtractions and 2 multiplications.
Presuming floating-point math is available, the OP's algorithm is a good one and is always superior to the alternative a + f * (b - a) due to precision loss when a and b significantly differ in magnitude.
For example:
// OP's algorithm
float lint1 (float a, float b, float f)
{
    return (a * (1.0f - f)) + (b * f);
}

// Algebraically simplified algorithm
float lint2 (float a, float b, float f)
{
    return a + f * (b - a);
}
In that example, presuming 32-bit floats, lint1(1.0e20, 1.0, 1.0) will correctly return 1.0, whereas lint2 will incorrectly return 0.0.
The majority of precision loss is in the addition and subtraction operators when the operands differ significantly in magnitude. In the above case, the culprits are the subtraction in b - a and the addition in a + f * (b - a). The OP's algorithm does not suffer from this because the components are fully multiplied before any addition takes place.
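As a minimal sketch of where the loss occurs (assuming IEEE 754 32-bit floats, as on most targets), the subtraction b - a already rounds away the smaller operand before f is ever applied:

#include <stdio.h>

int main(void)
{
    const float a = 1.0e20f;
    const float b = 1.0f;

    /* Near 1e20 the spacing between adjacent floats is on the order of 1e13,
       so the nearest representable result of 1.0 - 1.0e20 is -1.0e20; the
       contribution of b is lost before the multiplication by f happens. */
    float d = b - a;
    printf("b - a           = %g\n", d);             /* -1e+20 */
    printf("a + 1 * (b - a) = %g\n", a + 1.0f * d);  /* 0, not 1 */
    return 0;
}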
For the a=1e20, b=1 case, here is an example of differing results. Test program:
#include <stdio.h>
#include <math.h>

float lint1 (float a, float b, float f)
{
    return (a * (1.0f - f)) + (b * f);
}

float lint2 (float a, float b, float f)
{
    return a + f * (b - a);
}

int main ()
{
    const float a = 1.0e20;
    const float b = 1.0;
    int n;

    for (n = 0; n <= 1024; ++n) {
        float f = (float)n / 1024.0f;
        float p1 = lint1(a, b, f);
        float p2 = lint2(a, b, f);

        if (p1 != p2) {
            printf("%i %.6f %f %f %.6e\n", n, f, p1, p2, p2 - p1);
        }
    }

    return 0;
}
Output, slightly adjusted for formatting:
f        lint1                 lint2                 lint2-lint1
0.828125 17187500894208393216  17187499794696765440  -1.099512e+12
0.890625 10937500768952909824  10937499669441282048  -1.099512e+12
0.914062  8593750447104196608   8593749897348382720  -5.497558e+11
0.945312  5468750384476454912   5468749834720641024  -5.497558e+11
0.957031  4296875223552098304   4296874948674191360  -2.748779e+11
0.972656  2734375192238227456   2734374917360320512  -2.748779e+11
0.978516  2148437611776049152   2148437474337095680  -1.374390e+11
0.986328  1367187596119113728   1367187458680160256  -1.374390e+11
0.989258  1074218805888024576   1074218737168547840  -6.871948e+10
0.993164   683593798059556864    683593729340080128  -6.871948e+10
1.000000                     1                     0  -1.000000e+00