
Python vs C++ Precision

I am trying to reproduce a C++ high-precision calculation in pure Python, but I get a slight difference that I do not understand.

Python:

from decimal import *
getcontext().prec = 18
r = 0 + (((Decimal(0.95)-Decimal(1.0))**2)+(Decimal(0.00403)-Decimal(0.00063))**2).sqrt()
# r = Decimal('0.0501154666744709107')

C++:

#include <iostream>
#include <math.h>

int main()
{
    double zx2 = 0.95;
    double zx1 = 1.0;
    double zy2 = 0.00403;
    double zy1 = 0.00063;
    double r;
    r = 0.0 + sqrt((zx2-zx1)*(zx2-zx1)+(zy2-zy1)*(zy2-zy1));
    std::cout<<"r = " << r << " ****";

    return 0;
}
// r = 0.050115466674470907 ****

There is this 1 showing up near the end in Python but not in C++. Why? Changing the precision in Python does not change anything (I already tried), because the 1 appears before the rounding.

Python: 0.0501154666744709107 
C++   : 0.050115466674470907

Edit: I thought that Decimal would convert anything passed to it into a string in order to "recut" it, but the comment by juanpa.arrivillaga made me doubt that, and after checking the source code, it is indeed not the case! So I changed the code to use strings. Now the Python result matches the WolframAlpha value shared by Random Davis: link.
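For anyone hitting the same pitfall, the difference between the two constructor forms can be checked directly (a minimal sketch, not from the original post):

```python
from decimal import Decimal

# Decimal(float) captures the exact binary double, not the literal you typed
assert Decimal(0.95) != Decimal('0.95')

# Passing a string preserves the decimal digits exactly
assert str(Decimal('0.95')) == '0.95'

# The float 0.95 is actually stored as a nearby binary fraction
print(Decimal(0.95))
```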

asked Oct 25 '25 by Taknok

2 Answers

The origin of the discrepancy is that Python's Decimal follows the more modern IBM General Decimal Arithmetic Specification, computing in decimal at whatever precision the context sets.

C++, however, also offers more precision than a plain double: most implementations expose an 80-bit "extended precision" format through the long double type.

For reference, standard IEEE-754 double-precision floats carry 53 bits of significand precision.
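This can be verified from Python, whose float type is the same IEEE-754 double (a quick check of my own, not part of the original answer):

```python
import sys
from decimal import Decimal

# CPython floats are IEEE-754 doubles: 53 significand bits
assert sys.float_info.mant_dig == 53

# Consequence: 0.95 cannot be stored exactly; the nearest double is used,
# which here is slightly below 0.95
assert Decimal(0.95) < Decimal('0.95')
print(Decimal(0.95))  # the exact value the double actually holds
```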

Below is the C++ example from the question, refactored to use long double:

#include <iostream>
#include <cmath>
#include <iomanip>

int main()
{
    long double zx2 = 0.95;
    long double zx1 = 1.0;
    long double zy2 = 0.00403;
    long double zy1 = 0.00063;
    long double r;
    r = std::sqrt((zx2 - zx1) * (zx2 - zx1) + (zy2 - zy1) * (zy2 - zy1));
    std::cout << std::setprecision(25) << "r = " << r << " ****";  // 25 significant digits
    // prints "r = 0.05011546667447091067728042 ****"
    return 0;
}
answered Oct 27 '25 by Giogre


As was pointed out by @juanpa.arrivillaga, in your Python script you are passing floats to the Decimal constructors. That defeats the entire purpose of Decimal: the float literal is first converted to the nearest binary double, and it is that binary value, not the digits you actually typed, that reaches the constructor.

So, the solution is simply to quote everything, so that Decimal sees exactly the digits you typed:

from decimal import *
getcontext().prec = 30 #increased precision for demo purposes
r = 0 + (((Decimal('0.95')-Decimal('1.0'))**Decimal('2'))+(Decimal('0.00403')-Decimal('0.00063'))**Decimal('2')).sqrt()
print(r)

Output:

0.0501154666744708663897193971169

Versus Wolfram Alpha:

0.0501154666744708663897193971168629...

So, clearly the output here is correct, and more accurate than either the C++ result or the float-based Python version. The C++ computation starts from inputs that already carry binary representation error (0.95, 0.00403 and 0.00063 have no exact double representation), and each intermediate operation adds its own rounding.
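As a cross-check (my own sketch, not part of the answer): doing the same arithmetic with plain Python floats, which are the same IEEE-754 doubles C++ uses, lands within about 1e-16 of the string-based Decimal result, exactly the size of error you would expect from 53-bit inputs:

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 30

# Plain doubles, like the C++ version
r_float = math.sqrt((0.95 - 1.0)**2 + (0.00403 - 0.00063)**2)

# Exact decimal inputs, like the corrected Python version
r_dec = (((Decimal('0.95') - Decimal('1.0'))**2
          + (Decimal('0.00403') - Decimal('0.00063'))**2).sqrt())

# The two agree to roughly double precision, but not exactly
assert Decimal(r_float) != r_dec
assert abs(Decimal(r_float) - r_dec) < Decimal('1e-15')
```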

answered Oct 27 '25 by Random Davis



Donate For Us

If you love us? You can donate to us via Paypal or buy me a coffee so we can maintain and grow! Thank you!