I have the following Python code:
In [1]: import decimal
In [2]: decimal.getcontext().prec = 80
In [3]: (1-decimal.Decimal('0.002'))**5
Out[3]: Decimal('0.990039920079968')
Shouldn't it match 0.99003992007996799440405766290496103465557098388671875, according to this WolframAlpha query: http://www.wolframalpha.com/input/?i=SetPrecision%5B%281+-+0.002%29%5E5%2C+80%5D ?
Here's what's happening: because the input SetPrecision[(1 - 0.002)^5, 80] looks like Mathematica syntax, WolframAlpha interprets it as Mathematica source code and evaluates it as such. In Mathematica, as others have surmised in other answers, 0.002 is a machine-precision floating-point literal, so it is not represented exactly and roundoff error ensues. Finally, SetPrecision casts the resulting machine-precision value to the nearest 80-precision value, which merely exposes the roundoff error to 80 digits rather than eliminating it.
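You can see both effects from Python itself. The following sketch uses the standard decimal module: exact decimal arithmetic gives the short answer from the question, while converting the binary-float result to Decimal exposes a long run of extra digits of the same kind WolframAlpha printed (the exact digits may differ in the last place depending on how the float power is computed).

```python
from decimal import Decimal

# Exact decimal arithmetic: 0.998**5 terminates, so this is the true value.
exact = (1 - Decimal('0.002'))**5
print(exact)  # 0.990039920079968

# Binary floating point: 0.002 (and hence 0.998) is not exactly
# representable as a double, so roundoff error creeps in. Converting the
# float result to Decimal exposes its full binary expansion, a long
# string of digits much like the one WolframAlpha printed.
machine = Decimal(0.998 ** 5)
print(machine)
```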
To get around this, you have a couple of options, all of which amount to giving WolframAlpha exact or arbitrary-precision input instead of the machine-precision literal 0.002.
Finally, I want to point out that in Mathematica, and by extension in a WolframAlpha query consisting of Mathematica code, you usually want N (documentation) rather than SetPrecision. They are often similar (identical in this case), but there is a subtle difference:
N works slightly harder, but it gets you the right number of correct digits (assuming the input is sufficiently precise), whereas SetPrecision simply stamps the requested precision onto whatever value it is given.
So my final suggestion for using WolframAlpha to do this calculation via Mathematica code is N[(1 - 2*^-3)^5, 80].
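Back in Python, the exact-rational trick that 2*^-3 performs in Mathematica can be mirrored with the standard fractions module. This is an illustrative cross-check under the same idea, not something the question requires:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

# Exact rational arithmetic: Fraction(2, 1000) plays the role of
# Mathematica's exact 2*^-3, so no roundoff occurs anywhere.
r = (1 - Fraction(2, 1000)) ** 5

# Convert the exact fraction to Decimal at 80 digits of precision.
# The expansion terminates, so the result is exact.
getcontext().prec = 80
d = Decimal(r.numerator) / Decimal(r.denominator)
print(d)  # 0.990039920079968
```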