I wrote a program that approximates the base of natural logarithms (known as e in mathematics) using the well-known limit:
e = (1 + 1.0/n) ** n as n approaches infinity
The code is:
def e_formula(lim):
    n = lim
    e = (1 + 1.0/n) ** n
    return e
I set up a test that iterates n from 10**1 to 10**99:
if __name__ == "__main__":
    for i in range(1, 100):
        print e_formula(10**i)
However, as the output below shows, the results start to blow up around n = 10**11.
Actual results from shell:
2.5937424601
2.70481382942
2.71692393224
2.71814592682
2.71826823719
2.7182804691
2.71828169413
2.71828179835
2.71828205201
2.71828205323
2.71828205336
2.71852349604
2.71611003409
2.71611003409
3.03503520655
1.0
I am looking for the reason for this: is it because the result exceeds the floating-point limit on a 32-bit machine, or because of the way Python itself computes floating-point numbers? I am not looking for a better solution; I just want to understand why it blows up.
This is simply due to the limited precision of floating-point numbers: a Python float is an IEEE 754 double, which carries only about 15-16 significant decimal digits.
When you compute 1 + very_small_number, most of the digits of very_small_number are rounded away at that step, because they fall beyond the last significant digit the double can hold near 1.0.
The ** n then amplifies that tiny rounding error in the base by roughly a factor of n, so by n = 10**11 the error is visible in the printed digits, and once n reaches 10**16, 1.0/n is so small that 1 + 1.0/n rounds to exactly 1.0, which is why the last result collapses to 1.0.
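A minimal sketch of where the digits go (assuming a standard CPython float, i.e. an IEEE 754 double with roughly 16 significant decimal digits; print() with parentheses is used so it runs under both Python 2 and 3):

# Near 1.0 a double resolves roughly 16 significant decimal digits.
n = 10**11
print(repr(1 + 1.0/n))       # 1.00000000001 -- eleven digits are spent on the
                             # leading 1.0000000000, so only a handful of digits
                             # of 1/n survive, and the stored base already
                             # carries a rounding error of order 1e-16

# That relative error in the base is amplified by roughly a factor of n when
# the base is raised to the n-th power, which is why the printed results start
# to drift around n = 10**11 and get worse as n grows.

n = 10**16
print(1 + 1.0/n == 1.0)      # True: 1/n is below half the gap (2**-52) between
                             # 1.0 and the next representable double, so the
                             # addition rounds to exactly 1.0
print((1 + 1.0/n) ** n)      # 1.0 ** n == 1.0, matching the last output line

In other words, the breakdown is not an overflow on a 32-bit machine; it is cancellation and rounding inherent to double-precision floats, regardless of platform.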