 

Why not use Double or Float to represent currency?

I've always been told never to represent money with double or float types, and this time I pose the question to you: why?

I'm sure there is a very good reason, I simply do not know what it is.

asked Sep 16 '10 by Fran Fitzpatrick


People also ask

Why should you never use float and double for monetary values?

Most currency amounts (in dollars and cents) cannot be stored exactly in memory as binary floating-point values. For example, if we want to store 0.1 dollars (10 cents), a float or double cannot store it exactly.
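One way to see this in Java is to pass a double literal to the BigDecimal(double) constructor, which preserves the exact binary value of its argument; a minimal sketch:

import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // new BigDecimal(double) shows the exact value the double stores
        System.out.println(new BigDecimal(0.1));
        // prints 0.1000000000000000055511151231257827021181583404541015625
    }
}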

Should I use float or double for currency?

float and double are a poor fit for the financial world (and even for military use); never use them for monetary calculations. If precision is one of your requirements, use BigDecimal instead.

Which data type is best for currency amounts?

The best data type to use for currency in C# is decimal. The decimal type is a 128-bit data type suitable for financial and monetary calculations. It can represent values ranging from 1.0 * 10^-28 to approximately 7.9 * 10^28, with 28-29 significant digits.

Why is double not accurate?

Doubles are not exact because there are infinitely many possible real numbers but only a finite number of bits to represent them.


2 Answers

From Bloch, J., Effective Java (2nd ed., Item 48; 3rd ed., Item 60):

The float and double types are particularly ill-suited for monetary calculations because it is impossible to represent 0.1 (or any other negative power of ten) as a float or double exactly.

For example, suppose you have $1.03 and you spend 42¢. How much money do you have left?

System.out.println(1.03 - .42); 

prints out 0.6100000000000001.

The right way to solve this problem is to use BigDecimal, int or long for monetary calculations.

Though BigDecimal has some caveats (please see the currently accepted answer).
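For illustration, here is a minimal sketch of the subtraction above done with BigDecimal. Note the String constructor: new BigDecimal(1.03) would inherit the binary rounding error already present in the double literal.

import java.math.BigDecimal;

public class Balance {
    public static void main(String[] args) {
        // new BigDecimal("1.03") is exact; new BigDecimal(1.03) would not be
        BigDecimal balance = new BigDecimal("1.03");
        BigDecimal spent = new BigDecimal("0.42");
        System.out.println(balance.subtract(spent)); // prints 0.61
    }
}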

answered Oct 20 '22 by dogbane


Because floats and doubles cannot accurately represent the base-10 multiples that we use for money. This isn't just a Java issue; it affects any programming language that uses base-2 floating-point types.

In base 10, you can write 10.25 as 1025 * 10^-2 (an integer times a power of ten). IEEE-754 floating-point numbers are different, but a very simple way to think about them is to multiply an integer by a power of two instead. For instance, 164 * 2^-4 (an integer times a power of two) is also equal to 10.25. That's not how the numbers are represented in memory, but the mathematical implications are the same.

Even in base 10, this notation cannot accurately represent most simple fractions. For instance, you can't represent 1/3: its decimal representation repeats (0.3333...), so there is no finite integer that you can multiply by a power of 10 to get 1/3. You could settle on a long sequence of 3's and a small exponent, like 3333333333 * 10^-10, but it is not exact: if you multiply it by 3, you won't get 1.

However, for the purpose of counting money, at least for countries whose money is valued within an order of magnitude of the US dollar, usually all you need is to be able to store multiples of 10^-2, so it doesn't really matter that 1/3 can't be represented.

The problem with floats and doubles is that the vast majority of money-like numbers don't have an exact representation as an integer times a power of 2. In fact, the only multiples of 0.01 between 0 and 1 (which are significant when dealing with money because they're integer cents) that can be represented exactly as an IEEE-754 binary floating-point number are 0, 0.25, 0.5, 0.75 and 1. All the others are off by a small amount. As an analogy to the 0.333333 example, if you take the floating-point value for 0.01 and you multiply it by 10, you won't get 0.1. Instead you will get something like 0.099999999786...
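That claim is easy to verify with a short loop; a minimal sketch that compares each double against the exact fraction it is supposed to represent:

import java.math.BigDecimal;

public class ExactCents {
    public static void main(String[] args) {
        for (int cents = 0; cents <= 100; cents++) {
            double d = cents / 100.0;
            // new BigDecimal(d) is the exact binary value stored in d;
            // BigDecimal.valueOf(cents, 2) is the exact fraction cents/100
            if (new BigDecimal(d).compareTo(BigDecimal.valueOf(cents, 2)) == 0) {
                System.out.println(d); // prints 0.0, 0.25, 0.5, 0.75, 1.0
            }
        }
    }
}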

Representing money as a double or float will probably look good at first as the software rounds off the tiny errors, but as you perform more additions, subtractions, multiplications and divisions on inexact numbers, errors will compound and you'll end up with values that are visibly not accurate. This makes floats and doubles inadequate for dealing with money, where perfect accuracy for multiples of base 10 powers is required.
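A classic demonstration of this drift in double arithmetic: adding ten dimes doesn't quite make a dollar.

public class Drift {
    public static void main(String[] args) {
        double total = 0.0;
        for (int i = 0; i < 10; i++) {
            total += 0.1; // add one dime, ten times
        }
        System.out.println(total); // prints 0.9999999999999999, not 1.0
    }
}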

A solution that works in just about any language is to use integers instead, and count cents. For instance, 1025 would be $10.25. Several languages also have built-in types to deal with money. Among others, Java has the BigDecimal class, and C# has the decimal type.
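A minimal sketch of the integer-cents approach (the variable names are illustrative):

public class Cents {
    public static void main(String[] args) {
        long balanceCents = 103; // $1.03
        long spentCents = 42;    // $0.42
        long remainingCents = balanceCents - spentCents;
        // integer arithmetic is exact, so this is always 61
        System.out.printf("$%d.%02d%n", remainingCents / 100, remainingCents % 100);
        // prints $0.61
    }
}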

answered Oct 20 '22 by zneak