Does 10m * 0.1m equal 1m in C#?

The code snippet

decimal one1 = 10m * 0.1m;
decimal one2 = 10m / 10m;
Console.WriteLine($"{one1}, {one2}, {one1 == one2}");

produces the output:

1.0, 1, True

Why does the first number print with a decimal point while the second number does not? If the answer lies in the fact that the decimal type does not have the precision to fully represent 0.1, then why does the equality operator return true?

asked Feb 03 '26 by Phillip Ngan


1 Answer

Floating-point numbers are a complicated concept. The decimal type in particular is made of three separate parts that together store the information you would consider to be a number: a sign, a 96-bit integer coefficient (the mantissa), and a scaling factor (a power of ten from 0 to 28).
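
Those parts are exposed directly by one of the decimal constructors, so as a quick illustrative sketch, the two values from the question can be built by hand:

// new decimal(lo, mid, hi, isNegative, scale): the three ints form the
// 96-bit integer coefficient, and scale is the power of ten it is divided by.
decimal a = new decimal(10, 0, 0, false, 1);  // 10 / 10^1
decimal b = new decimal(1, 0, 0, false, 0);   //  1 / 10^0

Console.WriteLine($"{a}, {b}, {a == b}");     // 1.0, 1, True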

Additionally, how a compiler and a computer architecture treat a floating-point number is not obvious either. These types have some strange quirks in general: how they store precision, which numbers they can represent, and how the compiler and CPU do arithmetic with them.

However, the reason you are getting different output is down to what is actually stored for each number. The two values are not the same bits and bytes in memory: the same numeric value can be stored in multiple different ways, and can arrive there from different calculations (as you have shown).

Let's have a look:

decimal one1 = 10m * 0.1m;
decimal one2 = 10m / 10m;

// decimal.GetBits returns four ints: the low, mid and high words of the
// 96-bit coefficient, plus a flags word holding the sign and the scale.
int[] bits = decimal.GetBits(one1);
int[] bits2 = decimal.GetBits(one2);

Console.WriteLine("{0,31} {1,10:X8}{2,10:X8}{3,10:X8}{4,10:X8}", one1, bits[3], bits[2], bits[1], bits[0]);
Console.WriteLine("{0,31} {1,10:X8}{2,10:X8}{3,10:X8}{4,10:X8}", one2, bits2[3], bits2[2], bits2[1], bits2[0]);

Output

                        1.0   00010000  00000000  00000000  0000000A
                          1   00000000  00000000  00000000  00000001

As you can see, they have different binary layouts: a different coefficient (mantissa) and a different scaling factor that nevertheless represent the same numeric value. one1 is stored as the integer 10 with a scale of 1 (10 divided by 10^1 = 1.0), while one2 is stored as the integer 1 with a scale of 0.
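
To make that layout concrete, here is a minimal sketch (the DescribeDecimal helper is just for illustration) that pulls the scale out of the flags word returned by decimal.GetBits:

static void DescribeDecimal(decimal d)
{
    int[] parts = decimal.GetBits(d);

    // The scale (the power of ten the coefficient is divided by) is stored
    // in bits 16-23 of the flags word; the sign lives in bit 31.
    int scale = (parts[3] >> 16) & 0xFF;

    // For values this small the whole coefficient fits in the low word.
    Console.WriteLine($"{d}: coefficient = {parts[0]}, scale = {scale}");
}

DescribeDecimal(10m * 0.1m);   // 1.0: coefficient = 10, scale = 1  (10 / 10^1)
DescribeDecimal(10m / 10m);    // 1:   coefficient = 1,  scale = 0  (1 / 10^0)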

As for the extra 0 when calling ToString(): the scale is part of the stored value, so significant trailing zeros are preserved. A coefficient of 10 with a scale of 1 prints as 1.0, while a coefficient of 1 with a scale of 0 prints as 1.

Luckily the runtime is smart enough to tell the value apart from its representation: the == operator compares numeric values, not the raw bit patterns, which is why one1 == one2 returns True even though the two are stored differently.
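
As a quick check of that, continuing with one1 and one2 from above:

// The values compare equal even though their flags words (and therefore
// their stored scales) differ.
Console.WriteLine(one1 == one2);                                          // True
Console.WriteLine(decimal.GetBits(one1)[3] == decimal.GetBits(one2)[3]);  // False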

answered Feb 04 '26 by TheGeneral


