I just found a strange problem in the programming language Ruby. It isn't a big problem, but I just can't understand why it happens. I'd be interested if someone knows the reason for it.
In Ruby you can write 0 or 00; it doesn't matter, both come to the same result.
If you run 0 === 00 you also get true, meaning that the two values are exactly the same.
0.0 also equals 0, so logically 00.0 should also equal 0.0. The problem is that if you try to use the number 00.0, you just get an error. If you run, for example:
a = 00.0
You get this error:
syntax error, unexpected tINTEGER
Of course I know this is a small problem, but as I said, I'd like to understand why the interpreter doesn't treat 00.0 the same as 0.0.
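Here is roughly what I see in irb: the comparisons all succeed, and only the literal itself is rejected.

0 == 00      # => true, 00 is just another way of writing the integer 0
0 === 00     # => true, same result with the case-equality operator
0.0 == 0     # => true, the Float 0.0 equals the Integer 0
# a = 00.0   # raises a SyntaxError as soon as the line is parsed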
The thing is that when Ruby's parser sees an integer literal of more than one digit that starts with the character 0, it parses it as an octal integer. Thus, when it parses 00, that is 0 in octal, which is the same as 0 in decimal. But if it then finds a ., the literal is no longer a valid integer, and that is the error it shows.
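A minimal sketch of that rule in irb (assuming a standard Ruby interpreter): the octal literals evaluate as expected, and wrapping the broken literal in eval lets you catch the SyntaxError instead of the whole script failing to parse.

010    # => 8, a leading zero makes the literal octal
0o10   # => 8, the explicit octal prefix gives the same value
00     # => 0, octal zero is still zero

begin
  eval("a = 00.0")   # the parser rejects this before it can run
rescue SyntaxError => e
  puts e.message     # prints the lexer's complaint about the literal
end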
I tried "a = 00.0" in http://tryruby.com, and got:
SyntaxError: no .<digit> floating literal anymore put 0 before dot. near line 1: ""
Clearly the Ruby lexer isn't expecting that form of float.
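If I'm not mistaken, the restriction only applies to literal syntax; converting a string with leading zeros works fine:

"00.0".to_f    # => 0.0, String#to_f skips the leading zero
Float("00.0")  # => 0.0, the stricter Kernel#Float accepts it as well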