The following code:
DecimalFormat df = new DecimalFormat();
df.setMinimumFractionDigits(2);
int zahl1 = 10;
int zahl2 = 18;
double double1 = zahl1 / zahl2;
System.out.println("Double1 = " + df.format(double1));
double double2 = Math.ceil(zahl1 / zahl2);
System.out.println("Double2 = " + df.format(double2));
double double3 = Math.round((zahl1 / zahl2) * 100) / 100;
System.out.println("Double3 = " + df.format(double3));
float float1 = Math.round((zahl1 / zahl2) * 100) / 100;
System.out.println("Float1 = " + df.format(float1));
long long1 = zahl1 / zahl2;
System.out.println("long1 = " + df.format(long1));
long long3 = Math.round((zahl1 / zahl2) * 100) / 100;
System.out.println("long3 = " + df.format(long3));
renders the following output:
Double1 = 0,00
Double2 = 0,00
Double3 = 0,00
Float1 = 0,00
long1 = 0,00
long3 = 0,00
I've read quite a bit about floats and doubles in Java, but I still don't seem to fully understand how to get to my desired outcome of 0.6.
What am I doing wrong?
Since zahl1 and zahl2 are of type int, the following:
double double1 = zahl1 / zahl2;
uses integer division, the result of which is 0.
It does not matter that you're assigning the result to a variable of type double: the (integer) division happens first, and its result is then widened to a double.
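A minimal sketch (reusing the same values) that makes this order of operations visible: the int division completes and truncates first, and only then is its result widened.

```java
public class WideningDemo {
    public static void main(String[] args) {
        int zahl1 = 10;
        int zahl2 = 18;

        // int / int is evaluated first: 10 / 18 truncates to 0.
        int quotient = zahl1 / zahl2;
        System.out.println(quotient);   // prints 0

        // Only afterwards is that 0 widened to the double 0.0.
        double double1 = zahl1 / zahl2;
        System.out.println(double1);    // prints 0.0
    }
}
```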
To fix, change that line to:
double double1 = zahl1 / (double)zahl2;
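Putting the cast together with your DecimalFormat, a sketch of the corrected first case (the locale is pinned to US here so the output is reproducible; your default-locale run would print a comma instead of a dot):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class DivisionFix {
    public static void main(String[] args) {
        int zahl1 = 10;
        int zahl2 = 18;

        // Casting one operand to double forces floating-point division.
        double double1 = zahl1 / (double) zahl2;   // 0.5555...

        // Fixed pattern and locale so the formatted result is deterministic.
        DecimalFormat df = new DecimalFormat("0.00",
                DecimalFormatSymbols.getInstance(Locale.US));
        System.out.println("Double1 = " + df.format(double1)); // Double1 = 0.56
    }
}
```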
You have to cast at least one of the int operands to float (or double) before dividing:
(float)zahl1 / (float)zahl2
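A sketch of that cast, plus the rounding variant from your question: note that the divisor after Math.round must be 100.0, otherwise you reintroduce an integer division.

```java
public class CastDemo {
    public static void main(String[] args) {
        int zahl1 = 10;
        int zahl2 = 18;

        // Casting either operand (here both) forces float division.
        float float1 = (float) zahl1 / (float) zahl2;
        System.out.println("Float1 = " + float1);  // about 0.5555556

        // Math.round returns a long, so dividing it by the int 100 would
        // truncate again; dividing by the double 100.0 keeps the decimals.
        double double3 = Math.round(zahl1 / (double) zahl2 * 100) / 100.0;
        System.out.println("Double3 = " + double3);  // prints 0.56
    }
}
```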