
Is there any difference between %.1f and %.01f?

When I compile and run a program with this code:

#include <stdio.h>

int main(void)
{
  float a;

  scanf("%f", &a);

  printf("%.1f\n", a); // Here

  return 0;
}

the output is no different from this other version:

#include <stdio.h>

int main(void)
{
  float a;

  scanf("%f", &a);

  printf("%.01f\n", a); // Here

  return 0;
}

Can anybody tell me why?

asked Oct 22 '25 by Chris Galard

2 Answers

The number after the period is the precision, which specifies the number of digits printed after the decimal point of a floating-point value. A leading zero there has no meaning, so .1 and .01 are equivalent.

The number before the period specifies the minimum field width. There a leading zero does matter: it changes the padding character from a space to 0.
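A minimal sketch showing both effects (the value 3.14159 is just an example):

#include <stdio.h>

int main(void)
{
  double x = 3.14159;

  printf("%.1f\n",   x);  /* prints "3.1"      - precision 1 */
  printf("%.01f\n",  x);  /* prints "3.1"      - 01 is the same precision as 1 */
  printf("%8.1f\n",  x);  /* prints "     3.1" - width 8, padded with spaces */
  printf("%08.1f\n", x);  /* prints "000003.1" - width 8, padded with zeros */

  return 0;
}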

answered Oct 24 '25 by 47dev47null

The digits after the period specify the precision. For %f this is the number of digits written after the decimal point: the value is rounded to that many digits, or zero-padded if it has fewer. .1 and .01 both specify a precision of 1, since a leading zero in that number is insignificant. Plain %f is equivalent to %.6f, i.e. 6 digits after the decimal point.
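A small sketch of that equivalence (the value 2.5 is chosen arbitrarily):

#include <stdio.h>

int main(void)
{
  double x = 2.5;

  printf("%f\n",    x);  /* prints "2.500000" - default precision is 6 */
  printf("%.6f\n",  x);  /* prints "2.500000" - identical to plain %f */
  printf("%.1f\n",  x);  /* prints "2.5"      - one digit after the point */
  printf("%.01f\n", x);  /* prints "2.5"      - the leading zero changes nothing */

  return 0;
}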

answered Oct 24 '25 by 1''