I have some legacy code where a time-wasting loop has been included to allow time for an EEPROM read to complete (bad practice):
for(i = 0; i < 50; i++);
However, peculiar things happen when compiler optimizations are switched on for speed. It is not necessarily connected with that statement, but I would like to know whether the compiler might just optimize the time delay away.
It depends on the type of i. If it is just a plain integer type that isn't used apart from inside the loop, there are no side effects and the compiler is free to optimize away the whole thing. If you declare i as volatile, however, the compiler is forced to generate code that increments the variable and reads it back on every iteration of the loop.
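For illustration, a minimal sketch of the volatile variant (the wrapper name eeprom_wait is hypothetical; the count of 50 comes from the question, and the actual delay in CPU cycles still depends on the compiler's code generation and the clock speed):

void eeprom_wait(void)
{
    /* volatile forces every increment and comparison to actually
       be performed, so the loop survives optimization */
    for (volatile int i = 0; i < 50; i++)
    {
        /* busy-wait: empty body */
    }
}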
This is one of many reasons why you should not use "burn-away" loops like these in embedded systems. They also keep the CPU 100% busy, drawing full run current for the entire wait. And they create a tight coupling between the system clock and the loop duration, a relationship that isn't necessarily linear.
The professional solution is always to use an on-chip hardware timer instead of "burn-away" loops.
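The sketch below shows the idea under stated assumptions: TIMER0_CTRL and TIMER0_COUNT are hypothetical memory-mapped registers, since the actual names, addresses, and control bits vary per MCU and must be taken from the part's reference manual. The key point is that the delay is measured by a hardware counter, so it is immune to optimization level and independent of how many instructions the polling loop compiles to.

#include <stdint.h>

/* Hypothetical memory-mapped timer registers; real addresses and
   bit layouts come from the MCU's reference manual. */
#define TIMER0_CTRL  (*(volatile uint32_t *)0x40001000u)
#define TIMER0_COUNT (*(volatile uint32_t *)0x40001004u)

void delay_ticks(uint32_t ticks)
{
    TIMER0_COUNT = 0u;   /* reset the counter          */
    TIMER0_CTRL  = 1u;   /* start the timer            */

    while (TIMER0_COUNT < ticks)
    {
        /* poll the hardware counter; the volatile register
           access prevents the compiler from removing the loop */
    }

    TIMER0_CTRL = 0u;    /* stop the timer             */
}

With a known timer clock and prescaler, ticks maps to a real unit of time, which is exactly what the burn-away loop never guarantees.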