I came across the following issue on the Apple LLVM compiler 3.1:
int numIndex = 0;
int *indices = (int *)malloc(3 * sizeof(int));
indices[numIndex] = numIndex++;
indices[numIndex] = numIndex++;
indices[numIndex] = numIndex++;
for (int i = 0; i < 3; i++) {
NSLog(@"%d", indices[i]);
}
Output: 1 0 1
And
int numIndex = 0;
int indices[3];
indices[numIndex] = numIndex++;
indices[numIndex] = numIndex++;
indices[numIndex] = numIndex++;
for (int i = 0; i < 3; i++) {
NSLog(@"%d", indices[i]);
}
Output: 0 0 1
I'm expecting 0 1 2 as output. The same code compiled with LLVM GCC 4.2 produces the expected output. Is there any optimization flag I'm missing, or something I'm misunderstanding?
So it seems the behavior is as follows:
int numIndex = 0;
int indices[3];
indices[numIndex] = numIndex++;
Here the right-hand side is evaluated first: it yields 0 and increments numIndex to 1. Only then is the left-hand side evaluated, so indices[1] gets 0.
indices[numIndex] = numIndex++;
Here the right-hand side is evaluated first: it yields 1 and increments numIndex to 2. Then the left-hand side is evaluated, so indices[2] gets 1.
indices[numIndex] = numIndex++;
Here the right-hand side is evaluated first: it yields 2 and increments numIndex to 3. Then the left-hand side is evaluated, so indices[3] gets 2 (which is out of bounds).
Note that indices[0] is never actually assigned, so it can hold anything (in my test it was the maximum int value).
EDIT: As the comments point out, this behavior is actually undefined (numIndex is both read and modified between sequence points), so even though this is what I observed, it is not a definitive answer.