I am running into an overflow problem when using memchr() on Mac OS X.
Here is my test code:
#include <stdio.h>
#include <stdlib.h>
int main(void){
    char *content = "http\r\nUser";
    int content_size = strlen(content);
    char *contmem = malloc(content_size + 1);
    memset(contmem, '\0', content_size + 1);
    memcpy(contmem, content, content_size);
    printf("%c\n", *(content + content_size));
    printf("%c\n", *(contmem + content_size));
    char *t = memchr(content, 't', content_size);
    printf("%c\n", *t);
    return 0;
}
It works normally on Linux (Fedora 16) and prints the correct value of t, but when I run the same piece of code on a Mac, I get a segmentation fault.
Debugging with gdb shows:
(gdb) print t
$7 = 0xf4b <Address 0xf4b out of bounds>
Then I tried rewriting the memchr function inside the test file:
static char *
memchr(const char *data, int c, unsigned long len){
    char *tp = (char *)data;
    unsigned long i;
    for(i = 0; i < len; i++){
        if((int)*tp == c){
            return tp;
        }else{
            tp = tp + 1;
        }
    }
    return NULL;
}
And the output seems correct!
(gdb) print t
$1 = 0x100000f1d "ttp\r\nUser"
So I am confused by the abnormal behavior of memchr() on Mac OS, while other mem functions like memset() and memcpy() work fine.
How can I run the test on the Mac without rewriting memchr()?
Thanks.
The function memchr() is declared in string.h, for which there is no include directive in the posted code. This means the compiler generates an implicit declaration for memchr() (and should emit a warning), and an implicitly declared function is assumed to return int. If sizeof(int) and sizeof(char *) differ on your system, as they do on 64-bit OS X where int is 4 bytes and pointers are 8 bytes, the returned pointer gets truncated, which is consistent with the out-of-bounds address 0xf4b you see in gdb. Add:
#include <string.h>
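For reference, here is a sketch of the posted program with that header added. The #include <string.h> line is the only change that matters for the crash; the NULL checks and the free() call are just defensive cleanup added for completeness:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>   /* declares strlen, memset, memcpy, memchr */

int main(void){
    char *content = "http\r\nUser";
    size_t content_size = strlen(content);
    char *contmem = malloc(content_size + 1);
    if(contmem == NULL){
        return 1;
    }
    memset(contmem, '\0', content_size + 1);
    memcpy(contmem, content, content_size);
    printf("%c\n", *(content + content_size));   /* the terminating '\0' */
    printf("%c\n", *(contmem + content_size));   /* also '\0' */
    char *t = memchr(content, 't', content_size);
    if(t != NULL){
        printf("%c\n", *t);   /* prints 't', the second character of "http" */
    }
    free(contmem);
    return 0;
}

With the proper declaration in scope, memchr() returns a full 64-bit pointer and the dereference is safe.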
Your code should indeed work. Your compiler may be using built-in versions of the mem*() functions. Try including string.h to force the use of the libc versions.
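If you want to rule the built-ins in or out, one thing to try (assuming you compile with gcc or clang and the file is saved as test.c) is:

cc -Wall -fno-builtin test.c -o test

With -Wall the compiler should warn about the implicit declaration of memchr(), and -fno-builtin makes the calls go to the library versions rather than any compiler built-ins.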