While trying to implement my own version of the C function memccpy(), I came across another person's implementation of memccpy() on Stack Overflow and tested their variant against the original. It reproduced the same results as the standard C function for the various string and integer array inputs I gave it. The problem is that I don't understand why their version doesn't segfault on certain inputs, such as the integer array below.
I tried to see what would happen if the length passed in exceeded the size of the input array. Of course I expected a segfault, but to my surprise there was none. Here is the function implementation:
#include <stddef.h> /* for size_t */

void *ft_memccpy(void *str_dest, const void *str_src, int c, size_t n)
{
    unsigned int i;
    char *dest;
    char *src;
    char *ptr;

    dest = (char *)str_dest;
    src = (char *)str_src;
    i = 0;
    ptr = 0;
    while (i < n && ptr == 0)
    {
        /* copy byte by byte; stop after copying the first byte equal to (char)c */
        dest[i] = src[i];
        if (src[i] == ((char)c))
            ptr = dest + i + 1;
        i++;
    }
    return (ptr);
}
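As mentioned above, it reproduced the standard function's results on string inputs. A comparison along these lines (a minimal sketch with illustrative buffer sizes and an arbitrary 'o' delimiter; it assumes ft_memccpy from above is compiled in the same file) prints the same copied string and the same return offset for both functions:

#include <stdio.h>
#include <string.h>

void *ft_memccpy(void *str_dest, const void *str_src, int c, size_t n); /* defined above */

int main(void)
{
    char src[] = "hello world";
    char dst1[32] = {0};
    char dst2[32] = {0};

    /* copy up to and including the first 'o', or at most 11 bytes */
    char *r1 = memccpy(dst1, src, 'o', 11);
    char *r2 = ft_memccpy(dst2, src, 'o', 11);

    printf("memccpy    -> \"%s\", returned offset %td\n", dst1, r1 - dst1);
    printf("ft_memccpy -> \"%s\", returned offset %td\n", dst2, r2 - dst2);
    return 0;
}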
And the code used to test it:
#include <stdio.h>

int main(void)
{
    int num1[5] = {1, 2, 3, 4, 5};
    int num2[5] = {0, 0, 0, 0, 0};
    int (*num1p)[5] = &num1;
    int (*num2p)[5] = &num2;

    for (int i = 0; i < 5; i++)
    {
        printf("value before copy = %d\n", num2[i]);
    }
    //THE INPUT: n is 32 bytes, but each array is only 20 bytes
    ft_memccpy(num2p, num1p, 9, (sizeof(int) * 8));
    for (int i = 0; i < 5; i++)
    {
        printf("value after copy = %d\n", num2[i]);
    }
    return 0;
}
What I expected was a segfault, since the parameters passed were c = 9 and a size of 32 bytes (8 * sizeof(int), with 4-byte ints). I thought that since each array is only 20 bytes, it would segfault once it went past 20 bytes on the line dest[i] = src[i], but it doesn't. Indeed, when I pass these same parameters to the standard C memccpy(), it also does not segfault. What could be the reason for this?
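For reference, this is the kind of call I mean by passing the same parameters to the standard version (a minimal sketch; it asks memccpy() for 32 bytes even though each array is only 20 bytes, and it also runs to completion without a segfault on my machine):

#include <stdio.h>
#include <string.h>

int main(void)
{
    int num1[5] = {1, 2, 3, 4, 5};
    int num2[5] = {0, 0, 0, 0, 0};

    /* same parameters as the ft_memccpy test: c = 9, n = 32 bytes,
       while each array is only 20 bytes */
    memccpy(num2, num1, 9, sizeof(int) * 8);

    for (int i = 0; i < 5; i++)
        printf("value after copy = %d\n", num2[i]);
    return 0;
}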