In my assignment I am timing how long different copy functions take to copy a file. I'm curious about the results for one of them. This copy function allocates a buffer of a user-chosen size, like so:
int copyfile3(char* infilename, char* outfilename, int size) {
    int infile;   // File handles for source and destination.
    int outfile;

    infile = open(infilename, O_RDONLY);  // Open the input and output files.
    if (infile < 0) {
        open_file_error(infilename);
        return 1;
    }
    outfile = open(outfilename, O_WRONLY | O_CREAT, S_IRUSR | S_IWUSR);
    if (outfile < 0) {
        open_file_error(outfilename);
        return 1;
    }

    int intch;  // Number of bytes returned by read(): positive while data remains, 0 at end of file, negative on error.
    char *ch = malloc(sizeof(char) * (size + 1));

    gettimeofday(&start, NULL);
    // Read up to `size` bytes at a time until read() reports end of file or an error.
    while ((intch = read(infile, ch, size)) > 0) {
        write(outfile, ch, intch);  // Write out exactly the bytes that were read.
    }
    gettimeofday(&end, NULL);

    // All done--close the files and return success code.
    close(infile);
    close(outfile);
    free(ch);
    return 0;  // Success!
}
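For reference, start and end are struct timeval values declared outside the function (in my real program they live elsewhere), and the "difference" I quote below is the elapsed microseconds between the two gettimeofday() calls, computed roughly like this sketch:

// Timing globals read by copyfile3 (a sketch; the real declarations are in my main file).
struct timeval start, end;

// Elapsed time in microseconds between the two gettimeofday() readings.
long elapsed_usec(struct timeval s, struct timeval e) {
    return (e.tv_sec - s.tv_sec) * 1000000L + (e.tv_usec - s.tv_usec);
}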
The main program lets the user specify the infile, the outfile, and a copyFunctionNumber. If 3 is chosen, the user can also enter a specific buffer size. I was testing by copying a 6.3 MB file with different buffer sizes. With a buffer of 1024 the elapsed time is about 42,000 microseconds, with 2000 it drops to about 26,000 microseconds, but with 3000 it goes back up to about 34,000 microseconds. My question is: why does it go back up? And how could you tell what the optimal buffer size is so the copy takes the least amount of time?
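In case it clarifies what I'm measuring, the kind of sweep I run looks roughly like the sketch below. The file names and the list of sizes are just placeholders, and it reuses the elapsed_usec helper and timing globals shown above:

#include <stdio.h>

int main(void) {
    // Buffer sizes to try; this list is only illustrative.
    int sizes[] = {512, 1024, 2000, 3000, 4096, 8192, 16384, 65536};
    int n = sizeof(sizes) / sizeof(sizes[0]);

    for (int i = 0; i < n; i++) {
        // "infile.dat" / "outfile.dat" are placeholder names, not my real test files.
        copyfile3("infile.dat", "outfile.dat", sizes[i]);
        printf("buffer %6d -> %ld microseconds\n", sizes[i], elapsed_usec(start, end));
    }
    return 0;
}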