I want to measure how long it takes each IPC mechanism to transfer a 100MB data file, so I implemented several of them: TCP sockets, UDP sockets, Unix domain sockets, pipes, mmap, etc.
The measurement works like this: on the client side, I take the time right before I send the data and stop the time after the data has been sent.
On the server side, I take the time before I start receiving and take the time again after the file has been written.
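For reference, the client side does roughly the following (a simplified sketch: the real code also creates the AF_INET6/SOCK_DGRAM socket and fills in serv_addr; sock, serv_addr, and data.txt are placeholder names for the 100MB file transfer):

#include <stdio.h>
#include <time.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Send the whole file in 1024-byte datagrams and time the send loop.
 * Assumes sock is an already-created AF_INET6 / SOCK_DGRAM socket and
 * serv_addr has been filled in elsewhere. */
static void send_file(int sock, const struct sockaddr_in6 *serv_addr)
{
    char buffer[1024];
    size_t n;

    FILE *fp = fopen("data.txt", "r");
    if (fp == NULL) {
        perror("fopen");
        return;
    }

    clock_t start = clock();                  /* take time before sending */
    while ((n = fread(buffer, 1, sizeof(buffer), fp)) > 0) {
        sendto(sock, buffer, n, 0,
               (const struct sockaddr *) serv_addr, sizeof(*serv_addr));
    }
    clock_t end = clock();                    /* stop time after the data is sent */

    printf("Sent in %f seconds\n",
           ((double) (end - start)) / CLOCKS_PER_SEC);
    fclose(fp);
}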
The problem is that on the server side I never exit the while loop in the UDP / UDS datagram versions,
so I can never measure the time it takes to transfer the data between them.
For example, here is half of my server (the while-loop part) from the UDP IPv6 version:
clock_t start = 0, end = 0;
FILE *fp;
char *filename = "new_data.txt";

fp = fopen(filename, "w");

/* now wait until we get a datagram */
printf("waiting for a datagram...\n");
clilen = sizeof(client_addr);

start = clock();
while (1) {
    ssize_t bytes = recvfrom(sock, buffer, 1024, 0,
                             (struct sockaddr *) &client_addr,
                             &clilen);
    if (bytes < 0) {
        perror("recvfrom failed");
        exit(4);
    }
    /* write exactly the bytes received (fprintf "%s" would stop at a NUL) */
    fwrite(buffer, 1, (size_t) bytes, fp);
    /* there is no break condition here, so this loop never ends
       and the code below is never reached */
}
close(sock);
//fclose(fp);
end = clock();

double cpu_time_used = ((double) (end - start)) / CLOCKS_PER_SEC;
printf("Receive in %f seconds\n", cpu_time_used);