
I have two server programs:

  1. the first server: sends the client one character at a time until the string is finished

    int
    main(int argc, char **argv)
    {
        int                 listenfd, connfd;
        struct sockaddr_in  servaddr;
        char                buff[MAXLINE];
        time_t              ticks;
        char                temp[1];
        int                 i = 0;
    
        listenfd = Socket(AF_INET, SOCK_STREAM, 0);
    
        bzero(&servaddr, sizeof(servaddr));
        servaddr.sin_family      = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port        = htons(9999); /* daytime server */
    
        Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));
    
        Listen(listenfd, LISTENQ);
    
        for ( ; ; ) {
            connfd = Accept(listenfd, (SA *) NULL, NULL);
    
            ticks = time(NULL);
            snprintf(buff, sizeof(buff), "%.24s\r\n", ctime(&ticks));
    
            for(i = 0; i < strlen(buff); i++)
            {
                temp[0] = buff[i];
                Write(connfd, temp, strlen(temp));
            }
    
            Close(connfd);
        }
    }
    
  2. the second server: sends the client the whole string in a single write

    int
    main(int argc, char **argv)
    {
        int                 listenfd, connfd;
        struct sockaddr_in  servaddr;
        char                buff[MAXLINE];
        time_t              ticks;
        char                temp[1];
        int                 i = 0;
    
        listenfd = Socket(AF_INET, SOCK_STREAM, 0);
    
        bzero(&servaddr, sizeof(servaddr));
        servaddr.sin_family      = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port        = htons(9999); /* daytime server */
    
        Bind(listenfd, (SA *) &servaddr, sizeof(servaddr));
    
        Listen(listenfd, LISTENQ);
    
        for ( ; ; ) {
            connfd = Accept(listenfd, (SA *) NULL, NULL);
    
            ticks = time(NULL);
            snprintf(buff, sizeof(buff), "%.24s\r\n", ctime(&ticks));
    
            Write(connfd, buff, strlen(buff));
            Close(connfd);
        }
    }
    
  3. the client: receives the characters sent by the server and counts how many reads it took

    int
    main(int argc, char **argv)
    {
        int                 sockfd, n;
        char                recvline[MAXLINE + 1];
        struct sockaddr_in  servaddr;
        int                 count = 0;
    
        if (argc != 2)
            err_quit("usage: a.out <IPaddress>");
    
        if ( (sockfd = socket(AF_INET, SOCK_STREAM, 0)) < 0)
            err_sys("socket error");
    
        bzero(&servaddr, sizeof(servaddr));
        servaddr.sin_family = AF_INET;
        servaddr.sin_port   = htons(9999);  /* daytime server */
        if (inet_pton(AF_INET, argv[1], &servaddr.sin_addr) <= 0)
            err_quit("inet_pton error for %s", argv[1]);
    
        if (connect(sockfd, (SA *) &servaddr, sizeof(servaddr)) < 0)
            err_sys("connect error");
    
        while ( (n = read(sockfd, recvline, MAXLINE)) > 0) {
            recvline[n] = 0;    /* null terminate */
            count++;
            if (fputs(recvline, stdout) == EOF)
                err_sys("fputs error");
        }
        if (n < 0)
            err_sys("read error");
        printf("read time:%d\n", count);
    
        exit(0);
    }
    

The result is that the output of the variable count is 1 for both servers. My question is: why is the first server's count also 1? I expected it to be strlen(buff) for the first server.

PS: I run the server and the client on the same machine.

  • There is a buffer overflow in the first server: `char temp[1];` must either be `char temp[2]; temp[1] = 0;` or you must replace `strlen(temp)` with 1. – Aaron Digulla Sep 24 '13 at 15:08
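
    For reference, a minimal sketch of the second fix (pass a length of 1 instead of calling strlen on a buffer that is not NUL-terminated), applied to the first server's send loop:

        for (i = 0; i < strlen(buff); i++) {
            temp[0] = buff[i];
            Write(connfd, temp, 1);   /* temp holds a single char, not a NUL-terminated string */
        }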

1 Answer


TCP is a stream protocol. As such, the number of writes on one side will not produce the same number of reads on the other side, because the protocol does not preserve any information about how the data was written into the socket.

Usually, on the sender side, there is a delay before a packet is sent (Nagle's algorithm) in case you write more data to the socket, so that more data can be stuffed into the same packet. One of the reasons for this is that a badly written server could otherwise flood the network with single-byte packets.
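
This delay is what the `TCP_NODELAY` option mentioned in the comments below turns off. A minimal sketch, assuming a connected descriptor `connfd` as in the question's servers:

    #include <netinet/in.h>     /* IPPROTO_TCP */
    #include <netinet/tcp.h>    /* TCP_NODELAY */
    #include <sys/socket.h>     /* setsockopt  */

    /* Push each write out as soon as possible instead of waiting to
     * coalesce small writes.  This still does NOT guarantee that the
     * peer sees one read per write; TCP remains a byte stream. */
    int on = 1;
    if (setsockopt(connfd, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) < 0)
        perror("setsockopt(TCP_NODELAY)");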

On the receiver side, the protocol doesn't know why your data might have arrived as separate packets: it might have been split up because of the MTU, or it might have been reassembled by some packet-inspection software or appliance along the way. So whenever you read from your socket, it will give you as much data as it can, regardless of how it was sent to you.

On a local machine like in your setup, it's likely that the client isn't even running while the server is writing, so even without buffering on the sender side it won't start reading until the server has written everything, and therefore it reads everything in one go. Or not: you might be unlucky, your server gets preempted for long enough that the TCP implementation in your kernel decides there won't be any more data coming, sends a single byte to the client, the client gets scheduled before the server runs again, and the client receives just one byte in its first read.
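
To make that concrete: a client that wants the complete daytime string simply keeps reading until the server closes the connection, exactly as the question's client does, and treats the number of read() calls as meaningless. A minimal sketch of such an accumulate-until-EOF loop, assuming a connected sockfd plus the MAXLINE and err_sys from the question's code:

    char    line[MAXLINE + 1];
    ssize_t n;
    size_t  total = 0;

    /* Each read() may return anything from 1 byte up to everything the
     * kernel has buffered, independent of how many write() calls the
     * server made.  Looping until read() returns 0 (EOF) collects it all. */
    while ( (n = read(sockfd, line + total, MAXLINE - total)) > 0)
        total += n;
    if (n < 0)
        err_sys("read error");
    line[total] = 0;            /* the complete "%.24s\r\n" string */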

  • A much more important reason is that buffering is hard to get right and most applications profit from it, so buffering is the default. To get unbuffered output, you can `flush()` the socket but depending on the amount of data and the physical transport, you will transmit many, many bytes of header over the wire with just a few bytes of payload plus latency between packets plus other overhead. – Aaron Digulla Sep 24 '13 at 15:11
  • Buffering is an implementation detail. The main point is that it's a stream protocol and has no packet boundaries, so the number of reads and writes might not be the same and it's unrelated to how the stream is implemented (with or without buffering). It might not even buffer things at all on the sending side and the single read is a side effect of the receiver side processing all the packets in one system call. – Art Sep 24 '13 at 15:44
  • My experience is that buffering is the most important "detail" about the whole thing since it's the one thing that most people stumble over because they don't expect it to happen (or happen differently). So, yes, it's streaming and uses packets underneath but for using it, buffering is what you need to take into account, first. – Aaron Digulla Sep 24 '13 at 16:07
  • Then are there any ways to make a TCP socket send the data one char at a time? – Charles0429 Sep 25 '13 at 01:12
    Yes, there are, or at least there's a way to tell the TCP implementation to not buffer and send things as soon as they get written. The `TCP_NODELAY` socket option is available on some implementations (although it does other things too, read the manual for your system before using it and think through if you really want it). But there's no guarantee that you will receive them one char at a time. Generally I wouldn't recommend using any option like that for anything other than interactive TCP sessions because it basically makes your TCP streams misbehave for a minor gain in latency. – Art Sep 25 '13 at 07:03