
My issue is that when I'm streaming a continuous stream of data over a local LAN, random bytes sometimes get lost in the process.

As it is right now, the code is set up to stream about 1027 bytes roughly 40 times a second over the LAN, and sometimes (very rarely) one or more of the bytes are lost.

The thing that baffles me is that the byte isn't actually "lost"; it is just set to 0 regardless of the original data. (I'm using TCP, by the way.)

Here's the sending code:

    public void Send(byte[] data)
    {
        if (!server)
        {
            if (CheckConnection(serv))
            {
                serv.Send(BitConverter.GetBytes(data.Length));
                serv.Receive(new byte[1]);
                serv.Send(data);
                serv.Receive(new byte[1]);
            }
        }
    }

and the receiving code:

    public byte[] Receive()
    {
        if (!server)
        {
            if (CheckConnection(serv))
            {
                byte[] TMP = new byte[4];
                serv.Receive(TMP);
                TMP = new byte[BitConverter.ToInt32(TMP, 0)];
                serv.Send(new byte[1]);
                serv.Receive(TMP);
                serv.Send(new byte[1]);
                return TMP;
            }
            else return null;
        }
        else return null;
    }

The sending and receiving of the single empty bytes is just to keep the two sides roughly in sync. Personally I think the problem lies on the receiving side of the system, but I haven't been able to prove that yet.

bbuubbi
    What type is `serv`? If it is your own class, please include the code for `Receive`. – Scott Chamberlain Nov 15 '13 at 21:59
  • @ScottChamberlain `serv` is just a regular socket. I'm working with bare sockets because I don't like the NetworkStream thing; it confuses me. – bbuubbi Nov 15 '13 at 22:25
  • 1
    TCP by definition cannot set a byte to zero, unless it was previously set to zero. There's a number of checks that go on at a pretty low level that would prevent this from happening. If mismatched data is sent over the wire, it gets CRC checked, fails the check, and gets resent. I suspect Scott Chamberlain is correct in his presumption that the arrays aren't being completely filled always before being sent out. – user2366842 Nov 15 '13 at 22:34

1 Answer


Just because you give `Receive(TMP)` a 4-byte array does not mean it is going to fill that array with 4 bytes. The `Receive` call is allowed to put anywhere between 1 and `TMP.Length` bytes into the array. You must check the returned `int` to see how many bytes of the array were actually filled.

Network connections are stream based, not message based. Any bytes you put on the wire just get concatenated into a big queue and are read on the other side as they become available. So if you sent the two arrays 1,1,1,1 and 2,2,2,2, it is entirely possible that on the receiving side you call `Receive` three times with a 4-byte array and get

  • 1,1,0,0 (Receive returned 2)
  • 1,1,2,2 (Receive returned 4)
  • 2,2,0,0 (Receive returned 2)

So what you need to do is look at the value you get back from `Receive` and keep looping until your byte array is full.

    byte[] TMP = new byte[4];

    // Loop till all 4 length-prefix bytes are read.
    int offset = 0;
    while (offset < TMP.Length)
    {
        offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
    }
    TMP = new byte[BitConverter.ToInt32(TMP, 0)];

    // I don't understand why you are doing this, it is not necessary.
    serv.Send(new byte[1]);

    // Reset the offset, then loop till TMP.Length bytes are read.
    offset = 0;
    while (offset < TMP.Length)
    {
        offset += serv.Receive(TMP, offset, TMP.Length - offset, SocketFlags.None);
    }

    // I don't understand why you are doing this, it is not necessary.
    serv.Send(new byte[1]);

    return TMP;

Lastly, you said "the network stream confuses you". I am willing to bet the issue above is one of the things that confused you; going to a lower level will not remove those complexities. If you want these complex parts handled for you, you will need to use a higher-level abstraction or a 3rd-party library that deals with them internally.
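As one hedged illustration of that last point (this is not the poster's code, and `SendMessage`/`ReceiveMessage` are names invented here): wrapping the socket in a `NetworkStream` with `BinaryWriter`/`BinaryReader` gives you length-prefixed messages without a manual `Receive` loop, because `BinaryReader.ReadBytes` keeps reading until it has the requested count or the connection closes.

```csharp
using System;
using System.IO;
using System.Net.Sockets;

public static class Framing
{
    // Assumes `serv` is a connected Socket, as in the question.
    public static void SendMessage(Socket serv, byte[] data)
    {
        var writer = new BinaryWriter(new NetworkStream(serv));
        writer.Write(data.Length); // 4-byte little-endian length prefix
        writer.Write(data);        // payload
        writer.Flush();
    }

    public static byte[] ReceiveMessage(Socket serv)
    {
        var reader = new BinaryReader(new NetworkStream(serv));
        int length = reader.ReadInt32();           // blocks until 4 bytes arrive
        byte[] payload = reader.ReadBytes(length); // loops internally until `length` bytes arrive
        if (payload.Length != length)
            throw new EndOfStreamException("Connection closed mid-message");
        return payload;
    }
}
```

In real code you would create the `NetworkStream` and reader/writer once per connection rather than per call; they are constructed inline here only to keep the sketch short.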

Scott Chamberlain
  • Thanks for the feedback, but I think it is unlikely that that actually happens, because if there were any length errors in the array the whole program would halt. But it doesn't, so I don't know. I'll try that anyway and come back later. – bbuubbi Nov 15 '13 at 22:29
  • I have changed the code but I can't post it here. Now it uses `while (sent != TMP.Length) sent += serv.Send(TMP, sent, TMP.Length - sent, SocketFlags.Partial);` where `TMP` is the array of data and `sent` is the variable that counts how many bytes have been sent. A similar thing is used for the receive too. Haven't tested it yet, though. – bbuubbi Nov 15 '13 at 22:40
  • 1
    `Send` does not need looping; it is guaranteed to send all the data (unless you use the partial flag). `Receive` is the problem. From the documentation of `Receive` (emphasis mine): "*If you are using a connection-oriented Socket, the Receive method will read as much data as is available, **UP TO** the size of the buffer.*" – Scott Chamberlain Nov 15 '13 at 22:45
  • 1
    This is the right answer. We've been writing BSD-style socket receive code like this for over 25 years. – Ross Patterson Nov 15 '13 at 22:50
  • Yeah, I noticed that; the code wouldn't run at all for the sending when I had my own loop implemented. – bbuubbi Nov 15 '13 at 22:58
  • The `serv.Send(new byte[1])` is just to keep things in sync in my code. I don't have anything timing the sockets, so the server's send buffer gets filled up because the client can't handle that amount of data, resulting in increasing lag and memory usage. That send is just to keep the server waiting for the client's response and prevent the buffers from filling up. – bbuubbi Nov 15 '13 at 23:16
  • 1
    @bbuubbi You don't understand the workings of TCP. The server will already block when its socket and buffer fills up. That doesn't increase memory usage: the send buffer is pre-allocated. Adding another network exchange to delay something that would already be delayed anyway is what causes an increasing lag. Just delete it – user207421 Nov 15 '13 at 23:39
  • @bbuubbi to build on what EJP said, if you find memory allocations to be too large just [set the internal buffers of the socket](http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.receivebuffersize%28v=vs.110%29.aspx) to a smaller size. (btw 8k is the default size. Are you sure data buffering is the cause of your slowness and memory issues?) – Scott Chamberlain Nov 15 '13 at 23:47
  • @ScottChamberlain Yeah, I'm sure it's the memory allocation that is causing the lag. Another thing I noticed: when I try to make the connection two-way (currently the server is just flooding the client with data), so the client can respond to the server and the server can be halted instead of filling the buffer, an artifact of that is an EXTREMELY slow update rate. Now maybe 0.8 packets per second get through instead of the regular ~40. Do you know what causes that? I have tried fiddling around with the buffer sizes and the order of sending/receiving, but nothing works :/ – bbuubbi Nov 16 '13 at 00:31
  • I have no idea but the fact you find the normal stream classes "confusing" does not bode well. TCP/IP communication is a complicated beast, I still recommend you find a library to manage your communications for you. – Scott Chamberlain Nov 16 '13 at 00:34
  • @ScottChamberlain Yeah, I should find some library or just use the NetworkStream. I'll look into it. – bbuubbi Nov 16 '13 at 00:43