We have an application that broadcasts data using UDP from a server system to client applications running on multiple Windows XP PCs. This is on a LAN, typically Gigabit Ethernet. This has been running fine for some years.
We now have a requirement to run two (or more) of the client applications on each quad-core PC, with each instance of the application receiving the broadcast data. The method I have used to implement this is to give each client PC multiple IP addresses. Each client app then connects to the server using the same port number but on a different IP. This works functionally, but for some reason the performance is very poor: my data transfer rate is cut by around a factor of 10!
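For reference, each client instance sets up its socket roughly like the minimal Winsock sketch below; the address 192.168.1.10 and port 5000 are just placeholders for whatever each instance is actually configured with:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <string.h>
    #include <stdio.h>
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        WSADATA wsa;
        SOCKET s;
        struct sockaddr_in local;

        if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
            return 1;

        s = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);
        if (s == INVALID_SOCKET)
            return 1;

        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_port = htons(5000);                       /* shared port number */
        local.sin_addr.s_addr = inet_addr("192.168.1.10");  /* this instance's own IP */

        if (bind(s, (struct sockaddr *)&local, sizeof(local)) == SOCKET_ERROR) {
            fprintf(stderr, "bind failed: %d\n", WSAGetLastError());
            return 1;
        }

        /* ... receive loop goes here ... */

        closesocket(s);
        WSACleanup();
        return 0;
    }

The only difference between the instances is the IP address each one binds to.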
To get multiple IP addresses I have tried two approaches: fitting two NIC adapters, and assigning multiple IP addresses to a single NIC in the advanced TCP/IP network properties. Both give similarly poor performance. I also tried NICs from several different manufacturers, but that didn't help either.
One thing I did notice is that the data seems to arrive more fragmented. With just a single client on a PC, if I send 20 kB of data to the client it almost always receives it all in one chunk. But with two clients running, the data mostly arrives in blocks the size of an Ethernet frame (1500 bytes), so my receive loop has to iterate many more times. I wouldn't expect this on its own to cause such a dramatic performance hit, though.
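To show what I mean by iterating, the receive side is essentially the loop below; the fixed 20 kB message size and the recv_message helper are simplifications of my actual code:

    #include <winsock2.h>
    #include <string.h>

    #define MESSAGE_SIZE (20 * 1024)  /* assumed fixed message size */

    /* Keep reading datagrams until a full message has been assembled.
       Returns bytes received, or -1 on a socket error. */
    static int recv_message(SOCKET s, char *message)
    {
        int total = 0;
        while (total < MESSAGE_SIZE) {
            char chunk[2048];
            int n = recvfrom(s, chunk, sizeof(chunk), 0, NULL, NULL);
            if (n == SOCKET_ERROR)
                return -1;
            if (total + n > MESSAGE_SIZE)
                n = MESSAGE_SIZE - total;       /* clamp to the buffer */
            memcpy(message + total, chunk, n);
            total += n;                         /* one pass per block received */
        }
        return total;
    }

With one client this loop usually completes in a single pass; with two clients it runs around fourteen passes (20 kB / 1500 bytes) per message.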
So I guess my question is: does anyone know why the performance is so much slower, and whether anything can be done to speed it up?
I know I could redesign things so that the server only sends data to one client per PC, and that client could then mirror the data to the other clients on the same PC (sketched below). But that is a major redesign and re-coding effort, so I'd like to keep it as a last resort.
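If it comes to that, the fan-out would look something like this sketch: one "master" client receives the broadcast and re-sends each datagram to its siblings over loopback. The sibling ports 5001 and 5002 are hypothetical:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #include <string.h>
    #pragma comment(lib, "ws2_32.lib")

    /* Re-send one received datagram to the other local clients. The
       traffic stays on the loopback interface, so it never hits the NIC. */
    static void mirror_datagram(SOCKET out, const char *buf, int len)
    {
        static const unsigned short sibling_ports[] = { 5001, 5002 };
        struct sockaddr_in dest;
        int i;

        memset(&dest, 0, sizeof(dest));
        dest.sin_family = AF_INET;
        dest.sin_addr.s_addr = inet_addr("127.0.0.1");

        for (i = 0; i < (int)(sizeof(sibling_ports) / sizeof(sibling_ports[0])); i++) {
            dest.sin_port = htons(sibling_ports[i]);
            sendto(out, buf, len, 0, (struct sockaddr *)&dest, sizeof(dest));
        }
    }

But as I said, wiring that in means reworking both the server and the clients, so it's a last resort.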