-2

Suppose you have a wired network: 20 PCs connected to a 100 Mbit/s switch (matching the onboard Ethernet port speed), just sending test data around. What is the technical explanation for why 20 machines sending test data to each other over this network is slower than a one-to-one transfer?

I know a busy network is slower, but I'm really trying to understand some of the more technical details.

Thanks for any help

  • 3
    "is slower than one to one" [citation needed] – Ignacio Vazquez-Abrams Apr 15 '12 at 01:03
  • I conducted an experiment: I set up a multi-threaded receiver across a wired network and increased the number of machines simultaneously sending data up to 15, and the overall data rate slowed down. I then tested a different method by pairing the computers up, so instead of 15 machines sending to 1, it was 15 machines each sending to another 15. My results showed that on average the 2nd experiment was a lot faster. I'm rather confused about how to explain this. Any ideas? – Ed Briscoe Apr 15 '12 at 01:12
  • *average throughput was a lot higher I mean – Ed Briscoe Apr 15 '12 at 01:13
  • It sounds like something specific to the way you tested or the environment you tested in. Was this TCP or UDP? How did the transmitters pace themselves? – David Schwartz Apr 15 '12 at 02:03
  • I used pcattcp, not too sure of the inner workings of it – Ed Briscoe Apr 15 '12 at 02:30
  • Your results are unusual. It's hard for us to speculate about what accounts for them without being able to see them, without knowing what network technologies were involved (what kind of switch? GigE? Fast Ethernet?), or having any clue what CPUs and operating systems were involved. But it makes sense that the last test would be faster: it wasn't limited by the bandwidth of a single link. – David Schwartz Apr 15 '12 at 11:02

3 Answers

6

Limiting discussion to Ethernet (though other link+physical setups have similar issues), there are basically two reasons why more clients means slower connections.

  1. Backplane speed limitations.

    Even though each port is designed for 100Mbps, and the switch can probably process 100Mbps from one port to another, only very expensive switches have full-mesh backplanes (which means a dedicated full-speed channel from each port to each other port).

  2. Collision avoidance.

    The more clients that are communicating (especially when they need to issue broadcasts), the more likely it is that two stations will transmit at the same time. When this happens, each chooses a random amount of time to wait before retransmitting. On a busy network, it can take multiple tries to get a single frame onto the wire.
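On a shared (half-duplex) segment, that random wait is chosen by binary exponential backoff. Here is a minimal sketch in Python, using the classic 10 Mbps Ethernet slot time of 51.2 µs as an illustrative constant; the function name and loop are mine, not from any library:

```python
import random

SLOT_TIME_US = 51.2  # classic 10 Mbps Ethernet slot time, for illustration

def backoff_delay(collisions: int) -> float:
    """Binary exponential backoff: after the nth collision, wait a random
    number of slot times in [0, 2^min(n, 10) - 1]."""
    max_slots = 2 ** min(collisions, 10)
    return random.randrange(max_slots) * SLOT_TIME_US

# The average wait grows roughly geometrically with successive collisions,
# which is why a busy segment degrades so sharply.
random.seed(1)
for n in range(1, 6):
    avg = sum(backoff_delay(n) for _ in range(20_000)) / 20_000
    print(f"after collision {n}: avg wait ~ {avg:.1f} us")
```

The doubling window is what makes throughput collapse nonlinearly as contention rises: each extra contender raises the collision probability, and each collision roughly doubles the expected wait.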

bonsaiviking
  • Wouldn't #2 not be true on full duplex links using a switch? – gparent Apr 15 '12 at 04:07
  • 2
    gparent: Not usually, no. They would just both transmit and the switch would send them, one after the other, to the destination port. But not all switches are equally good at doing this. – David Schwartz Apr 15 '12 at 12:13
3

Even though the switch may have, e.g., 24 × 100 Mbps ports, it does not necessarily have 2,400 Mbps of throughput capacity.

You are most likely hitting the throughput barrier of your switch.
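As a back-of-the-envelope check (the figures below are hypothetical, not the specs of any particular switch), compare the worst-case aggregate demand of the ports against a couple of possible backplane capacities:

```python
# Hypothetical figures for illustration -- not specs of any real switch.
ports = 24
port_speed_mbps = 100

# Full duplex: every port could in principle send and receive at line
# rate simultaneously, so worst-case demand is ports * speed * 2.
worst_case_demand = ports * port_speed_mbps * 2

print(f"worst-case demand: {worst_case_demand} Mbps")  # -> 4800 Mbps
for backplane_mbps in (4800, 2000):
    verdict = "non-blocking" if backplane_mbps >= worst_case_demand else "blocking"
    print(f"{backplane_mbps} Mbps backplane: {verdict}")
```

If the backplane is below the worst-case demand, simultaneous transfers contend for it, and adding senders lowers everyone's share.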

Frands Hansen
3

Transmission will be constrained by the slowest link. Assuming all devices are 100 Mbps capable and running full duplex:

  • 10 clients sending to one server may see an average transfer rate of about 10 Mbps per client (constrained by the server's link). The overall data rate will not exceed the 100 Mbps rate of the server's link.
  • 10 pairs of hosts may see a 100 Mbps transfer rate for each pair (slower rates may result from capacity limits on the switch). The overall data rate will not exceed the capability of the switch, but could be as high as 1 Gbps.
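The two bullet points reduce to a simple calculation. A sketch (my own helper, assuming an otherwise non-blocking switch, so only the receivers' links constrain throughput):

```python
def per_sender_rate_mbps(senders: int, receivers: int,
                         link_mbps: float = 100.0) -> float:
    """Each receiver's link caps what it can absorb; senders aimed at the
    same receiver share that cap evenly (idealized, ignoring overhead)."""
    senders_per_receiver = senders / receivers
    return min(link_mbps, link_mbps / senders_per_receiver)

print(per_sender_rate_mbps(10, 1))    # 10 clients -> 1 server: 10.0 Mbps each
print(per_sender_rate_mbps(15, 15))   # 15 pairs: 100.0 Mbps each
```

This also matches the experiment in the comments: 15 senders fanning in to one receiver share one 100 Mbps link, while 15 independent pairs each get their own.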

Other factors normally constrain the data rate. The data transfer rate will be the slower of the rate at which the source can provide data and the rate at which the target can consume it. Switches may not be able to transfer data between all ports at the full rate.

In most configurations many ports transfer data at rates well below the speed of the link. Faster links may still be desirable, as link latency will be lower. A 1200-byte packet will take roughly 1 millisecond per hop on a 10 Mbps link, 0.1 millisecond on a 100 Mbps link, and only 0.01 millisecond on a 1 Gbps link. There will be additional latency due to buffering, distance, and the speed of transfers within devices.
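The per-hop figures above are just the frame's serialization delay, which can be computed directly (a sketch; it ignores propagation, queuing, and framing overhead):

```python
def serialization_delay_ms(frame_bytes: int, link_mbps: float) -> float:
    """Time to clock frame_bytes onto a link of the given speed."""
    return frame_bytes * 8 / (link_mbps * 1e6) * 1e3

for mbps in (10, 100, 1000):
    print(f"{mbps:>4} Mbps: {serialization_delay_ms(1200, mbps):.4f} ms")
# -> 0.9600 ms, 0.0960 ms, 0.0096 ms
```

Each 10x jump in link speed cuts the per-hop serialization delay by 10x, which is why faster links help latency even when throughput is nowhere near saturated.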

BillThor