
I was explaining the difference between straight and crossover cables to a friend, and began wondering why on earth the early designers thought it was a good idea to use two different cable types.

Is it just some strange historical artifact that will carry on until the proliferation of Auto-MDI-X, or is there a technical reason for straight cables to exist?

Edit: To clarify, currently straight cables are only needed between computers and switches/hubs. Why weren't switches/hubs designed from the beginning to use crossover cables instead?

zokier

7 Answers


On an RJ-45 connector, there are 8 pins. Originally only 4 were used: a transmit pair (TX+ and TX−) and a receive pair (RX+ and RX−). With a straight-through cable between two like devices, the transmit pins of one end would be connected to the transmit pins of the other device, and the same would be true for the receive pins.

Early networking gear wasn't "smart" enough to notice that data was arriving on pins it considered transmit-only, so it simply didn't listen there. Modern-day GigE gear is smart enough, so this is no longer an issue. The crossover cable was never a design decision in itself, but rather an answer to a previously made design decision.

Edit: To address your question left in the comment -

To simplify the wiring process (both ends could be terminated the same way), networking gear was designed with ports that receive on the pins the PCs transmit on, and vice versa. This meant the bulk of cables produced could be wired identically at both ends. Since the use of a crossover cable is rare, and rarer still with the advent of "uplink" ports and auto-crossover on modern switches, it is the lesser-used cable type.

It really doesn't matter which wiring scheme is used; the problem would remain if the "standard" cable and pinout had been of the crossover variety. Then what we now call a straight-through cable would have been needed to connect devices directly to each other.
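The pinout logic above can be sketched in a few lines of Python. The pin numbers are the real 10/100BASE-T assignments (pins 1-2 transmit, pins 3-6 receive at an MDI port); everything else, function names included, is illustrative:

```python
# 10/100BASE-T pin usage at an MDI (host) port: pins 1-2 = TX pair, 3-6 = RX pair.
TX_PINS = {1, 2}
RX_PINS = {3, 6}

# A cable is a mapping from pins on end A to pins on end B.
STRAIGHT = {1: 1, 2: 2, 3: 3, 6: 6}
CROSSOVER = {1: 3, 2: 6, 3: 1, 6: 2}

def link_works(cable, b_is_mdix):
    """Two ends can talk iff every TX pin on end A lands on a pin the far
    end listens on. An MDI-X port (hub/switch) swaps TX/RX internally,
    which is why a straight cable works from PC to switch."""
    far_end_listens_on = TX_PINS if b_is_mdix else RX_PINS
    return all(cable[p] in far_end_listens_on for p in TX_PINS)

print(link_works(STRAIGHT, b_is_mdix=True))    # PC -> switch: True
print(link_works(CROSSOVER, b_is_mdix=False))  # PC -> PC with crossover: True
print(link_works(STRAIGHT, b_is_mdix=False))   # PC -> PC with straight: False
```

Note that the check is symmetric in practice: if A's transmitter reaches B's receiver, the same cable geometry carries B's transmitter back to A's receiver.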

MDMarra
  • Much better explanation than mine :) – Justin Drury Jul 13 '10 at 21:27
  • I understand the reason for crossover cables, it's quite a clever way to simplify the electronics. But what I don't understand is why use straight cables instead of crossover ones with switches? – zokier Jul 13 '10 at 21:58
  • 1
    I suspect the answer lays in the mists of time, back when ARCNET was more common than Ethernet. But my google-fu is failing me at the moment. – sysadmin1138 Jul 13 '10 at 22:09
  • 3
    Not to put too fine a point on it and strictly for reference, but an ethernet cable uses an 8P8C connector, not an RJ45 connector. Also, before the days of Auto-MDIX, the rule was like devices needed a crossover cable (switch to switch, etc) and unlike devices needed a straight through cable (switch to host). – joeqwerty Jul 13 '10 at 22:32
  • I'd disagree on the point that the problem would remain if switches were designed for crossovers. I made a quick pair of flowcharts to demonstrate the decision process for cable selection. On the left-hand side is the current system, and on the right is what would be if switches used crossover cables. http://img695.imageshack.us/img695/1618/cableflow.png – zokier Jul 13 '10 at 23:00
  • 4
    @joeqwerty, the term 8P8C is generic and applies to any 8 pin plug and socket where the plug has all male connections and the socket has all female connections, whereas RJ45 defines the physical plug and socket itself. Ethernet cables require 8P8C connectors but don't have to use RJ45. However, the sockets we all know as being standard on NICs and switches are of that type. – John Gardeniers Jul 13 '10 at 23:43
  • 1
    @John: You're right but as it stands, an ethernet connection is an 8P8C connector, it may be used in other cabling implementations as well but it it is technically the correct terminology to describe an ethernet connector. An RJ45 connector on the other hand, is not technically the correct terminology. I'm not stating that an 8P8C is exclusive to ethernet, only that it is the technical name for the connector type used in ethernet network connectors. – joeqwerty Jul 14 '10 at 00:44
  • 2
    8P8C is a modular connector (computer) specification that came after the RJ45 (telephone) spec; the two are different (Wiki it if you doubt me). RJ45 defines the shape and 8 pin positions. 8P8C defines the shape, pins, and the number of used pins. Neither specify how/if the wires are connected. TIA/EIA-568-B defines the wiring for standard Ethernet cables. They category system (TIA-568-A for Cat5, TIA-854 for Cat6) the electrical quality of those wires and their physical arrangement. Joe is correct, and others have some valid points. – Chris S Jul 14 '10 at 02:58

Once upon a time, a twisted-pair socket was wired only one way, and the attached electronics couldn't change what each wire did. You were either a network device (hub/bridge/switch/router) or an end device. In order to electrically connect two network devices together, you needed a different cable from the one used to connect an end device to a network device.

And thus the straight-through and cross-over cables were born.

To avoid a second cable type (one that would invariably lose its label and confuse the bejebers out of some network person months or years down the road when they pull it out of the bin), most devices intended to connect to both network devices and end devices had an uplink port that allowed the use of "normal" cables.

It was as simple as that.

Edit: Google-Fu successful. It WAS ARCnet!

Why weren't switches/hubs designed from the beginning to use crossover cables instead?

Back when the 10BASE-T specification was still under consideration, the twisted-pair architecture most common in office networks was ARCnet. 10BASE-T wasn't ratified as an actual standard until 1990, later than I thought. Connecting ARCnet hubs together looks to have required a cable with pairs flipped from the one used to connect endpoint devices.

Since the standards committee would have been made up of veteran network engineers from the various hardware vendors and other interested parties, they had been dealing with the multiple cable problem for years and likely considered it status-quo. It is also possible that the 'draft' devices under development by the vendors also had electrical requirements for the cable, influenced as they were by ARCnet device manufacture. Clearly the committee didn't consider the use of multiple cable types to be enough of a problem to standardize the practice out of existence.

sysadmin1138
  • 1
    I think that the thought process probably goes back even further to the whole DCE/DTE definition with serial cables. (is this the appropriate time for a "git off my lawn" comment???) – Peter M Jul 14 '10 at 13:32
  • I think you're right. It Was The Way It Was Done. Though Jeremy M's comment about crossover cables in the wiring closet makes a lot of sense as well. – sysadmin1138 Jul 14 '10 at 13:56
  • Yep, and my kit used to include DB15 to DB9 connectors, m/f gender changers as well as null modem cables and standard serial cables and RS232 break out blocks and ones with pretty lights to show what lines were active - all so you could get the correct number of tx/rx swap transitions across the serial link. – Peter M Jul 14 '10 at 14:39

The reason straight cables are used is that they are easier to manufacture, as both ends are the same. Cross-over cables were originally used when chaining hubs, because the designers wanted the link port to be different from the other ports. You need to bear in mind that back then things could give strange results, or even no results, if you didn't use the link ports as intended.

The next step was to provide a switch on the hubs so that you could use either straight or cross-over cables for chaining. These days it's all done with intelligent chips.

Of course we still need cross-over cables for directly linking most network devices without using switches or hubs; otherwise the two transmitters would be connected together, as would the two receivers. The cross-over cable correctly connects each transmitter to the far end's receiver.
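Before Auto-MDI-X this boiled down to a simple rule of thumb: "like" devices needed a crossover and "unlike" devices a straight-through. A minimal sketch, using the standard MDI/MDI-X port-type terms (the function itself is made up for illustration):

```python
# Pre-Auto-MDI-X cable selection. Hosts and routers expose MDI ports;
# hubs and switches expose MDI-X ports (TX/RX swapped internally).
MDI = "MDI"      # PC, router
MDIX = "MDI-X"   # hub, switch

def cable_needed(port_a, port_b):
    # When both ends are the same port type, neither end does the swap,
    # so the cable itself must cross the pairs.
    return "crossover" if port_a == port_b else "straight-through"

print(cable_needed(MDI, MDIX))   # PC to switch: straight-through
print(cable_needed(MDI, MDI))    # PC to PC: crossover
print(cable_needed(MDIX, MDIX))  # switch to switch: crossover
```

This is exactly the like/unlike rule mentioned in the comments above, just written down as a table.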

John Gardeniers

In most situations, there will be several cables in the chain: one between the hub/switch and the patch panel in the wiring closet, the premise cabling between that patch panel and the wall port, and then one between the wall port and the device using the network. With straight-through cabling, the number and type of these connections does not need to be considered when selecting cables. With crossovers in each place, one would need to be sure that there were an odd number of cables in the chain, and things like adding a coupler to extend a cable would require extra thinking. In the odd case of needing to connect a switch to a wall port, just use a single crossover cable. As for the backend, back in the days of coax uplinks the crossover wasn't an issue, and the AUI-to-10BASE-T uplink adapters had an MDI/MDI-X switch on them.

The same concept applies to fiber patches between closets. Most are wired straight through, which makes things easier when patching directly through multiple junction points. At one end (hopefully the up or down side is consistent across the environment), cross the A and B fibers to get a connection.
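The bookkeeping this answer describes reduces to a parity check: the end-to-end path is crossed if and only if an odd number of segments in the chain are crossed. A minimal sketch (the function name is illustrative):

```python
def path_is_crossed(segments):
    """segments: list of booleans, True where that cable or patch
    is a crossover. The swaps cancel in pairs, so only the parity
    of the count matters."""
    return sum(segments) % 2 == 1

# PC -> patch cable -> premise run -> patch cable -> switch, all straight:
print(path_is_crossed([False, False, False]))  # False: the switch port does the swap
# Same chain with one crossover patch cable at the wall port:
print(path_is_crossed([False, True, False]))   # True
# Two crossovers cancel each other out:
print(path_is_crossed([True, True]))           # False
```

With all-straight premise cabling the parity is always even, so adding or removing a coupler or patch segment never changes whether the link works.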

Jeremy M

Routers and switches are now smart enough not to require one or the other. Only when going from PC -> hub or PC -> PC is a specific cable type required.

As to why they were required (and this is just what I remember): computers transmit on one pair and receive on another, so to connect two machines directly you had to cross the pairs so that each machine's transmitter fed the other's receiver.

Justin Drury
  • To specify: Currently straight cables are only needed between computers and switches/hubs. Why weren't switches/hubs designed from the beginning to use crossover cables instead? – zokier Jul 13 '10 at 21:40
  • I think most modern NICs in PCs do auto-crossover as well, especially if they're gigabit, as it's encouraged in the 1000BASE-T standard or draft or something to that effect. I know the fast ethernet (100BASE-T) port in my old laptop does it as well. – Oskar Duveborn Jul 13 '10 at 23:08
  • "...are only needed between computers and switches": isn't this by far most common scenario so you'd rather phrase it "crossover cables are only needed between computers..."? – Oskar Duveborn Jul 13 '10 at 23:10
  • @Oskar: You need crossover cables also between switches – zokier Jul 13 '10 at 23:31
  • Yes of course but it still feels like the exception, connecting the same devices together. Hubs and switches to connect together usually had separate uplink ports before auto-crossover took over completely anyway? – Oskar Duveborn Jul 14 '10 at 08:42

Once upon a time, all this was expensive (I remember being very happy when Ethernet got down near $100 a port), so the goal was to keep it all sane: either use a crossover cable, or perhaps a physical switch, or two physical ports for the same logical port, one wired normally and one as a crossover (don't use both).

It used to be hard and expensive, so things were kept as simple as possible.

Ronald Pottol

Even when the early auto-sense Ethernet PHY transceivers became available, I think a good proportion of the cheaper ones were not "seamless".

If the first packet after link-up was received on what the PHY had initially assumed and configured to be the transmit line, then the TX/RX functions would be swapped over, and order was restored from then on.

The drawback in some devices was that although the first packet sensed on TX would trigger the swap, its contents were otherwise "unreadable" and so the packet would be dropped. Transceivers like this relied on retransmission in the upper layers, because the first received packet after link-up would be lost whenever there was an initial TX/RX mismatch.
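The behavior described can be modeled as a tiny state machine. This is purely an illustration of the failure mode described above, not any real PHY's interface:

```python
class EarlyAutoSensePhy:
    """Toy model: if the first frame after link-up arrives on the line the
    PHY assumed was TX, it swaps TX/RX but drops that frame, relying on
    upper-layer retransmission to recover it."""

    def __init__(self):
        self.swapped = False   # current TX/RX assignment
        self.delivered = []    # frames passed up the stack

    def frame_arrives(self, on_assumed_tx_line, frame):
        if on_assumed_tx_line and not self.swapped:
            self.swapped = True   # fix the assignment...
            return                # ...but the triggering frame is lost
        self.delivered.append(frame)

phy = EarlyAutoSensePhy()
phy.frame_arrives(True, "frame-1")   # TX/RX mismatch: swap happens, frame dropped
phy.frame_arrives(False, "frame-2")  # after the swap, frames flow normally
print(phy.delivered)                 # ['frame-2']
```

A TCP sender would simply retransmit "frame-1", which is why the flaw was mostly invisible in practice.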

I think (hope) this effect is absent in modern auto-sense devices.