A question I've been asking myself lately: what design choices led to x86 being a little-endian architecture instead of a big-endian one?

- Design tradeoffs. See http://en.wikipedia.org/wiki/Endianness#Optimization for a couple of examples. – Jim Mischel Mar 03 '11 at 19:42
3 Answers
Largely, for the same reason you start at the least significant digit (the right end) when you add—because carries propagate toward the more significant digits. Putting the least significant byte first allows the processor to get started on the add after having read only the first byte of an offset.
After you've done enough assembly coding and debugging you may come to the conclusion that it's not little endian that's the strange choice—it's odd that we humans use big endian.
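
To make the carry argument concrete, here is a minimal sketch in C (my own illustration, not how any particular ALU or microcode is actually implemented) of adding two multi-byte integers stored least significant byte first. The loop can emit the low byte of the result as soon as the low bytes of the operands have been read, because the carry only ever moves toward more significant bytes:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Add two n-byte little-endian integers byte by byte.
 * The least significant byte sits at the lowest address, so the loop
 * starts producing output after reading only the first byte of each
 * operand; the carry propagates toward higher addresses. */
static void add_le(const uint8_t *a, const uint8_t *b, uint8_t *sum, size_t n)
{
    unsigned carry = 0;
    for (size_t i = 0; i < n; i++) {
        unsigned t = a[i] + b[i] + carry;
        sum[i] = (uint8_t)t;   /* low 8 bits of the partial sum */
        carry  = t >> 8;       /* carry into the next, more significant byte */
    }
}

int main(void)
{
    uint8_t a[2] = { 0xFF, 0x01 };   /* 0x01FF, stored low byte first */
    uint8_t b[2] = { 0x01, 0x00 };   /* 0x0001 */
    uint8_t s[2];
    add_le(a, b, s, 2);
    printf("0x%02X%02X\n", (unsigned)s[1], (unsigned)s[0]);   /* prints 0x0200 */
    return 0;
}
```

With big-endian storage, the same loop would either have to read to the end of the number before starting, or walk the bytes backwards.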

- A side note: humans mostly read numbers and only sometimes use them for calculation. Furthermore, we often don't need the exact numbers when dealing with large quantities; taking that into account, big endian is a sensible choice for humans. – qff Aug 29 '12 at 03:22
- @qff Are you saying that in big endian, because one can read left to right from the most significant digit, it makes it easier to comprehend the order of magnitude of the number? What about for people who normally read from right to left? – L̲̳o̲̳̳n̲̳̳g̲̳̳p̲̳o̲̳̳k̲̳̳e̲̳̳ Feb 07 '13 at 17:01
- Arabic is written from right to left, which might explain the orientation of our numbering system: for them, it is indeed little endian! – isekaijin Jun 21 '13 at 19:12
- @pyon, I tried to check that just now. Google Translate indeed renders numbers in Arabic the same as the originals, but Arabic calculators (online and in YouTube videos) show numbers being entered starting with the most significant digit. I'm confused... – Marisha Sep 29 '19 at 07:55
- @Marisha That's because the same mindless cultural-copy-paste that gave the European world endian-flipped Arabic numerals happened a second time when the Western world's invention of the modern digital calculator was reskinned for Arabic users. – mtraceur Nov 14 '19 at 19:21
- @Marisha Note the historical ordering: 1) Arabic numerals invented, little-endian; 2) European cultures copy-paste Arabic number and math notation without adjusting for opposite written language direction, causing endian-flip; 3) modern American culture derives from European culture, including the big-endian number notation; 4) calculators are invented, and their user-interface and input style (most significant digit first) becomes normalized, first in the American and European world; 5) modern calculators spread into the Arabic world; most people involved take the input order for granted. – mtraceur Nov 14 '19 at 19:26
- @pyon Joking or not, that little fact can lead to a lot of deep insight - about history, about user interfaces, about how things can spread without the adjustments that would've made sense or been optimal in isolation because of external factors like backwards or cross compatibility, about how people find things intuitive mostly because they've spent enough time warping their mind to it by getting used to it rather than any innate reason, and about how people often tolerate counter-intuitive aspects of de-facto "standard" interfaces enough that we don't hear about it nor see change. – mtraceur Nov 14 '19 at 19:37
- @mtraceur But... "Arabic" numerals were not invented by the Arabs, they were invented in India, where most scripts are left-to-right, including the Sanskrit texts that first used Hindu numerals. That makes your "historical ordering" quite ahistorical, or at least misleadingly incomplete. – Michal Jul 07 '20 at 05:07
- @Michal Fascinating. Good to know. I'd be curious to learn if the order of digits relative to language direction stayed the same or changed in that transition. But this doesn't actually undermine my point at all. If there were two cross-language exports of those numerals, then it just reinforces the greater point I'm making, even if my historical ordering is incomplete. (Also note the context: "my" historical ordering comment is just making explicit the ordering that was *already presented by other prior comments*, because another comment seemed to be missing what that ordering would imply). – mtraceur Jul 08 '20 at 18:44
- If little endian is so natural, then why did so many earlier computers use big endian? – Nate Eldredge Sep 26 '20 at 23:54
- +1 because I finally see at least one theoretical advantage of using little endian: arithmetic with numbers larger than one word. But this'd be quite niche, right? Does it ever help with numbers/data `<=` one word? Also, if choosing little endian architecture, ie: `number = [LSB] ... [MSB]` (LSB = least significant ***byte***), wouldn't it have all those advantages if `[LSB] = [LSb] ... [MSb]`, where LSb = least significant ***bit***? ie. If you're gonna pick an endianness, why be inconsistent about it? – Elliott Aug 19 '21 at 00:27
- @mtraceur, you forgot a step in the historical ordering: 0) India invents numerals that get borrowed by the Arabic world. Although I don't know what order they originally used. – SO_fix_the_vote_sorting_bug Feb 24 '22 at 19:02
This is quite archeological, but it most likely was not Intel's choice. Intel designed processors with backward compatibility as a primary concern, making it easy to mechanically translate assembly code from the old architecture to the new one. That turns the clock back from the 8086 to the 8080 to the first microprocessor where endianness mattered, the Intel 8008.
That processor was started when CTC (later renamed Datapoint) came to Intel to ask for help with their data terminal product. Originally designed by Victor Poor and Harry Pyle, the terminal had a processor built from MSI logic (many chips). They asked Intel to provide them with a storage solution using 512-bit shift registers.
That was not Intel's favorite kind of product; they took on these custom design jobs to survive the ramp-up time for their 1024-bit RAM chip. Ted Hoff, Stan Mazor and Larry Potter looked at the design and proposed an LSI processor with RAM instead. That eventually became the 8008. Poor and Pyle are credited with designing the instruction set.
That they chose little-endian is supported by this interview with Poor. The interview skips through the subject rather quickly and is rather scatter-shot, but the relevant part is on page 24:
Shustek: So, for example, storing numbers least significant byte first, came from the fact that this was serial and you needed to process the low bits first.
Poor: You had to do it that way. You had no choice.
The "had no choice" remark is odd, that appears to only apply to the bit-serial design of the MSI processor. Also the reason they shopped for shift registers instead of RAM. It comes up again at page 34:
Hendrie: Do you remember any of the push backs from them about the design or any of those details...
Poor: One of them was the one bit versus 8-bit. They wanted an 8-bit wide part and, ultimately, that's how they built it.
Poor: But it was still recirculating. But, you see, there are interesting points whether there's going to be a big end or a little end part could have been changed at that point but we didn't. We just left it...
Hendrie: Even after you went to eight bits?
Poor: Right. Which is why the Intel product line is that way today
Stan Mazor of Intel, who worked on the 4004 and 8008 designs, elaborates on the "push back" in the Oral History Panel on Intel 8008 Microprocessor:
And lastly, the original design for Datapoint... what they wanted was a [bit] serial machine. And if you think about a serial machine, you have to process all the addresses and data one-bit at a time, and the rational way to do that is: low-bit to high-bit because that’s the way that carry would propagate. So it means that [in] the jump instruction itself, the way the 14-bit address would be put in a serial machine is bit-backwards, as you look at it, because that’s the way you’d want to process it. Well, we were gonna built a byte-parallel machine, not bit-serial and our compromise (in the spirit of the customer and just for him), we put the bytes in backwards. We put the low- byte [first] and then the high-byte. This has since been dubbed “Little Endian” format and it’s sort of contrary to what you’d think would be natural. Well, we did it for Datapoint. As you’ll see, they never did use the [8008] chip and so it was in some sense “a mistake”, but that [Little Endian format] has lived on to the 8080 and 8086 and [is] one of the marks of this family.
So, Intel wanted to build a byte-parallel CPU with 8 separate pins for data bus access. The reason Intel insisted on the compromise is explained in "Intel Microprocessors: 8008 to 8086" by Stephen P. Morse et al.:
This inverted storage, which was to haunt all processors evolved from 8008, was a result of compatibility with the Datapoint bit-serial processor, which processes addresses from low bit to high bit. This inverted storage did have a virtue in those early days when 256 by 8 memory chips were popular: it allowed all memory chips to select a byte and latch it for output while waiting for the six high-order bits which selected the chip. This speeded up memory accesses.
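To illustrate what that passage describes, here is a rough model in C (my own sketch, not actual 8008 or Datapoint hardware) of a 14-bit address space built from 256-by-8 chips: the low address byte selects a location inside every chip, and the high 6 bits pick which chip drives the bus, so sending the low byte first lets every chip start decoding early.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical model: a 16 KB (2^14) address space made of 64 chips
 * of 256 x 8 bits each. The low address byte selects the location
 * inside every chip; the high 6 bits select which chip responds.
 * If the low byte arrives first, each chip can decode and latch its
 * byte while the chip-select bits are still in transit. */
enum { CHIP_SIZE = 256, NUM_CHIPS = 64 };

static uint8_t memory[NUM_CHIPS][CHIP_SIZE];

static uint8_t read14(uint16_t addr)
{
    uint8_t in_chip = addr & 0xFF;         /* low byte, sent first   */
    uint8_t chip    = (addr >> 8) & 0x3F;  /* high 6 bits, sent last */
    return memory[chip][in_chip];
}

int main(void)
{
    memory[3][0x7F] = 42;                               /* chip 3, offset 0x7F */
    printf("%u\n", (unsigned)read14((3u << 8) | 0x7F)); /* prints 42 */
    return 0;
}
```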
Ultimately CTC did not use the 8008; it was finished a year too late, and they had already implemented the MSI processor by then. The microprocessor design was certainly CTC's intellectual property; however, they traded the rights to it to Intel for the design cost. Bit of a mistake :) Lawsuits about patent rights followed later.
So, as told, Intel ended up with little-endian because of the way the Datapoint's bit-serial processor worked.

- Hmm, that's a very good answer. If I understand correctly, with a serial design it would just waste CPU effort to traverse the whole number in every operation if it were big-endian. So a second question: is little-endian still a requirement? Serial ports are legacy now, aren't they? Or is there any other reason that forces a design to be little-endian? – FZE Mar 28 '16 at 13:32
- It is certainly a requirement to stick with the endianness choice to keep programs compatible across architecture changes. Serial ports are not exactly legacy yet; they are still common in embedded designs, and many USB and Bluetooth device vendors expose an interface to their device through the serial port API by emulating a serial port in their device driver. – Hans Passant Mar 28 '16 at 13:45
- Serial buses in general made a comeback in the last decade: parallel ports and everything else replaced by USB, PCI replaced by PCI Express (every lane is a separate serial link), IDE and SCSI replaced by SATA and SAS. HDMI and DisplayPort are also serial protocols, IIRC. RS-232 might be obsolete, but serial in general is not by any means. IDK what the endian choices are for any of the serial protocols I mentioned, though. – Peter Cordes Mar 28 '16 at 15:18
- I think the above comments are confusing serial ports and a serial processor. The Datapoint 2200 had a serial processor that operated on one bit at a time, with a 1-bit bus, a 1-bit ALU, and serial shift-register memory. This is unrelated to serial ports. – Ken Shirriff Oct 17 '20 at 21:39
It reflects the difference between considering memory to always be organized a byte at a time versus considering it to be organized a unit at a time, where the size of the unit can vary (byte, word, dword, etc.)
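
One concrete consequence, shown in the small C sketch below (my own example, assuming a little-endian machine such as x86), is that the same address yields the same numeric value no matter how wide a unit you read from it, because the least significant byte always sits at the lowest address:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    uint32_t value = 0x42;   /* small value stored in a 4-byte unit */
    uint16_t half;
    uint8_t  byte;

    /* On a little-endian machine the least significant byte of 'value'
     * is at the lowest address, so narrower reads from the same address
     * still see 0x42. On a big-endian machine they would see the
     * high-order (zero) bytes instead. */
    memcpy(&half, &value, sizeof half);
    memcpy(&byte, &value, sizeof byte);

    printf("u32=0x%X  u16=0x%X  u8=0x%X\n",
           (unsigned)value, (unsigned)half, (unsigned)byte);
    /* little-endian output: u32=0x42  u16=0x42  u8=0x42 */
    return 0;
}
```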

- The question isn't why endianness is a thing, it's why Intel picked little-endian instead of the more common(?) at the time big endian. – Peter Cordes Jan 27 '19 at 01:44