I was wondering what is the reason behind branding a MCU as 32 bit or 64 bit. In the simplistic architecture like Harvard or Neumann architecture it used to be width of data bus. But in the market I have seen MCUs which have 64 bit data lines and yet marketed as 32 bit MCUs. Can somebody explain?
Like most things, these numbers come from marketing and as such can have any meaning they like to try to make a sale. No surprise that 32-bit processors (using register size as the definition there) now have 64-bit data busses, simply because that is where things are with memory. – old_timer Jan 08 '14 at 15:46
1 Answer
It is not true that the bit width of a processor was defined by the data bus width. The Intel 8088 (used in the original IBM PC) was a 16-bit device with an 8-bit data bus, and the Motorola 68008 (Sinclair QL) was a 32-bit device with an 8-bit bus.
A processor's bit width is primarily defined by the width of its instruction-set operands and of its registers (necessarily the same).
When most devices had matching bus and instruction/register widths (i.e. prior to about 1980), there was no need for a distinction, and whether the term referred to bus width or register/instruction width was of little consequence. When narrow-bus versions of wide instruction/register devices were introduced, that created a marketing dilemma: the QL was widely advertised as having a 32-bit processor despite its 8-bit bus, while the 8088 was sometimes referred to as an 8/16-bit part. The 68008 could trivially perform a 32-bit operation in a single instruction - the fact that it took four bus cycles to fetch the operand was transparent to software, and the total number of instruction and data fetch cycles was still far fewer than an 8-bit processor would need to perform the same 32-bit operation.
Another interesting architecture in this context is ARM architecture v4, which supports a 16-bit instruction encoding known as "Thumb" in addition to the 32-bit ARM instruction set. In Thumb mode the instruction encodings are 16 bit while the registers remain 32 bit; this gives higher code density than ARM mode. Where an external memory interface is used, most ARMv4 parts support either a 16- or 32-bit external bus. Either ARM or Thumb code may be used with either bus, but when a 16-bit bus is implemented, Thumb code generally runs more efficiently than the 32-bit instruction set because each instruction or operand fetch takes a single bus cycle.
Given the increasing variety of architectures, with differing instruction/register and bus widths, it makes sense now to characterise an architecture by its instruction/register width.

+1 for Sinclair QL! I remember many arguments about whether the 8088 (and the Motorola 6809...) were 8 or 16 bit processors. – Roddy Jan 08 '14 at 21:15
The 8088 is considered an 8/16 bit architecture. Studies performed around 1980 questioning whether IBM should have used the 8086 determined that the 8088 was only 20-25% slower than the 8086, so it was a good cost effective decision. Also see [this](http://forwardthinking.pcmag.com/chips/286228-why-the-ibm-pc-used-an-intel-8088). – wallyk Jan 08 '14 at 21:38
@wallyk: Both parts I mentioned were designed to allow reduced-cost systems compared to their full-width-bus counterparts. The term "8/16 bit" is probably what dwelch referred to as "marketing" in his comment. The 8088 has the same instruction and register set as the 8086 and will run 8086 code unmodified. It is, as I said, a 16-bit processor with an 8-bit bus - that is what the "8/16" refers to. In software terms (which is what this site is primarily concerned with) it is certainly a 16-bit part. – Clifford Jan 08 '14 at 22:05