The short answer is that the x86 family of processors was designed from the start to be backward compatible. The logic circuits that perform arithmetic and read/write operations in new CPUs can still carry out instructions designed for older CPUs while also supporting newer ones, such as 64-bit add and subtract.
If you want more history...
The x86 instruction set dates back to the late 1970s, beginning with Intel's first 16-bit processor, the 8086. The general-purpose 16-bit (2-byte) registers on this CPU were called `AX`, `BX`, `CX`, and `DX`. The 8086 also allowed access to the high and low bytes of each register. For example, you could access the lower 8 bits of `AX` using the name `AL`, or the upper 8 bits using `AH`.
When Intel started developing new processors with new features, it wanted them to be backward compatible with the 8086 and every processor that came after it. Next in line came the 80186, the 80286, and the 80386, the last of which was Intel's first 32-bit processor.
Of course, all the registers on the 80386 had to be 32 bits wide, but it also had to be backward compatible with the older x86 processors. So rather than replace the registers, Intel merely extended the existing ones to `EAX`, `EBX`, `ECX`, ...etc. (the `E` meaning "extended"). The `AX` register is simply the lower 16 bits of the `EAX` register, and is still accessible under its old name.
The same logic was followed for the 64-bit extension of the architecture (introduced by AMD as x86-64 and later adopted by Intel): the 32-bit `EAX` was extended to the 64-bit `RAX`, and so on. Current x86-64 assembly can still perform arithmetic operations on 32-bit registers using instructions like `addl`, `subl`, `andl`, `orl`, ...etc. (in AT&T syntax), with the `l` standing for "long", which is 4 bytes/32 bits. 64-bit arithmetic is done with `addq`, `subq`, `andq`, `orq`, ...etc., with the `q` standing for "quadword", which is 8 bytes/64 bits.
EDIT: This PDF looks like it gives a good introduction to the differences between the 32-bit and 64-bit x86 architectures.