128-bit computing
In computer architecture, 128-bit integers, memory addresses, or other data units are those that are 128 bits (16 octets) wide. Also, 128-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers, address buses, or data buses of that size.
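Since commodity hardware lacks 128-bit general-purpose registers, a 128-bit quantity is often represented in software as two 64-bit halves. The following Rust sketch (illustrative only, not tied to any particular processor or library) shows such a representation and addition with carry propagation between the halves.

```rust
// Illustrative sketch: a 128-bit unsigned integer modeled as two 64-bit
// halves, a common software representation on 64-bit machines.
#[derive(Clone, Copy, Debug, PartialEq)]
struct U128 {
    lo: u64, // least-significant 64 bits
    hi: u64, // most-significant 64 bits
}

impl U128 {
    // Add two 128-bit values, propagating the carry out of the low half
    // into the high half (wrapping on overflow of the full 128 bits).
    fn wrapping_add(self, other: U128) -> U128 {
        let (lo, carry) = self.lo.overflowing_add(other.lo);
        let hi = self.hi.wrapping_add(other.hi).wrapping_add(carry as u64);
        U128 { lo, hi }
    }
}

fn main() {
    // (2^64 − 1) + 1 = 2^64: the low half wraps to 0 and carries into the high half.
    let x = U128 { lo: u64::MAX, hi: 0 };
    let one = U128 { lo: 1, hi: 0 };
    assert_eq!(x.wrapping_add(one), U128 { lo: 0, hi: 1 });
}
```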
General home computing and gaming utility emerged at the 8-bit word size (but not at 1-bit or 4-bit widths), as 2⁸ = 256 distinct values per word, a natural unit of data, became possible. Early 8-bit CPUs (the Zilog Z80, MOS Technology 6502 and Intel 8088, introduced 1976–1981 in machines from Commodore, Tandy Corporation, Apple and IBM) thus inaugurated the era of personal computing, although many 16-bit CPUs already existed in the mid-1970s. Over the next 30 years, the shift to 16-bit, 32-bit and 64-bit computing allowed, respectively, 2¹⁶ = 65,536, 2³² = 4,294,967,296 and 2⁶⁴ = 18,446,744,073,709,551,616 unique values per word, each step offering a meaningful advantage until 64 bits was reached.

The advantage evaporates in the move from 64-bit to 128-bit computing: the number of possible values in a register grows from roughly 18 quintillion (1.8×10¹⁹) to 340 undecillion (3.4×10³⁸), but so many unique values are never utilized. A register that can store 2¹²⁸ values therefore offers no advantage over 64-bit computing for home computing or gaming, while CPUs with a larger word size require more circuitry, are physically larger, draw more power and generate more heat. Thus, there are currently no mainstream general-purpose processors built to operate on 128-bit integers or addresses, although a number of processors do have specialized ways to operate on 128-bit chunks of data; such uses are given below.
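As an illustration of such 128-bit support, many modern compilers synthesize 128-bit integer arithmetic on 64-bit hardware by lowering it to sequences of 64-bit instructions. The Rust sketch below (illustrative only, using the language's built-in u128 type) prints the value counts discussed above and performs a multiplication that would overflow any 64-bit register.

```rust
// Illustrative sketch: Rust's built-in u128 type is one example of software
// support for 128-bit data on 64-bit hardware; the compiler lowers u128
// operations to sequences of 64-bit instructions.
fn main() {
    // Number of distinct values per register width: 2^16, 2^32, 2^64.
    println!("16-bit: {}", 1u128 << 16); // 65,536
    println!("32-bit: {}", 1u128 << 32); // 4,294,967,296
    println!("64-bit: {}", 1u128 << 64); // 18,446,744,073,709,551,616 ≈ 1.8×10^19

    // 2^128 itself does not fit in a u128, so print the maximum value instead.
    println!("128-bit max: {} (= 2^128 − 1 ≈ 3.4×10^38)", u128::MAX);

    // 128-bit arithmetic that would overflow a 64-bit register:
    let a = u64::MAX as u128;
    println!("(2^64 − 1)^2 = {}", a * a); // fits comfortably in 128 bits
}
```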