I see `u8`, `u16`, `u32`, and `u64` data types being used in kernel code, and I am wondering why there is a need to use `u8`, `u16`, `u32`, or `u64` rather than `unsigned int`?
- Because that will only map to *one* of your list. And you can't be sure which one. – Jongware Jun 17 '15 at 15:51
- I've mostly seen the standard typedefs `uint8_t`, `uint16_t`, et al. – Keith Thompson Jun 17 '15 at 15:54
- I have `linux/types.h` included. – Jun 17 '15 at 15:56
- What is the shorthand for a signed 32-bit value? – Gábor Mar 19 '22 at 13:20
3 Answers
Often when working close to the hardware, or when trying to control the size and format of a data structure, you need precise control over the size of your integers.

As for `u8` vs `uint8_t`: this is simply because Linux predates `<stdint.h>` being available in C. It is technically a C99-ism, but in my experience it is available on most modern compilers even in their ANSI-C / C89 modes.
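To illustrate the size/format point, here is a minimal sketch (a hypothetical on-wire header, not taken from any real driver) where the layout is only predictable because every field has an exact width; with `unsigned int` or `unsigned long` the layout would change between 32-bit and 64-bit builds:

```c
#include <stdint.h>

/* Hypothetical packet header: the layout is predictable only because
 * every field has an exact, architecture-independent width. */
struct packet_header {
    uint8_t  version;   /* always 1 byte  */
    uint8_t  flags;     /* always 1 byte  */
    uint16_t length;    /* always 2 bytes */
    uint32_t sequence;  /* always 4 bytes */
} __attribute__((packed)); /* GCC/Clang extension: no padding between fields */
```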

- 9,052
- 6
- 38
- 56
- True... but it gets super annoying when you need to mix & match libraries and everyone tries to define their own known-width types, so you'll have `U8`, `u8`, `uint8`, `BYTE`, `UINT8`, and `uint8_t` all in the same file. You can even have potential conflicts that end up generating warnings, most commonly with the 32- and 64-bit types, which may have multiple valid ways to typedef them on a given platform. For new code, please, please, please just stick with the stdint.h types :). – Brian McFarland Jun 17 '15 at 16:19
- I am using `u8`, `u16`, etc. in kernel space with `linux/types.h`. Is `stdint.h` for user space? – Jun 17 '15 at 17:09
- You can use the standard names (`uint8_t`, etc.) in kernel mode too. Typedefs for those are in `linux/types.h` as well. – Gil Hamilton Jun 17 '15 at 17:28
- @BrianMcFarland as long as `u8` etc. are your *own* typedefs that you can guarantee are identical to the standard ones, you'll be fine though. As Gil Hamilton said, [the kernel does this](http://lxr.free-electrons.com/source/include/linux/types.h). – o11c Jun 17 '15 at 18:50
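Following the advice in the comments above, a minimal user-space sketch (hypothetical aliases, not the kernel's own header) would define the short names directly on top of `<stdint.h>` so they cannot disagree with the standard types:

```c
#include <stdint.h>

/* Short kernel-style aliases built on the standard fixed-width types,
 * so they are identical to uint8_t and friends by construction. */
typedef uint8_t  u8;
typedef uint16_t u16;
typedef uint32_t u32;
typedef uint64_t u64;
```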
Adding my 10 cents to this answer:

`u64` means an 'unsigned 64-bit' value, so, depending on the architecture where the code will run/be compiled, it must be defined differently in order to really be 64 bits long.

For instance, on an x86_64 machine an `unsigned long` is 64 bits long, so `u64` for that machine could be defined as follows:
typedef unsigned long u64;
The same applies for `u32`. On an x86_64 machine, `unsigned int` is 32 bits long, so `u32` for that machine could be defined as follows:
typedef unsigned int u32;
You'll generally find the `typedef` declarations for these types in a `types.h` file that corresponds to the architecture you're compiling your source for.
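As a rough sketch of what such an architecture-specific header might contain on an LP64 platform such as x86_64 (an illustration, not the kernel's actual header), the typedefs can be paired with C11 compile-time checks that the widths really are what the names promise:

```c
/* Hypothetical arch-specific types.h for an LP64 platform (e.g. x86_64).
 * On another architecture the underlying C types would change, but the
 * names keep the same guaranteed widths. */
typedef unsigned char  u8;
typedef unsigned short u16;
typedef unsigned int   u32;
typedef unsigned long  u64;

/* Fail the build immediately if an assumption about a width is wrong. */
_Static_assert(sizeof(u8)  == 1, "u8 must be 1 byte");
_Static_assert(sizeof(u16) == 2, "u16 must be 2 bytes");
_Static_assert(sizeof(u32) == 4, "u32 must be 4 bytes");
_Static_assert(sizeof(u64) == 8, "u64 must be 8 bytes");
```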

Sharing what I learned about this question recently.

The reason we need explicitly sized types such as `u32` is that the normal C data types do not have the same size on all architectures. For example, `long` and pointers are 4 bytes on a 32-bit platform but 8 bytes on 64-bit Linux. Using `u32` guarantees that you get a 4-byte integer.
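A minimal sketch (assuming a hosted C environment with `<stdint.h>`) that makes the size differences visible on whatever machine you compile it on:

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* These vary between 32-bit and 64-bit builds (and 'long' also
     * differs between LP64 Linux and LLP64 Windows). */
    printf("sizeof(int)      = %zu\n", sizeof(int));
    printf("sizeof(long)     = %zu\n", sizeof(long));
    printf("sizeof(void *)   = %zu\n", sizeof(void *));

    /* These are fixed by their names on every platform. */
    printf("sizeof(uint32_t) = %zu\n", sizeof(uint32_t));
    printf("sizeof(uint64_t) = %zu\n", sizeof(uint64_t));
    return 0;
}
```

On 64-bit Linux this typically prints 4, 8, and 8 for the first three lines; on a 32-bit build, `long` and pointers drop to 4 bytes, while `uint32_t` and `uint64_t` stay 4 and 8 bytes everywhere.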
