From ISO/IEC 9899:
7.18.1.2 Minimum-width integer types
1 The typedef name int_leastN_t designates a signed integer type with a width of at least N, such that no signed integer type with lesser size has at least the specified width. Thus, int_least32_t denotes a signed integer type with a width of at least 32 bits.
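For example (a minimal sketch; which underlying type int_least32_t actually maps to is implementation-defined):

    #include <inttypes.h>  /* int_least32_t, PRIdLEAST32 */
    #include <stdio.h>

    int main(void)
    {
        /* Guaranteed to be at least 32 bits wide; the underlying
           type (int, long, ...) is up to the implementation. */
        int_least32_t x = 2147483647;  /* always fits */
        printf("sizeof(int_least32_t) = %zu bytes\n", sizeof x);
        printf("x = %" PRIdLEAST32 "\n", x);
        return 0;
    }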
Why should I ever use these types?
When I'm deciding what type to use for a variable, I ask myself: "What is the biggest value it could ever hold?"
Then I find an answer, check which is the lowest 2^n greater than that value, and take the matching exact-width integer type.
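For example (the maximum value 100000 is made up for illustration):

    #include <stdint.h>

    /* Largest value this will ever hold: 100000 (assumed).
       The lowest power of two above it is 2^17 = 131072, so I
       need at least 17 value bits; the smallest exact-width
       type that covers that is int32_t. */
    int32_t counter = 0;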
So in this case I could also use a minimum-width integer type. But why? As I already know: the value will never be greater. So why take something that could sometimes cover even more than I need?
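Side by side, the two options look like this (a sketch, assuming <stdint.h> is available):

    #include <stdint.h>

    int32_t       a;  /* exactly 32 bits wide */
    int_least32_t b;  /* at least 32 bits wide -- possibly wider */

Both can hold my value, so the exact-width type seems like the obvious pick.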
All other cases I can imagine seem invalid, e.g.:
"I have a type that will be at least size of..." - The implmentation can't know what will be the largest (for example) user input I will ever get, so adjusting the type at compile time won't help.
"I have a variable where I can't determine what size of values it will be holding on run time."
-So how the compiler can know at compile time? -> It can't find the fitting byte size, too.
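Here is that second case as a minimal sketch (the input handling is made up for illustration): the width of int_least32_t is fixed when the program is compiled, so it cannot adapt to whatever the user enters at run time.

    #include <inttypes.h>  /* int_least32_t, SCNdLEAST32, PRIdLEAST32 */
    #include <stdio.h>

    int main(void)
    {
        /* Whatever the user types, the width of n was already
           fixed at compile time; the type cannot grow to fit. */
        int_least32_t n;
        if (scanf("%" SCNdLEAST32, &n) == 1)
            printf("read %" PRIdLEAST32 "\n", n);
        return 0;
    }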
So what is the actual use of these types?