The fastest (not sure if that's what you meant by "effective") way of doing this is probably something like:
void char2bits1(unsigned char c, unsigned char *bits) {
    int i;
    /* Fill bits[] from the end down, so bits[0] ends up holding the most significant bit. */
    for (i = sizeof(unsigned char) * 8; i; c >>= 1)
        bits[--i] = c & 1;
}
The function takes the char to convert as its first argument and fills the array bits with the corresponding bit pattern, most significant bit first. It runs in 2.6 ns on my laptop. It assumes 8-bit bytes, but not how many bytes long a char is, and it does not require the bits array to be zero-initialized beforehand.
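For example, a minimal test driver (assuming the char2bits1 definition above is in scope; the input 'A' and the printing loop are just for illustration) could look like this:

#include <stdio.h>

int main(void) {
    unsigned char bits[8];
    char2bits1('A', bits);          /* 'A' is 0x41 */
    for (int i = 0; i < 8; ++i)
        printf("%d", bits[i]);      /* prints 01000001, MSB first */
    printf("\n");
    return 0;
}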
I didn't expect this to be the fastest approach. My first attempt looked like this:
void char2bits2(unsigned char c, unsigned char *bits) {
    /* Emit the bits least-significant first and stop once c is zero;
       the caller must zero bits[] beforehand. */
    for (; c; ++bits, c >>= 1)
        *bits = c & 1;
}
I thought this would be faster by avoiding array lookups, by looping in the natural order (at the cost of producing the bits in the opposite order of what was requested), and by stopping as soon as c is zero (so the bits array would need to be zero-initialized before calling the function). But to my surprise, this version had a running time of 5.2 ns, double that of the version above.
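For comparison, a hypothetical call sequence for this version would look like the following; note that the array has to be zeroed first and that the bits come out least-significant first:

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned char bits[8];
    memset(bits, 0, sizeof bits);   /* required: the loop stops early */
    char2bits2('A', bits);          /* 'A' is 0x41 */
    for (int i = 0; i < 8; ++i)
        printf("%d", bits[i]);      /* prints 10000010, LSB first */
    printf("\n");
    return 0;
}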
Investigating the corresponding assembly revealed that the difference was loop unrolling, which was being performed in the former case but not the latter. So this is an illustration of how modern compilers and modern CPUs often have surprising performance characteristics.
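If you want to reproduce the numbers yourself, a rough harness along these lines should do (a sketch only, not the exact benchmark behind the figures above; it assumes a POSIX clock_gettime, and the iteration count and the final print that keeps the calls from being optimized away are arbitrary choices):

#include <stdio.h>
#include <time.h>

int main(void) {
    enum { N = 100000000 };
    unsigned char bits[8] = {0};
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; ++i)
        char2bits1((unsigned char)i, bits);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
    printf("%.2f ns per call (last bit: %d)\n", ns / N, bits[0]);
    return 0;
}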
Edit: If you actually want the unsigned chars in the result to be the characters '0' and '1', use this modified version:
void char2bits3(unsigned char c, unsigned char *bits) {
    int i;
    /* Same as char2bits1, but store the characters '0' and '1'. */
    for (i = sizeof(unsigned char) * 8; i; c >>= 1)
        bits[--i] = '0' + (c & 1);
}
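With this variant you can NUL-terminate the array and print it directly as a string, for example (again a hypothetical driver):

#include <stdio.h>

int main(void) {
    unsigned char bits[9];
    char2bits3('A', bits);
    bits[8] = '\0';                 /* terminate so it prints as a string */
    printf("%s\n", (char *)bits);   /* prints 01000001 */
    return 0;
}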