Suppose you are using a bit set or something similar, essentially some object that allows you to access the value of individual bits. It may be something simple like an integer word or an array of bytes, or something more generic like a BitSet in Java, depending on the number of bits you want to handle.
My question concerns converting a length expressed in bits into a length expressed in bytes. This is virtually always required because you typically can't allocate less than 8 bits (1 byte) of memory, so you end up with extra padding bits in your "bit-set" object.
So, to sum things up, how do you correctly get the size in bytes necessary to accommodate a given size in bits?
NOTE: Take potential integer overflows into consideration, as they may lead to an incorrect answer. For example, n_bytes = (n_bits + 7) / 8 may overflow if n_bits is large enough.