I came across a piece of code that calculates the number of binary bits needed to represent a decimal number:
nbits = 1 + (decimal and floor(log2(decimal)))
I understand that `1 + floor(log2(decimal))` returns the number of bits.
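For reference, here is a minimal sketch of that part in Python (assuming the snippet is Python, that `floor` and `log2` come from `math`, and that `decimal` is a positive integer; the helper name `nbits_no_guard` is just for illustration), checked against Python's built-in `int.bit_length()`:

```python
from math import floor, log2

def nbits_no_guard(decimal):
    # 1 + floor(log2(decimal)) gives the bit count for decimal >= 1
    return 1 + floor(log2(decimal))

for d in (1, 2, 5, 8, 255, 256):
    print(d, bin(d), nbits_no_guard(d), d.bit_length())
```

For every positive value tested, this matches `bit_length()`, e.g. 5 is `0b101`, which needs 3 bits.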
However, I'm not sure what the `and` operator ensures here.