
I don't understand why 0b0010 & 0b0011001100110011001100110011001100110011001100110011001100110011 is 0 in JavaScript. The expected result is 0b0010, which is 2 in decimal.

Thanks.

wukong

3 Answers


The TL;DR practical advice to take away here is that bitwise operations in JavaScript are limited to 32-bit signed ints, so you shouldn't attempt to operate on integers longer than 31 bits. But if that is not satisfying, read on to understand what actually causes this specific behaviour...

There are a couple of things worth understanding before trying to grok the exact mechanism of this behaviour:

1. Binary is not binary

The binary notation 0b is a numeric integer literal, just like 0x; it is not a raw binary data literal. The implication is that it is only interpreted as a number, while the type of the variable being defined determines the actual underlying bit format (i.e. float vs. int). The significance of this is that different types exhibit different behaviour at their limits, such as when defining numbers too large or too precise to fit.
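Here's a quick sketch you can run to see this: all of these literals denote the same Number value, and that value is stored as a 64-bit float regardless of the notation used:

    // 0b is numeric notation, just like 0x - all three denote the same Number:
    const n = 0b0010;
    console.log(n === 0x2, n === 2);  // true true
    console.log(typeof n);            // "number" - stored as a 64-bit float
    console.log(Number.isInteger(n)); // true - but it's an integer-valued float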

2. Bitwise is for ints

JavaScript numbers are 64-bit floats, AKA "doubles", yet bitwise operations are only valid on ints. This is because int bit formats are essentially the same as binary numeric literals, whereas floats are very different: they separate numbers into exponents and significands. To work around this, JavaScript casts numbers to 32-bit signed ints (which can be fully represented within 64-bit floats) before performing bitwise operations.
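You can observe this cast directly; a sketch using | 0, a common idiom for forcing the ToInt32 conversion:

    // Operands are passed through ToInt32 before any bitwise operation:
    console.log((2 ** 32) | 0);     // 0           - bit 32 falls outside the window
    console.log((2 ** 31) | 0);     // -2147483648 - bit 31 becomes the sign bit
    console.log((2 ** 31 - 1) | 0); // 2147483647  - 31 value bits fit exactly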

JavaScript bitwise beyond 53 bits

When sticking to the limits of 32-bit signed ints (31 bits plus a sign), bitwise operations work exactly as expected, and when going above 31 bits you can expect the bits above 31 to be lost, just like in any other language. That is, until you push past the 53rd bit... then you can expect some less obvious behaviour, because past this limit the floating point format starts interfering (and not by merely masking the upper bits):

Unlike ints, floats always preserve the most significant bits when assigned a number too long to fully represent, rather than masking them off. A float does this because it is more numerically valid, and it can do this because it stores the order of magnitude separately in the exponent, which allows it to shift the number so that its most significant bits fit into the 52-bit significand window and round off the lower bits.

... Don't run away yet! I have diagrams!

So here, in this unnecessarily detailed diagram, is the full journey of your numeric binary literal input: its conversion into a 64-bit float (and loss of 9 bits of precision), and its conversion into a 32-bit signed int. Note that this is purely conceptual and not representative of the actual mechanisms.

    MSB (implicit 1)                                              
    │                                                             
 +0011001100110011001100110011001100110011001100110011001100110011 <─ num
 │   └────────────────────────┬─────────────────────────┘└───┐    
 │                            │                            round  
 │                            │                            carry  
 │   1001100110011001100110011001100110011001100110011010 <──┘    
sign └──────────────────────┐          │                          
  │     1084 = (2^10 - 1) + 61         │                          
  │     │                              │                          
  │     │ [ 11-bit exponent ]          │  [ 52-bit significand ]  
  │┌────┴────┐┌────────────────────────┴─────────────────────────┐
  0100001111001001100110011001100110011001100110011001100110011010 <─ f64
  │└────┬────┘└────────────────────────┬─────────────────────────┘
  │     │                              │                          
  │     │                              │                          
  │     1084 - (2^10 - 1) = 61 ────────────── 61 - 52 = 9 ───┐    
sign                                   │                   shift  
 │   1001100110011001100110011001100110011001100110011010 <──┘    
 │                            │                                   
 │                            │                                   
 │   ┌────────────────────────┴─────────────────────────┐         
 +0011001100110011001100110011001100110011001100110011010000000000 <─ num
 │  │                              └──────────────┬──────────────┘
 │  MSB (implicit 1)                              │               
 │                                                │               
 │                                                │               
 └───────────── sign ─────────────┐            truncate           
                                  │               │               
                                  │               │               
                                  │               │               
                                  │┌──────────────┴──────────────┐
                                  00110011001100110011010000000000 <─ s32

Notice that the most significant digit in your input is in the 62nd place, which is 9 above the maximum significand width representable in a 64-bit float (53), and that has caused exactly the lowest 9 digits of the output to be rounded off to zero (with the round-up carrying into the bits above).
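You can verify both conversion steps from the console; a quick sketch (note that toString(2) drops leading zeros):

    const big = 0b0011001100110011001100110011001100110011001100110011001100110011;
    // Parsing rounds the literal: the low 9 bits are rounded off, carrying upward
    console.log(big.toString(2));       // ...0011010000000000
    // ToInt32 then keeps only the low 32 bits of that already-rounded value
    console.log((big | 0).toString(2)); // 110011001100110011010000000000
    console.log(big & 0b0010);          // 0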

Thomas Brierley
  • I updated this answer because the original digressed too much into how it's not possible to set all of the underlying bit combinations in float-64 with integer literals due to their lack of negative exponents... but this really was not very relevant considering that it's the _numerical_ representation that is cast to the 32-bit int. Additionally I've added an explanation and diagram of exactly why and how the low bits are zeroed. This should read as a more cohesive explanation, if a little verbose - it was a good exercise for me to flesh out all the details at least. – Thomas Brierley Jul 28 '18 at 01:26
  • plus one for ascii art – Joe DF Aug 14 '18 at 17:40

The number in question cannot be represented exactly:

console.log(Number("0b0011001100110011001100110011001100110011001100110011001100110011").toString(2));

As you can see, the rightmost 01100110011 has become 10000000000. This number & 0b0010 equals 0. For the purpose of bitwise operations the number is also taken modulo 2^32 during the ToInt32 abstract operation, but that is responsible only for the loss of the most significant bits:

console.log(("0b0011001100110011001100110011001100110011001100110011001100110011" | 0).toString(2));

This number & 0b0010 of course also equals 0.

Specification: https://www.ecma-international.org/ecma-262/#sec-binary-bitwise-operators-runtime-semantics-evaluation
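For illustration, here is a minimal sketch of what that ToInt32 abstract operation computes (ignoring the NaN/Infinity edge cases; toInt32 is a hypothetical helper here, not a built-in):

    // ToInt32(x): truncate toward zero, take modulo 2^32, map into [-2^31, 2^31)
    function toInt32(x) {
      const m = ((Math.trunc(x) % 2 ** 32) + 2 ** 32) % 2 ** 32;
      return m >= 2 ** 31 ? m - 2 ** 32 : m;
    }
    const n = 0b0011001100110011001100110011001100110011001100110011001100110011;
    console.log(toInt32(n) === (n | 0)); // true
    console.log(toInt32(-1.5));          // -1, same as (-1.5 | 0)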

Andrew Svietlichnyy

Because the second number is too long for the underlying type to represent exactly. It works up to the following length in your example:

0b0010 & 0b0011001100110011001100110011001100110011001100110011

which is three repetitions of your 0011 pattern shorter than what you were testing with. See the sketch below.
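Here is a quick sketch comparing the two lengths (13 repetitions of 0011 is 52 bits, which a 64-bit float holds exactly; 16 repetitions is 64 bits, which gets rounded before the bitwise cast):

    // 52 bits: exactly representable, the low 0011 survives, so & 0b0010 gives 2
    console.log(0b0010 & 0b0011001100110011001100110011001100110011001100110011);
    // 64 bits: float rounding zeroes the low bits first, so & 0b0010 gives 0
    console.log(0b0010 & 0b0011001100110011001100110011001100110011001100110011001100110011);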

ian_stagib
  • Yes, 0x2&0x3333333333333 is 2 indeed, which 3 is repeated 13 times. 0x2&0x3333333333333333 is zero, 3 repeated 16 times. why? – wukong Jul 25 '18 at 13:01