
I know that the JS Number type is an IEEE 754 double-precision floating point. Given this fact, how does JS perform bitwise operations?

-1 >>> 1
=> 2147483647

Is it merely simulating bitwise operations programmatically, or does the language give them special treatment internally, e.g. loading the numbers into registers as Int32 bit patterns when a bitwise operator is used?

I'm not being nitpicky about performance, but given that bitwise operations are known to be efficient and are often used for exactly that reason, I'm wondering about the internals here.
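To make the observed result concrete, here is a minimal sketch of the ToUint32 conversion the spec applies to the left operand of >>> (an illustration of the specified semantics only, not of how any engine actually implements the operator; the toUint32 helper name is made up for illustration):

    // Sketch of the spec's ToUint32 abstract operation (simplified:
    // NaN/Infinity handling omitted). Illustrates semantics only.
    function toUint32(n) {
      const m = Math.trunc(n) % 2 ** 32;  // wrap into (-2^32, 2^32)
      return m < 0 ? m + 2 ** 32 : m;     // then into [0, 2^32 - 1]
    }

    console.log(toUint32(-1));                  // 4294967295 (0xFFFFFFFF)
    console.log(Math.floor(toUint32(-1) / 2));  // 2147483647, matching -1 >>> 1
    console.log(-1 >>> 1);                      // 2147483647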

  • Yes, it just converts them to (u)int32 – Bergi Nov 28 '19 at 22:44
  • @Bergi Converts them where? – funct7 Nov 28 '19 at 22:46
  • In the specified semantics of the language… The operands are evaluated, coerced to numbers, those numbers are converted to integers, the integers are operated on, and the resulting integer is converted back to a number and returned as the expression result (see the sketch after these comments). – Bergi Nov 28 '19 at 22:49
  • Of course, how an actual engine implements these semantics is a totally different question. It might never have started with a floating-point number at all, already representing the value internally in memory as an integer as long as it doesn't leave the integer range. The optimisation possibilities are endless. – Bergi Nov 28 '19 at 22:51
  • So it seems like the real answer to this question is "it depends on the implementation of the interpreter." I was actually wondering about that. Thanks. – funct7 Nov 28 '19 at 22:52
  • It just seemed a little weird coming from a strongly-typed language background that you can perform bitwise operations on a double. – funct7 Nov 28 '19 at 22:53
  • It's well-defined how the bitwise operators need to behave, so all implementations will produce the same results; but yes, if you are looking at the register level it always depends on the interpreter/compiler (and even the processor architecture), just like in any other language higher-level than assembly. – Bergi Nov 28 '19 at 22:58
  • Well, given that bitwise operators are supposed to (or are at least expected to) operate on bit patterns, I don't think it's weird to be surprised that what is known to be a double-precision format performs bit-shifts as if it were an int32. You can't expect shifting an exponent-biased bit pattern to give meaningful numeric results the way shifting a two's-complement bit pattern does. – funct7 Nov 28 '19 at 23:04
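To make the pipeline Bergi describes concrete, here is a hedged sketch of the specified semantics for a << b (the helper names toInt32 and shiftLeft are made up; an Int32Array is used purely to force the two's-complement reinterpretation). The last lines also demonstrate funct7's closing point: shifting a double's raw exponent-biased bits does not produce a meaningful numeric result:

    // Evaluate operands -> ToNumber -> ToInt32 -> 32-bit op -> Number result.
    function toInt32(x) {
      const buf = new Int32Array(1);
      buf[0] = Number(x);   // ToNumber; the typed-array store then applies ToInt32
      return buf[0];        // reads back as an ordinary (integer-valued) Number
    }

    function shiftLeft(a, b) {
      const lhs = toInt32(a);
      const count = toInt32(b) & 31;     // shift count is taken modulo 32
      return toInt32(lhs * 2 ** count);  // exact in a double; wrap back to int32
    }

    console.log(shiftLeft(1.9, 3));  // 8, same as 1.9 << 3
    console.log(1.9 << 3);           // 8

    // By contrast, shifting the raw IEEE 754 bit pattern of a double:
    const f64 = new Float64Array([2]);
    const raw = new BigUint64Array(f64.buffer);
    raw[0] >>= 1n;          // shift the exponent-biased encoding right by one
    console.log(f64[0]);    // ~1.49e-154, not 1 -- numerically meaningless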

0 Answers