`Math.min` and `Math.max` are working just fine.
When you write 131472982990263674
in a JavaScript program, it is rounded to the nearest IEEE 754 binary64 floating-point number, which is 131472982990263680.
Written in hexadecimal or binary, you can see that 131472982990263674 = 0x1d315fb40b2157a = 0b111010011000101011111101101000000101100100001010101111010 takes 56 bits of precision to represent.
If you round that to the nearest number with only 53 bits of precision, what you get is 0b111010011000101011111101101000000101100100001010110000000 = 0x1d315fb40b21580 = 131472982990263680 (only the leading 53 bits are significant; the bits below them are rounded away to zeros).
Similarly, when you write 131472982995395415
in JavaScript, what you get back is 131472982995395410.
So when you write the code `Math.min(131472982990263674, 131472982995395415)`, you pass the numbers 131472982990263680 and 131472982995395410 into the `Math.min` function.
Given that, it should come as no surprise that `Math.min` returns 131472982990263680.
```
> 131472982990263674
131472982990263680
> 131472982995395415
131472982995395410
> Math.min(131472982990263674, 131472982995395415)
131472982990263680
```
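If you want to catch this kind of silent rounding up front, you can check whether a value is within the exactly-representable integer range before trusting it. A quick sketch; the threshold 2⁵³ − 1 is exposed as `Number.MAX_SAFE_INTEGER`:

```javascript
// Integers with magnitude at most 2^53 - 1 are exactly representable in
// binary64; beyond that, distinct integer literals can collapse to the
// same floating-point value.
console.log(Number.MAX_SAFE_INTEGER);                    // 9007199254740991, i.e. 2^53 - 1
console.log(Number.isSafeInteger(131472982990263674));   // false: too big to be exact
console.log(131472982990263674 === 131472982990263680);  // true: both round to the same double
```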
It's not clear what your original goal is.
Are you given two JavaScript numbers to begin with, and are you trying to find the min or max?
If so, `Math.min` and `Math.max` are the right thing.
Are you given two strings, and are you trying to order them by the numbers they represent?
If so, it depends on the notation you want to support.
If you only want to support plain decimal notation for nonnegative integers (no sign, no scientific notation like `123e4`), then you can chop leading zeros, compare lengths first (of two digit strings without leading zeros, the longer one names the larger number), and break ties between equal-length strings by comparing them lexicographically with `<` or `>` in JavaScript.
```
> function strmin(x, y) { return x.length !== y.length ? (x.length < y.length ? x : y) : (x < y ? x : y) }
> strmin("131472982990263674", "131472982995395415")
'131472982990263674'
```
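The same idea gives a comparator you can hand to `Array.prototype.sort`. A sketch (helper names are my own) that also strips leading zeros and orders by length first, which plain lexicographic comparison alone would get wrong: it would put "9" after "10".

```javascript
// Comparator for nonnegative decimal integer strings (no sign, no exponent).
// After stripping leading zeros, the longer digit string is the larger number;
// equal-length strings then compare correctly character by character.
function chopZeros(s) { return s.replace(/^0+(?=\d)/, ""); }
function cmpDecimal(x, y) {
  const a = chopZeros(x), b = chopZeros(y);
  if (a.length !== b.length) return a.length - b.length;
  return a < b ? -1 : a > b ? 1 : 0;
}

console.log(["10", "9", "131472982995395415", "131472982990263674"].sort(cmpDecimal));
// ascending numeric order: "9", "10", "131472982990263674", "131472982995395415"
```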
If you want to support arbitrary-precision decimal notation (including non-integers and perhaps scientific notation), and you want to maintain distinctions between, for instance, 1.00000000000000001 and 1.00000000000000002, then you probably want a general arbitrary-precision decimal arithmetic library.
Are you trying to do arithmetic with integers in a range that might exceed 2⁵³, and do you need the computation to be exact, requiring more than 53 bits of precision?
If so, you may need wider-precision or arbitrary-precision arithmetic beyond what JavaScript numbers alone provide, such as the `BigInt` type recently added to JavaScript.
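With `BigInt`, the comparison from the question becomes exact. `Math.min` doesn't accept BigInts, but the `<` operator works on them, so a small sketch suffices (BigInt literals can also be written with an `n` suffix, e.g. `131472982990263674n`):

```javascript
// BigInt keeps every digit of the integer, so no rounding happens at parse time.
const x = BigInt("131472982990263674");
const y = BigInt("131472982995395415");
const bigmin = x < y ? x : y;
console.log(bigmin.toString());  // prints 131472982990263674
```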
If you only need a little more than 53 bits of precision, as is often the case inside numerical algorithms for transcendental elementary functions, there's also T.J. Dekker's algorithm for extending (say) binary64 arithmetic into double-binary64 or “double-double” arithmetic: a double-binary64 number is the sum x₀ + x₁ of two binary64 floating-point numbers x₀ and x₁, where x₀ typically holds the higher-order bits and x₁ the lower-order bits, so together they can store 106 bits of precision.
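As a taste of how that works, here is a minimal sketch (function name mine) of the “fast two-sum” error-free transformation at the core of double-double arithmetic: assuming |a| ≥ |b|, it splits a + b into the rounded sum plus the exact rounding error, so no information is lost.

```javascript
// Fast two-sum (Dekker): assuming |a| >= |b|, returns [s, e] such that
// s = fl(a + b) is the rounded sum and a + b = s + e holds *exactly*.
function fastTwoSum(a, b) {
  const s = a + b;         // rounded sum
  const e = b - (s - a);   // (s - a) is computed exactly here, so e is the exact error
  return [s, e];
}

const [s, e] = fastTwoSum(1e16, 3.14159);
console.log(s);  // 10000000000000004: 3.14159 rounded to the nearest ulp, which is 2 here
console.log(e);  // the recovered rounding error, about -0.85841
```

Chaining transformations like this after every operation is, in essence, how double-double libraries carry those 106 bits.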