I am writing a JavaScript version of this Microsoft string decoding algorithm, and it's failing on large numbers. This seems to be caused by sizing (int/long) issues. If I step through the C# code and compare it against the JS implementation, I can see that the JS version fails on this line:
n |= (b & 31) << k;
This happens when the values are the following (the C# result is 240518168576):
(39 & 31) << 35
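I can reproduce the mismatch in isolation. JS bitwise operators coerce their operands to 32-bit signed integers, and the shift count is taken modulo 32, so (39 & 31) << 35 is silently evaluated as 7 << 3:

const b = 39;
const k = 35;
// JS masks the shift count to 5 bits, so << 35 behaves like << 3.
console.log((b & 31) << k);             // 56
console.log((b & 31) * Math.pow(2, k)); // 240518168576, the value C# computes with a long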
If I play around with these values in C#, I can replicate the JS issue when b is an int, and if I make b a long it works correctly.
So then I checked the max size of a JS number and compared it to the C# long result:
240518168576 < Number.MAX_SAFE_INTEGER // true
So I can see that the shift is being done with 32-bit values, but I do not know how to force JS to treat this number as a long.
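One possible fix, assuming every decoded value stays below Number.MAX_SAFE_INTEGER (2^53 - 1) as the comparison above suggests, would be to drop the bitwise OR/shift and accumulate with plain arithmetic; since each 5-bit chunk lands in its own bit range, adding is equivalent to ORing here. A sketch of the inner loop:

var b = EncodingString.indexOf(encodedLine[index++]);
if (b == -1) {
    return points;
}
// Plain arithmetic instead of n |= (b & 31) << k, which truncates to 32 bits.
n += (b & 31) * Math.pow(2, k); // exact while n < Number.MAX_SAFE_INTEGER
k += 5;

BigInt would be the other option, but Math.sqrt does not accept a BigInt, so the value would need converting back to a Number before the diagonal step. I am not sure which of these is the idiomatic fix.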
Full JS code:
private getPointsFromEncodedString(encodedLine: string): number[][] {
    const EncodingString = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789_-";
    var points: number[][] = [];
    if (!encodedLine) {
        return points;
    }
    var index = 0;
    var xsum = 0;
    var ysum = 0;
    while (index < encodedLine.length) {
        var n = 0;
        var k = 0;
        // Read one value, 5 bits per character; b >= 32 means another chunk follows.
        while (true) {
            if (index >= encodedLine.length) {
                return points;
            }
            var b = EncodingString.indexOf(encodedLine[index++]);
            if (b == -1) {
                return points;
            }
            n |= (b & 31) << k; // <-- the failing line: JS does this shift in 32 bits
            k += 5;
            if (b < 32) {
                break;
            }
        }
        // Undo the diagonal pairing of the two deltas.
        // Math.floor matches the truncating (long) cast in the C# original.
        var diagonal = Math.floor((Math.sqrt(8 * n + 5) - 1) / 2);
        n -= diagonal * (diagonal + 1) / 2;
        var ny = n;
        var nx = diagonal - ny;
        // Zigzag decode the deltas back to signed values.
        nx = (nx >> 1) ^ -(nx & 1);
        ny = (ny >> 1) ^ -(ny & 1);
        xsum += nx;
        ysum += ny;
        points.push([ysum * 0.000001, xsum * 0.000001]);
    }
    console.log(points);
    return points;
}
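For context, the two shift/XOR lines near the end look like a standard zigzag decode, mapping the non-negative values 0, 1, 2, 3, 4, ... back to the signed deltas 0, -1, 1, -2, 2, ...:

for (const v of [0, 1, 2, 3, 4]) {
    console.log(v, (v >> 1) ^ -(v & 1)); // 0 0, 1 -1, 2 1, 3 -2, 4 2
}

Those deltas stay well inside 32 bits, so the size problem only bites in the n |= (b & 31) << k accumulation.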
Expected input/output:
Encoded string:
qkoo7v4q-lmB0471BiuuNmo30B
Decoded points:
- 35.89431, -110.72522
- 35.89393, -110.72578
- 35.89374, -110.72606
- 35.89337, -110.72662
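To verify a fix against these numbers, a quick harness could look like the following (Decoder is a hypothetical stand-in for whatever class hosts the method, and the method would need to be reachable, e.g. made public, since it is declared private):

const decoder = new Decoder();
const points = decoder.getPointsFromEncodedString("qkoo7v4q-lmB0471BiuuNmo30B");
for (const [lat, lon] of points) {
    console.log(lat.toFixed(5), lon.toFixed(5)); // expect 35.89431 -110.72522, ...
}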