
NumPy is returning negative values in an array of 2 raised to natural numbers. How can 2 raised to a positive power like 1000 be negative?

I have an array 'x' holding the x-axis values 1 to n. For each value we compute x**(2**x), i.e. x raised to (2 raised to x), and use it as the y-axis value.

Case 1: For x ∈ [1, 50)

I used the code below, and the output is correct: there are no negative values in the output of np.power(2, x).

import numpy as np
import matplotlib.pyplot as plt

x = np.array([x for x in range(1, 50)])

print(np.power(2, x))
x2x = np.power(x, np.power(2, x))


plt.plot(x, x, label = 'f(n) = n')
plt.plot(x, x2x, label = 'f(n) = x**(2**x)')

plt.legend()
plt.show()

Output: No negative values in the output


Case 2: For x ∈ [1, 100)

I used the code below, and the output of np.power(2, x) contains negative values, and therefore so does np.power(x, np.power(2, x)).

import numpy as np
import matplotlib.pyplot as plt

x = np.array([x for x in range(1, 100)])

print(np.power(2, x))
x2x = np.power(x, np.power(2, x))


plt.plot(x, x, label = 'f(n) = n')
plt.plot(x, x2x, label = 'f(n) = x**(2**x)')

plt.legend()
plt.show()

Output: Negative values in the output


If x is always positive and non-decreasing, and 2 is a positive constant, why does NumPy output a negative number for 2 raised to a positive power?

  • This sounds like simple integer overflow and wraparound. Your output isn't in the posting; how do the values in question compare with standard MAXINT values? – Prune May 24 '19 at 18:26
  • `np.int32(2) ** np.int32(31)` → -2147483648. `np.int64(2) ** np.int64(63)` → -9223372036854775808 – Nick T May 24 '19 at 18:31
  • Even `np.int64` won't solve every case: `np.int64(9801) ** np.int64(99)` → -8755237408081528679 – C.Nivs May 24 '19 at 18:33
  • This is a known issue: https://github.com/numpy/numpy/issues/10964 – Yi Bao May 24 '19 at 18:34

3 Answers


The integers are likely overflowing. Signed 32-bit integers can only hold values up to about 2.1 billion (2**31 - 1); past that they wrap around into the negatives.

Setting dtype=np.int64 may fix your issue. If not, use Python integers (dtype=object). They're not very efficient, but they can hold numbers as large as your memory allows.
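As a minimal sketch of both the overflow and the object-dtype workaround (the index values below assume the default 64-bit integer dtype on your platform):

```python
import numpy as np

# With int64, 2**62 still fits, but 2**63 overflows and wraps to the
# minimum representable value.
x = np.arange(1, 70)
p64 = np.power(2, x.astype(np.int64))
print(p64[61])   # 4611686018427387904  (2**62, still positive)
print(p64[62])   # -9223372036854775808 (2**63, wrapped negative)

# With dtype=object, NumPy stores Python ints, which never overflow.
pobj = np.power(2, x.astype(object))
print(pobj[62])  # 9223372036854775808  (exact 2**63)
```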

– Alec

You should use Python's power operator (**) for this: even changing the NumPy dtype to int64 won't help once the result exceeds 64 bits, whereas Python integers have arbitrary precision.

x**(2**x)
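A small sketch of this approach with plain Python ints. Note that x**(2**x) grows doubly exponentially, so even arbitrary-precision ints only make this feasible for modest x (the result for x = 99 would have on the order of 10**30 digits):

```python
# Python ints have arbitrary precision, so x**(2**x) is exact for moderate x.
xs = list(range(1, 16))
ys = [x ** (2 ** x) for x in xs]

print(ys[2])           # 3**(2**3) = 3**8 = 6561
print(len(str(ys[-1])))  # 15**(2**15) has tens of thousands of digits
```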
– TylerH

There are three cases you may be asking about.

Case 1: You just care about the most significant digits

Use floats. Cast everything to a double-precision float and you'll be fine up to about 10^300 (the float64 maximum is roughly 1.8 × 10^308).

>>> 3 ** np.array([2, 10, 50, 100], dtype=float)
array([9.00000000e+00, 5.90490000e+04, 7.17897988e+23, 5.15377521e+47])

If you're exceeding that, store only the logarithms of the numbers and use the corresponding math (adding logs corresponds to multiplication, multiplying a log by n corresponds to raising to the nth power); then you're fine up to ludicrously large numbers.

>>> math.log(3) * 50
54.93061443340549
>>> math.log(3) * 50 == math.log(3 ** 50) == math.log(717897987691852588770249)
True
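Applied to the question's function, this log trick looks like the following sketch: instead of y = x**(2**x), compute log(y) = (2**x) * log(x), which stays comfortably inside float64 range for the whole x ∈ [1, 100) interval.

```python
import numpy as np

# Work in log space: log(x**(2**x)) = (2**x) * log(x).
# float64 easily holds 2**x for x < 1024, so no overflow here.
x = np.arange(1, 100, dtype=float)
log_y = np.power(2.0, x) * np.log(x)

print(log_y[0])  # x = 1: log(1**(2**1)) = 0.0
print(log_y[2])  # x = 3: 8 * ln(3) ≈ 8.7889
```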

Case 2: You only care about the least significant digits

This is modular arithmetic (sometimes "finite field" or "Galois field" math), generally useful in cryptography among other things. Unfortunately, NumPy doesn't have a combined power/modulo function, so you'd need to roll your own or try a workaround (naive approaches only work if a single multiplication doesn't overflow, so they may still fail).

>>> [pow(3, n, 2 ** 16 + 1) for n in [2, 10, 50, 100]]  # modulo some random prime
[9, 59049, 12911, 33330]
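Rolling your own for NumPy arrays is a short exercise in square-and-multiply. The helper below, `modpow`, is a name I'm introducing for illustration, not a NumPy function; every intermediate stays below mod**2, so int64 never overflows as long as mod fits in 32 bits:

```python
import numpy as np

def modpow(base, exps, mod):
    """Hypothetical helper: elementwise base**exps % mod by repeated
    squaring. Intermediates stay below mod**2, avoiding int64 overflow."""
    exps = np.asarray(exps, dtype=np.int64).copy()
    result = np.ones_like(exps)
    b = base % mod
    while np.any(exps > 0):
        odd = (exps & 1) == 1              # entries whose current bit is set
        result[odd] = (result[odd] * b) % mod
        b = (b * b) % mod                   # square the base each round
        exps >>= 1
    return result

print(modpow(3, [2, 10, 50, 100], 2**16 + 1))  # [9 59049 12911 33330]
```

This matches Python's built-in three-argument pow(3, n, 2**16 + 1) elementwise.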

Case 3: You need all the digits

You must use 'bigint' math, which Numpy does not provide. Python integers are the easy choice.

>>> 3 ** np.array([2, 10, 50, 100], dtype=object)
array([9, 59049, 717897987691852588770249,
       515377520732011331036461129765621272702107522001], dtype=object)
– Nick T