
I'm trying to cast a big negative value inside a Cython class to a uint64_t variable, but I keep getting this error:

OverflowError: can't convert negative value to unsigned long

cdef uint64_t temp2 = <uint64_t>(temp - bitReversal(current_pos))

The number I get from temp - bitReversal(current_pos) is -1152831344652320768, and if I hardcode it, it works. For now I've built a really ugly hack that converts the negative number to the corresponding unsigned one, but as expected it is really slow.

Alan Höng
  • Change `<uint64_t>` to `(uint64_t)`? (And accept the overflow.) – user2864740 Sep 18 '14 at 23:11
  • May I ask *why* you are trying to cast a negative number to an unsigned value? – Cory Kramer Sep 18 '14 at 23:12
  • I'm calculating a movement mask for a rook for a game similar to chess. `(uint64_t)` is not valid syntax for Cython. – Alan Höng Sep 18 '14 at 23:16
  • 1
    Are the two numbers you're subtracting both positive? Then just convert them both to `uint64_t`, then rely on the fact that C unsigned ints have guaranteed overflow behavior. (Doing signed arithmetic and then casting to unsigned is _not_ guaranteed; it relies on the fact that your platform's signed numbers are 2s-complement, implemented in the obvious way, and that your compiler doesn't make assumptions that they won't overflow—and that last part may not always be true on modern systems, even if the first two parts are.) – abarnert Sep 18 '14 at 23:17
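abarnert's point can be checked from plain Python, since Python ints are arbitrary precision: reducing the signed result modulo 2**64 gives exactly the value that C's guaranteed unsigned wraparound would produce. A minimal sketch, using the number from the question:

```python
# The negative result reported in the question.
signed_result = -1152831344652320768

# C guarantees that uint64_t arithmetic is performed modulo 2**64, so the
# same bit pattern reinterpreted as unsigned is the value modulo 2**64.
unsigned_result = signed_result % 2**64

# Masking with 0xFFFFFFFFFFFFFFFF is an equivalent way to write it.
assert unsigned_result == signed_result & 0xFFFFFFFFFFFFFFFF

print(unsigned_result)  # 17293912729057230848
```

This is effectively what the "ugly hack" computes in Python; doing the subtraction in `uint64_t` from the start gets the same result at C speed.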

1 Answer


Thanks abarnert, that worked. This line made it work: cdef uint64_t temp2 = <uint64_t>(temp - <uint64_t>bitReversal(current_pos))

But it is really strange because both variables are of type uint64_t.

def bitReversal(uint64_t x):
    x = (((x & 0xaaaaaaaaaaaaaaaa) >> 1) | ((x & 0x5555555555555555) << 1))
    x = (((x & 0xcccccccccccccccc) >> 2) | ((x & 0x3333333333333333) << 2))
    x = (((x & 0xf0f0f0f0f0f0f0f0) >> 4) | ((x & 0x0f0f0f0f0f0f0f0f) << 4))
    x = (((x & 0xff00ff00ff00ff00) >> 8) | ((x & 0x00ff00ff00ff00ff) << 8))
    x = (((x & 0xffff0000ffff0000) >> 16) | ((x & 0x0000ffff0000ffff) << 16))
    cdef uint64_t result = <uint64_t>((x >> 32) | (x << 32))
    return result
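For reference, the same swap network can be run as pure Python by mirroring the routine above; the only difference is that Python ints are unbounded, so the final 32-bit word swap needs an explicit 64-bit mask where the C `uint64_t` would simply wrap. A sketch (the function name `bit_reversal` is mine):

```python
MASK64 = 0xFFFFFFFFFFFFFFFF

def bit_reversal(x: int) -> int:
    """Reverse the 64 bits of x: swap adjacent bits, then pairs, nibbles,
    bytes, 16-bit halves, and finally the two 32-bit words."""
    x = ((x & 0xAAAAAAAAAAAAAAAA) >> 1) | ((x & 0x5555555555555555) << 1)
    x = ((x & 0xCCCCCCCCCCCCCCCC) >> 2) | ((x & 0x3333333333333333) << 2)
    x = ((x & 0xF0F0F0F0F0F0F0F0) >> 4) | ((x & 0x0F0F0F0F0F0F0F0F) << 4)
    x = ((x & 0xFF00FF00FF00FF00) >> 8) | ((x & 0x00FF00FF00FF00FF) << 8)
    x = ((x & 0xFFFF0000FFFF0000) >> 16) | ((x & 0x0000FFFF0000FFFF) << 16)
    # Python ints don't wrap at 64 bits, so mask the word swap explicitly.
    return ((x >> 32) | (x << 32)) & MASK64

print(hex(bit_reversal(1)))  # 0x8000000000000000
```

Applying the function twice returns the original value, which is a quick sanity check that the masks and shifts pair up correctly.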
  • Both variables can't have been `uint64_t`, or the subtraction wouldn't have come out negative. Could you be missing a `cdef` somewhere, so one of the variables got copied to a Python `int` somewhere along the way? In that case, I believe Cython will subtract them by converting the C value to Python and asking Python to subtract (which, in addition to this overflow problem, is likely also exactly the performance problem you're looking to avoid). – abarnert Sep 18 '14 at 23:52
  • Use `cython -a myfile.pyx` and look for yellow. If your math is yellow, abarnert is on the money. – Veedrac Sep 20 '14 at 11:35