That's because NumPy sees a signed and an unsigned type and tries to automatically deduce a result type that can represent both, which would have to be signed. But since the first 64-bit number is unsigned, a signed type covering its full range would need 65 bits. As there is no integer type in Python/NumPy larger than 64 bits, NumPy chooses float64. The default type, e.g. for the divisor 3, is int64, which is why the first example is cast to float64.
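A minimal sketch of that division case as a separate quick check (the output types assume NumPy's legacy promotion rules; NumPy 2.0 changed how plain Python scalars are promoted with NEP 50):
python> import numpy as np
python> type( np.uint64( 10 ) // np.uint64( 3 ) )
numpy.uint64
python> type( np.uint64( 10 ) // 3 )
numpy.float64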
This also works with multiplications, of course:
python> import numpy as np
python> type( np.int64( 10 ) * np.int64( 1 ) )
Out[0]: numpy.int64
python> type( np.uint64( 10 ) * np.uint64( 1 ) )
Out[1]: numpy.uint64
python> type( np.uint64( 10 ) * np.int64( 1 ) )
Out[2]: numpy.float64
Note that this automatic type deduction only kicks in when types of different signedness are mixed, because it is value agnostic; otherwise almost all results would have to end up as float64, since, e.g., after three consecutive multiplications the result might no longer fit into uint64.
python> type( np.uint64( 12345678900 ) * np.uint64( 12345678900 ) )
/usr/bin/ipython:1: RuntimeWarning: overflow encountered in ulong_scalars
  #! /usr/bin/python
Out[3]: numpy.uint64
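If you just want to see which type NumPy would deduce, without actually computing anything, np.promote_types (a standard NumPy function) reports the promotion directly:
python> np.promote_types( np.uint64, np.int64 )
Out[4]: dtype('float64')
python> np.promote_types( np.uint64, np.uint64 )
Out[5]: dtype('uint64')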
Note: Beware that in Python 3 the plain slash is no longer integer division by default. Instead you have to use 3 // 2
to get 1
, since 3 / 2 == 1.5
in Python 3.
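A short demonstration of the two operators, continuing the session above (note that true division with / always yields float64 for NumPy integers, even when both operands have the same type):
python> 3 / 2
Out[6]: 1.5
python> 3 // 2
Out[7]: 1
python> type( np.uint64( 3 ) / np.uint64( 2 ) )
Out[8]: numpy.float64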