I have a NumPy array with data type float64. For example:

    [1.35623, 2.35124, 5.12276, 0.12466]

How can I change the data type of a single element of this array? I have tried using `astype()`, but it seems to return a scalar number rather than a NumPy array. How can I do this in Python 3.x?

- It's not possible in NumPy. NumPy only allows elements of a single type. – Sunnysinh Solanki Feb 27 '18 at 15:46
- Please describe better what you are trying to achieve. NumPy arrays can only hold elements of a single data type. Show what you have tried, what you are getting and what you would like to get. – jdehesa Feb 27 '18 at 15:46
- Possible duplicate of [Store different datatypes in one NumPy array?](https://stackoverflow.com/questions/11309739/store-different-datatypes-in-one-numpy-array) – Georgy Feb 27 '18 at 15:46
1 Answer
You can only get an approximation to this by creating an object array, which works like a C array of pointers holding references to heterogeneously typed objects. This doesn't change the fact that the array itself holds elements of a single type (here, `object`). It is also not particularly efficient, and it's rather contrary to the point of using a `numpy.ndarray` in the first place.
If you prefer, you can still do something like this:

```python
In [1]: import numpy

In [2]: a = numpy.array([1.35623, 2.35124, 5.12276, 0.12466])

In [3]: a.dtype
Out[3]: dtype('float64')

In [4]: b = numpy.array(a, dtype=object)

In [5]: b
Out[5]: array([1.35623, 2.35124, 5.12276, 0.12466], dtype=object)

In [6]: b[0] = 1.0j

In [7]: b
Out[7]: array([1j, 2.35124, 5.12276, 0.12466], dtype=object)
```
However, vectorized computation (for example with NumPy's overloaded operators and ufuncs) will not benefit from the memory locality of a contiguous array of machine floats: the actual Python number objects may be scattered across memory, making access and computation inefficient. In addition, some ufuncs may simply break on object arrays.
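As a quick sketch of that last point (assuming a reasonably recent NumPy), the following shows a ufunc that works fine on the float64 array but fails on its object-dtype counterpart, because on object arrays `np.sqrt` falls back to looking for a `.sqrt()` method on each element:

```python
import numpy as np

# A float64 array: ufuncs operate on the contiguous machine-float buffer.
a = np.array([1.35623, 2.35124, 5.12276, 0.12466])
print(np.sqrt(a))  # works as expected

# The object-dtype copy holds references to Python objects instead.
b = a.astype(object)
b[0] = 1.0j  # one element is now complex; the array dtype is still object

# Basic arithmetic still works, dispatched element by element in Python:
print(b * 2)

# But np.sqrt breaks: plain Python floats and complex numbers have no
# .sqrt() method for the object-array fallback to call.
try:
    np.sqrt(b)
except (TypeError, AttributeError) as exc:
    print("np.sqrt failed on the object array:", exc)
```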
