By "normal array" I take it you mean a NumPy array of homogeneous dtype. Given a recarray, such as:
>>> a = np.array([(0, 1, 2),
...               (3, 4, 5)], [('x', int), ('y', float), ('z', int)]).view(np.recarray)
>>> a
rec.array([(0, 1.0, 2), (3, 4.0, 5)],
          dtype=[('x', '<i4'), ('y', '<f8'), ('z', '<i4')])
we must first give every field (column) the same dtype. We can then convert it to a "normal array" by viewing the data as that common dtype:
>>> a.astype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')]).view('<f8')
array([ 0., 1., 2., 3., 4., 5.])
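If you would rather have a 2-D array with one row per record, you can reshape the flat view; for example (reusing the a and the three fields from above):
>>> a.astype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')]).view('<f8').reshape(len(a), -1)
array([[ 0.,  1.,  2.],
       [ 3.,  4.,  5.]])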
astype returns a new NumPy array, so the conversion above requires additional memory proportional to the size of a. Each row of a requires 4+8+4 = 16 bytes, while each row of a.astype(...) requires 8*3 = 24 bytes. Calling view requires no new memory, since view just changes how the underlying data is interpreted.
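You can check both claims directly. A rough sketch (np.shares_memory needs NumPy 1.11+; on older versions np.may_share_memory is a looser substitute):
>>> b = a.astype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')])
>>> np.shares_memory(a, b)              # astype made a copy
False
>>> np.shares_memory(b, b.view('<f8'))  # view reuses the same buffer
True
>>> a.nbytes, b.nbytes                  # 2 rows * 16 bytes vs 2 rows * 24 bytes
(32, 48)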
a.tolist() returns a new Python list. Each Python number is a full object, which requires more bytes than its equivalent representation inside a NumPy array. So a.tolist() requires more memory than a.astype(...).
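To get a feel for the per-element overhead, compare the array's storage per element with the size of a single Python float object (the numbers below are from a 64-bit CPython and will vary by build; the list also stores a pointer to each of those objects on top of this):
>>> import sys
>>> b = a.astype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')]).view('<f8')
>>> b.itemsize                  # bytes per element inside the array
8
>>> sys.getsizeof(b[0].item())  # bytes for one equivalent Python float object
24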
Calling a.astype(...).view(...) is also faster than np.array(a.tolist()):
In [8]: a = np.array(zip(*[iter(xrange(300))]*3),[('x', int), ('y', float), ('z', int)]).view(np.recarray)
In [9]: %timeit a.astype([('x', '<f8'), ('y', '<f8'), ('z', '<f8')]).view('<f8')
10000 loops, best of 3: 165 us per loop
In [10]: %timeit np.array(a.tolist())
1000 loops, best of 3: 683 us per loop
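(The setup line in In [8] is Python 2 -- it uses zip and xrange. On Python 3 an equivalent setup would be something like the following; the timings themselves will of course depend on your machine:)
In [8]: a = np.array(list(zip(*[iter(range(300))]*3)),
   ...:              [('x', int), ('y', float), ('z', int)]).view(np.recarray)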