Some answers on Stack Overflow suggest using an ndarray of ndarrays when working with data where the number of elements per row is not constant (How to make a multidimension numpy array with a varying row size?).
Is NumPy optimized to work on a structure like that (an array of arrays, also called a nested or ragged array)?
Here's a simplified example of such a structure:
import numpy as np
x = np.array([1,2,3])
y = np.array([4,5])
data = np.array([x,y],dtype=object)
It's possible to do operations like:
print(data+1)
print(data+data)
But some operations fail, like:
print(np.sum(data))
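To make the failure concrete, here is a minimal sketch (the try/except is only there to surface the error): the reduction effectively computes data[0] + data[1], which cannot broadcast shapes (3,) and (2,).

```python
import numpy as np

x = np.array([1, 2, 3])
y = np.array([4, 5])
data = np.array([x, y], dtype=object)

# Element-wise ops are applied to each stored array in turn,
# via a Python-level loop over the object elements:
result = data + 1
print(result[0])  # the inner array [2 3 4]
print(result[1])  # the inner array [5 6]

# np.sum reduces with +, effectively data[0] + data[1]; the two
# inner arrays have shapes (3,) and (2,), so broadcasting fails:
try:
    np.sum(data)
except ValueError as e:
    print("sum failed:", e)
```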
What's happening behind the scenes with this type of structure?