In your example:
In [970]: ind_var.shape
Out[970]: (10, 2)
In [971]: R.shape
Out[971]: (2, 2)
In [972]: ind_var[0,:]*R*ind_var[0,:]+Ve
Out[972]:
array([[ 0.001,  0.001],
       [ 0.001,  0.001]])
For arrays, the * multiplication is element by element, like MATLAB's .* operator. So the result has the shape of R, and is the wrong size to put in a cell of Q.
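With nonzero values the broadcasting is easier to see (the array contents here are illustrative stand-ins, not the question's data): a (2,) row broadcasts against a (2,2) matrix, so the product stays (2,2) instead of collapsing to a scalar.

```python
import numpy as np

# Illustrative stand-ins for the question's variables
ind_var = np.arange(20, dtype=float).reshape(10, 2)  # shape (10, 2)
R = np.array([[1.0, 2.0], [3.0, 4.0]])               # shape (2, 2)

row = ind_var[0, :]            # shape (2,)
elementwise = row * R * row    # broadcasts to shape (2, 2), not a scalar
print(elementwise.shape)       # (2, 2)
```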
There is an array matrix-multiplication function, np.dot:
In [973]: np.dot(ind_var[0,:], np.dot(R, ind_var[0,:]))+Ve
Out[973]: 0.001
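With nonzero values (again illustrative, not the question's data), the nested np.dot computes the scalar quadratic form x·R·x, which can be checked by hand:

```python
import numpy as np

R = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([1.0, 2.0])
Ve = 0.001

# Inner np.dot: matrix-vector product R @ x; outer np.dot: dot product -> scalar
q = np.dot(x, np.dot(R, x)) + Ve
# Hand check: R @ x = [1+4, 3+8] = [5, 11]; x . [5, 11] = 5 + 22 = 27
print(q)  # 27.001
```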
There is an array subclass, np.matrix, that is constrained to be 2d (like old MATLAB) and uses * for the matrix product:
In [981]: Rm=np.matrix(R)
In [982]: ind_m=np.matrix(ind_var)
In [983]: ind_m[0,:]*Rm*ind_m[0,:].T+Ve
Out[983]: matrix([[ 0.001]])
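A minimal sketch of the same thing with nonzero illustrative values; note that a np.matrix row is (1,2), so row * matrix * column yields a (1,1) matrix rather than a (2,2) array. (In current NumPy, np.matrix is discouraged in favor of plain arrays with @.)

```python
import numpy as np

R = np.array([[1.0, 2.0], [3.0, 4.0]])
ind_var = np.array([[1.0, 2.0], [3.0, 4.0]])  # illustrative values

Rm = np.matrix(R)
ind_m = np.matrix(ind_var)

# (1,2) * (2,2) * (2,1) -> (1,1) matrix holding the quadratic form
res = ind_m[0, :] * Rm * ind_m[0, :].T
print(res)  # matrix([[27.]])
```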
np.einsum is a generalization of np.dot that can perform all the calculations in one step:
In [985]: np.einsum('ij,jk,ik->i', ind_var, R, ind_var)+Ve
Out[985]:
array([ 0.001,  0.001,  0.001,  0.001,  0.001,  0.001,  0.001,  0.001,
        0.001,  0.001])
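With nonzero values (illustrative data), the einsum result can be verified against an explicit row-by-row loop: 'ij,jk,ik->i' sums ind_var[i,j]*R[j,k]*ind_var[i,k] over j and k for each row i.

```python
import numpy as np

ind_var = np.arange(1.0, 21.0).reshape(10, 2)
R = np.array([[1.0, 2.0], [3.0, 4.0]])
Ve = 0.001

# All ten quadratic forms in one call
vec = np.einsum('ij,jk,ik->i', ind_var, R, ind_var) + Ve

# The same thing, one row at a time with np.dot
loop = np.array([np.dot(row, np.dot(R, row)) + Ve for row in ind_var])
print(np.allclose(vec, loop))  # True
```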
The R and ind_var values are all 0 in this example, so the results aren't diagnostic - except for shape.
I was going to suggest the new matmul operator, @, but ind_var@R@ind_var.T produces a 10x10 array, which is not what we want. The iterative ind_var[0,:]@R@ind_var[0,:] is ok.
(I really should test things with nontrivial values).
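Following up on that last point, a quick check with nontrivial random values (a sketch; the data is made up) confirms the per-row @ expression, np.dot, and einsum all agree, and that the wanted values sit on the diagonal of the 10x10 ind_var@R@ind_var.T:

```python
import numpy as np

rng = np.random.default_rng(0)
ind_var = rng.standard_normal((10, 2))
R = rng.standard_normal((2, 2))
Ve = 0.001

via_einsum = np.einsum('ij,jk,ik->i', ind_var, R, ind_var) + Ve
via_dot = np.array([np.dot(r, np.dot(R, r)) + Ve for r in ind_var])
via_matmul = np.array([r @ R @ r + Ve for r in ind_var])

print(np.allclose(via_einsum, via_dot))     # True
print(np.allclose(via_einsum, via_matmul))  # True

# The full product is 10x10; only its diagonal holds the per-row results
full = ind_var @ R @ ind_var.T
print(np.allclose(np.diagonal(full) + Ve, via_einsum))  # True
```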