I am translating code from MATLAB to Python and have run into trouble with matrix multiplication.
I am writing some Bayesian econometric simulations that involve a lot of matrix multiplication. Some operations yield a "scalar" at the end, e.g. a = [[6]], while others yield a vector, e.g. a = [[3],[2]]. This result then gets used in another matrix (or vector) multiplication, where it sometimes needs to be treated as a scalar. Instead, it gets used as a matrix and throws an error because the dimensions do not match.
The problem is that I cannot predict which expression will result in a scalar and which will remain a vector or matrix; that depends on the input.
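Here is a minimal reproduction of the kind of error I mean (the names and shapes are made up for illustration):

```python
import numpy as np

V = np.array([[2.0, 0.0], [0.0, 2.0]])   # a 2x2 matrix
b = np.array([[3.0]])                    # a 1x1 "scalar" left over from a previous product

try:
    V @ b                                # matrix product: (2,2) and (1,1) do not align
except ValueError as e:
    print("matrix product fails:", e)

print(V * b)                             # elementwise product broadcasts and "just works"
```

So the same pair of operands needs `@` in one case and `*` in the other, depending on shapes I only know at runtime.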
One of the equations looks something like this:
beta_1 = V_1 * (inv(V_0) * beta_0 + t(X) * X * b_OLS);
For some specifications b_OLS is a vector; for others it is a scalar.
I could write it so that it works nicely in one script for the right constellation of inputs, but I need the function to be robust.
I have tried to create my own function for matrix multiplication that checks the input:
```python
import numpy as np

def multiply(a, b):
    if type(a) is np.ndarray and type(b) is np.ndarray:
        if len(a.shape) == 2 and len(b.shape) == 2:
            if a.shape != (1, 1) and b.shape != (1, 1):
                return np.dot(a, b)
            else:
                return np.multiply(a, b)
        else:
            print("Wrongly specified matrix or vector.")
    else:
        return np.multiply(a, b)
```
But this relies on the assumption that everything will be either an int or a numpy.ndarray, which is fragile, and it does not seem to work correctly anyway.
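One alternative I am considering, sketched below under the assumption that every operand can be coerced with np.asarray (the name mmult is mine), is to normalize both operands to 2-D first and treat anything 1x1 as a scalar:

```python
import numpy as np

def mmult(a, b):
    """Multiply two quantities, treating 1x1 arrays (and plain scalars) as scalars."""
    a = np.atleast_2d(np.asarray(a, dtype=float))
    b = np.atleast_2d(np.asarray(b, dtype=float))
    if a.shape == (1, 1) or b.shape == (1, 1):
        return a * b          # broadcasting handles the scalar case
    return a @ b              # otherwise an ordinary matrix product
```

One caveat: np.atleast_2d turns a 1-D array into a row vector, so column vectors would still need an explicit reshape. I am not sure whether this is the idiomatic way to do it.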
I would appreciate any advice on handling matrix multiplication robustly.