I have two 2-dimensional NumPy arrays of shapes m x d and n x d. What is the optimized (i.e., without for loops) NumPy way of computing the squared-distance term of a Gaussian kernel, so that I ultimately get a covariance matrix of size m x n?
I have already checked NumPy's outer function, but it didn't serve my purpose.
This is what equivalent code with for loops looks like:
import numpy as np

difference_squared = np.zeros((x.shape[0], x_.shape[0]))
for row_iterator in range(difference_squared.shape[0]):
    for column_iterator in range(difference_squared.shape[1]):
        difference_squared[row_iterator, column_iterator] = np.sum(np.power(x[row_iterator] - x_[column_iterator], 2))
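For reference, here is a sketch of the kind of loop-free version I am after, using broadcasting (the example arrays and their shapes are just illustrative assumptions):

```python
import numpy as np

# Illustrative inputs: x is (m, d), x_ is (n, d).
rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))   # m = 5, d = 3
x_ = rng.standard_normal((4, 3))  # n = 4, d = 3

# x[:, None, :] has shape (m, 1, d) and x_[None, :, :] has shape (1, n, d),
# so the subtraction broadcasts to an (m, n, d) array of pairwise
# differences; summing the squares over the last axis gives (m, n).
difference_squared = ((x[:, None, :] - x_[None, :, :]) ** 2).sum(axis=-1)

# A memory-lighter alternative that avoids the (m, n, d) intermediate,
# via the expansion ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2:
difference_squared_alt = (
    (x ** 2).sum(axis=1)[:, None]
    - 2.0 * x @ x_.T
    + (x_ ** 2).sum(axis=1)[None, :]
)
```

The broadcast form is the more readable one; the expansion form trades a little numerical precision (it can go slightly negative for near-identical rows) for much lower memory use when d is large.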