I implemented a support vector machine in Python using the CVXOPT QP solver. As part of it, I need to compute a Gram matrix, i.e. a kernel function evaluated at every pair of training vectors. I implemented this correctly with nested for loops, but that strategy is computationally expensive, so I would like to vectorize the code.
For example, here is what I have written:
K = np.array([kernel(X[i], X[j], poly=poly_kernel)
              for j in range(m)
              for i in range(m)]).reshape((m, m))
How can I vectorize the code above, without the for loops, to get the same result faster?
The kernel function computes a Gaussian kernel.
Here is a quick explanation of an SVM with the kernel trick; the second page of it explains the problem.
Here is my full code for context.
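For what it is worth, I think the polynomial branch vectorizes as a single matrix product (this is just my own sketch with a placeholder name K_poly, and I have not checked it against the loop version):

# Every pairwise inner product X[i] . X[j] is an entry of X @ X.T,
# so raising that matrix elementwise to the power d should give the
# polynomial-kernel Gram matrix.
K_poly = (X @ X.T) ** d

It is really the Gaussian branch that I am stuck on.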
EDIT: Here is a quick, self-contained code snippet that runs what I need to vectorize, in its current unvectorized form:
import numpy as np
from sklearn.datasets import make_gaussian_quantiles

# Toy data set: 100 points in 2 dimensions, two classes
X, y = make_gaussian_quantiles(mean=None, cov=1.0, n_samples=100, n_features=2,
                               n_classes=2, shuffle=True, random_state=5)
m = X.shape[0]

def kernel(a, b, d=20, poly=True, sigma=0.5):
    if poly:
        # Polynomial kernel: (a . b) ** d
        return np.inner(a, b) ** d
    else:
        # Gaussian kernel: exp(-||a - b||^2 / sigma^2)
        return np.exp(-np.linalg.norm(a - b) ** 2 / sigma ** 2)

# Need to vectorize these loops
K = np.array([kernel(X[i], X[j], poly=False)
              for j in range(m)
              for i in range(m)]).reshape((m, m))
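For the Gaussian case, this is the kind of vectorized replacement I have been sketching, based on expanding ||a - b||^2 = ||a||^2 + ||b||^2 - 2*(a . b). The broadcasting, the variable names (sq_norms, sq_dists, K_vec), and the sigma handling are my own guesses, and I have not verified that it matches the loop version above:

# Pairwise squared Euclidean distances via broadcasting:
# ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * (a . b)
sq_norms = np.sum(X ** 2, axis=1)                               # shape (m,)
sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2 * X @ X.T  # shape (m, m)
sigma = 0.5
K_vec = np.exp(-sq_dists / sigma ** 2)

# Sanity check against the loop version (the kernel is symmetric,
# so the j/i loop order should not matter):
print(np.allclose(K, K_vec))

I also wondered whether scipy.spatial.distance.cdist(X, X, 'sqeuclidean') would be a cleaner way to get sq_dists, but I am not sure about the performance trade-offs.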
Thanks!