I recently switched from MATLAB to R and I want to run an optimization. In MATLAB I was able to do:
options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, cost] = fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
Here is my R equivalent of costFunction (I call it logisticRegressionCost):
logisticRegressionCost <- function(theta, X, y) {
    theta <- as.matrix(theta)
    X <- as.matrix(X)
    y <- as.matrix(y)
    m <- dim(y)[1]  # number of training examples

    # hypothesis: sigmoid applied elementwise to X %*% theta
    predicted <- sigmoid(X %*% theta)

    # cross-entropy cost, averaged over the m examples
    J <- sum((-y) * log(predicted) - (1 - y) * log(1 - predicted)) / m

    # gradient of J with respect to theta, returned as a column vector
    grad <- t(t(predicted - y) %*% X) / m

    return(list(J = J, grad = grad))
}
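(For completeness: sigmoid is just the usual elementwise logistic helper, which I define elsewhere along these lines:

sigmoid <- function(z) 1 / (1 + exp(-z))  # elementwise logistic function

so nothing unusual there.)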
However, when I try to run an optimization on it like:

initial_theta <- matrix(0, dim(X)[2])
o <- optim(initial_theta, fn = logisticRegressionCost, X = X, y = y, method = "Nelder-Mead")

I get an error because of the list return. (When I return only J, it works.)
Error:
(list) object cannot be coerced to type 'double'
Q1: Is there a way to specify which returned value optim should use for the minimization (something like fn$J)?
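The only workaround I can think of is a wrapper that extracts J (costJ below is a hypothetical helper of mine, not anything from the optim docs):

costJ <- function(t, X, y) logisticRegressionCost(t, X, y)$J  # keep only the cost
o <- optim(matrix(0, dim(X)[2]), fn = costJ, X = X, y = y, method = "Nelder-Mead")

but I'm hoping there is a more direct way.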
Q2: Is there a way to make optim use the gradient I already calculate in logisticRegressionCost?
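My rough idea for Q2, from reading ?optim, is to split the cost and gradient into two wrappers and pass the second one as optim's gr argument together with a gradient-based method (again, costJ and costGrad are just my hypothetical sketch):

costJ    <- function(t, X, y) logisticRegressionCost(t, X, y)$J     # cost only
costGrad <- function(t, X, y) logisticRegressionCost(t, X, y)$grad  # gradient only
o <- optim(matrix(0, dim(X)[2]), fn = costJ, gr = costGrad, X = X, y = y, method = "BFGS")

(As I understand it, gr is only consulted by gradient-based methods like "BFGS", not by "Nelder-Mead".) Is that the intended approach, or is there a closer equivalent of MATLAB's 'GradObj' option that avoids computing the forward pass twice?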