I am trying to fit a kernelized version of the Cox partial likelihood in R. I have a function

compute_kernelized_nLL(param_vect, kernel_matrix, response, lambda = 0)

and when I call optim as follows:

ker.train <- construct_euclidean_kernel(as.matrix(data))
(res <- optim(par = rep(0, ncol(ker.train)), fn = compute_kernelized_nLL,
      kernel_matrix = ker.train,
      response = uncensored_survival,
      lambda = 3,
      method = "Nelder-Mead"))

I noticed that the result often converges to the initial parameter values I passed in. To check this, I printed the parameter vector at the beginning of compute_kernelized_nLL, and the parameters are indeed not changing: I get the vector of zeros over and over, until eventually all the parameters start moving in lockstep. This happens no matter which optimization method I try.
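
The check was roughly equivalent to the following (a sketch; here I wrap the real objective rather than pasting its body, and traced_nLL is just that wrapper):

traced_nLL <- function(param_vect, ...) {
  cat("params:", head(param_vect), "\n")   # print the first few entries on every call
  compute_kernelized_nLL(param_vect, ...)
}

(res <- optim(par = rep(0, ncol(ker.train)), fn = traced_nLL,
      kernel_matrix = ker.train,
      response = uncensored_survival,
      lambda = 3,
      method = "Nelder-Mead",
      control = list(trace = 1)))   # also ask optim for its own progress output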

I know a minimal reproducible example is desired, but after trying to replicate the behavior I couldn't produce one. I'm happy to edit in more of the code, but I didn't want a gigantic wall of text obscuring the question.
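
For reference, the general shape of the function is the following (a simplified sketch rather than my exact code; it assumes response is a plain numeric vector of event times with no censoring and no ties):

compute_kernelized_nLL <- function(param_vect, kernel_matrix, response, lambda = 0) {
  ## sketch only, not the actual implementation
  eta <- as.vector(kernel_matrix %*% param_vect)      # linear predictor K %*% alpha
  nll <- -sum(sapply(seq_along(response), function(i) {
    at_risk <- response >= response[i]                # risk set at the i-th event time
    eta[i] - log(sum(exp(eta[at_risk])))              # contribution to the partial log-likelihood
  }))
  penalty <- lambda * as.vector(t(param_vect) %*% kernel_matrix %*% param_vect)
  nll + penalty                                       # returns a length-1 numeric
}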

  • As initial checks: does `compute_kernelized_nLL` return a length-1 `numeric`? What convergence code does `optim` return? Usually the solution "not moving" indicates that the internally computed gradient is null. – VFreguglia Oct 03 '19 at 00:00
  • @Freguglia yes, it is a length-1 numeric. I had been getting convergence codes of 0, though on one run I got a code of 1. – WedgeAntilles Oct 03 '19 at 00:02

0 Answers