
I'm using mle2 to estimate the parameters of a non-linear model, and I want estimates of error around the parameter estimates (standard errors). I'd also like to use the model to predict with newdata, and I'm running into problems (errors) at a couple of steps in this process.

Here's the data:

table<- "ac.temp performance
1      2.17   47.357923
4      2.17  234.255317
7      2.17  138.002633
10     2.17  227.545902
13     2.17   28.118072
16     9.95  175.638448
19     9.95  167.392218
22     9.95  118.162747
25     9.95  102.770622
28     9.95  191.874867
31    16.07  206.897159
34    16.07   74.741619
37    16.07  127.219884
40    16.07  208.231559
42    16.07   89.544612
44    20.14  314.946107
47    20.14  290.994063
50    20.14  243.322497
53    20.14  192.335133
56    20.14  133.841776
58    23.83  139.746673
61    23.83  224.135993
64    23.83  126.726493
67    23.83  246.443386
70    23.83  163.019896
83    28.04    4.614154
84    28.04    2.851866
85    28.04    2.935584
86    28.04  153.868415
87    28.04  103.884295
88    30.60    0.000000
89    29.60    0.000000
90    30.30    0.000000
91    29.90    0.000000
92    30.80    0.000000
93    28.90    0.000000
94    30.00    0.000000
95    30.20    0.000000
96    30.40    0.000000
97    30.70    0.000000
98    27.90    0.000000
99    28.60    0.000000
100   28.60    0.000000
101   30.40    0.000000
102   30.60    0.000000
103   29.70    0.000000
104   30.50    0.000000
105   29.30    0.000000
106   28.90    0.000000
107   30.40    0.000000
108   30.20    0.000000
109   30.10    0.000000
110   29.50    0.000000
111   31.00    0.000000
112   30.60    0.000000
113   30.90    0.000000
114   31.10    0.000000"

perfdat<- read.table(text=table, header=T)

First I set a couple of fixed parameters for my non-linear model of animal performance with respect to temperature:

pi = mean(subset(perfdat, ac.temp<5)$performance)
ti = min(perfdat$ac.temp)

Define the x variable (temperature):

t = perfdat$ac.temp

Create a function for the non-linear model:

tpc = function(tm, topt, pmax) {
    perfmu = pi+(pmax-pi)*(1+((topt-t)/(topt-tm)))*(((t-ti)/(topt-ti))^((tm-ti)/(topt-tm)))
    perfmu[perfmu<0] = 0.00001
    return(perfmu)
}
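
As a quick sanity check that the intended starting values give a sensible curve, the function can be evaluated at those values and overlaid on the data (purely illustrative):

plot(performance ~ ac.temp, data=perfdat)
points(t, tpc(tm=15, topt=20, pmax=150), col="red", pch=16)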

Create the negative log-likelihood function:

LL1 = function(tm, topt, pmax, performance=perfdat$performance) {
    perfmu = tpc(tm=tm, topt=topt, pmax=pmax)
    loglike = -sum(dnorm(x=performance, mean=perfmu, log=TRUE))
    return(loglike)
}
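
Evaluating it at the starting values is a quick way to check that it returns a finite number before handing it to the optimizer:

# should be a single finite value at the starting parameters
LL1(tm=15, topt=20, pmax=150)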

Model performance using mle2 (maximum likelihood estimation):

m1<- mle2(minuslogl = LL1, start=list(tm=15, topt=20, pmax=150), control=list(maxit=5000))

summary(m1)

This gives me parameter estimates but not estimates of error (std. error), with the warning message: In sqrt(diag(object@vcov)) : NaNs produced. However, the parameter estimates are good and give me predictions that make sense.
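
One way to see where that warning comes from is to inspect the covariance matrix directly, since the standard errors reported by summary() are just sqrt(diag(vcov(m1))); negative or NA entries on the diagonal turn into NaNs:

# inspect the estimated covariance matrix from the mle2 fit
vcov(m1)

# these are the values that get square-rooted for the reported std. errors
diag(vcov(m1))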

[Plot of non-linear curve using parameter estimates]

I have tried many different optimizers and methods and get the same error about not being able to calculate standard errors, usually with warnings about not being able to invert the Hessian, or I get really wonky parameter estimates that don't make sense.
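
The standard errors that summary() is trying to report come from the inverse of the Hessian of the negative log-likelihood at the optimum, so a rough sketch of computing them by hand (using the numDeriv package, and assuming the fit itself has converged) would look something like this:

library(numDeriv)

# wrap the negative log-likelihood so it takes a single parameter vector
nll_vec = function(p) LL1(tm=p[1], topt=p[2], pmax=p[3])

hess = hessian(nll_vec, coef(m1))    # observed information at the estimates
vc = solve(hess)                     # errors here mean the Hessian is singular
sqrt(diag(vc))                       # NaNs here reflect the same non-invertible-Hessian problem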

If I use:

confint(m1) 

I get 95% intervals for each of my parameters, but I can't incorporate those into a prediction method that I could use to make a graph like the one below, which I made using an nls model and predict():

[Non-linear model with error graphed]
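
A sketch of the kind of thing I am after (sometimes called population prediction intervals): sample parameter sets from a multivariate normal built from the fitted coefficients and a covariance matrix (vcov(m1), or the Hessian-based version above, if one of those can be obtained), predict the curve for each draw, and take quantiles. The tpc_pred() helper below is just the tpc() curve with temperature passed explicitly; the names and settings are only illustrative:

library(MASS)    # for mvrnorm()

tseq = seq(min(perfdat$ac.temp), max(perfdat$ac.temp), length.out=100)

# same curve as tpc(), but with temperature as an explicit argument;
# pi and ti here are the fixed values defined earlier, not the constant pi
tpc_pred = function(tt, tm, topt, pmax) {
    mu = pi+(pmax-pi)*(1+((topt-tt)/(topt-tm)))*(((tt-ti)/(topt-ti))^((tm-ti)/(topt-tm)))
    mu[!is.finite(mu) | mu<0] = 0.00001    # also catches NaN from extreme draws
    return(mu)
}

vc = vcov(m1)    # or the numDeriv-based covariance matrix from above
draws = mvrnorm(1000, mu=coef(m1), Sigma=vc)

# one predicted curve per sampled parameter set (100 x 1000 matrix)
predmat = apply(draws, 1, function(p) tpc_pred(tseq, p["tm"], p["topt"], p["pmax"]))

ci = apply(predmat, 1, quantile, probs=c(0.025, 0.975))

plot(performance ~ ac.temp, data=perfdat)
lines(tseq, tpc_pred(tseq, coef(m1)["tm"], coef(m1)["topt"], coef(m1)["pmax"]))
lines(tseq, ci[1,], lty=2)
lines(tseq, ci[2,], lty=2)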

If I recreate my mle2() model by embedding the model formula into the mle2 function:

tpcfun<- function(t, tm.f, topt.f, pmax.f) {
    perfmu = pi+(pmax.f-pi)*(1+((topt.f-t)/(topt.f-tm.f)))*(((t-ti)/(topt.f-ti))^((tm.f-ti)/(topt.f-tm.f)))
    perfmu[perfmu<0] = 0.00001
    return(perfmu)
}

m2<- mle2(performance ~ dnorm(mean=-sum(log(tpcfun(t=ac.temp, tm.f, topt.f, pmax.f))), sd=1),
          method='L-BFGS-B',
          lower=c(tm.f=1, topt.f=1, pmax.f=1),
          control=list(maxit=5000, parscale=c(tm.f=1e1, topt.f=1e1, pmax.f=1e2)),
          start=list(tm.f=15, topt.f=20, pmax.f=150),
          data=perfdat)

summary(m2)

I get nonsensical estimates for my parameters, and I still don't get estimates of error.

My question is: can anyone see anything wrong with either of my models (the model functions and likelihood functions), or anything else that I'm doing wrong? I have a feeling that I may be writing the likelihood function incorrectly; I've tried all sorts of distributions and approaches, but I may be totally screwing it up.

Or, is there a way I can get estimates of error around my parameters that I could use to visualize error around my model predictions in graphs?

Thanks,

Rylee

P.S. I decided to make a graph with just the point estimates and the trend line without error around it, but I wanted to put 95% CI bars on each of the point estimates. However, confint() is giving me ridiculously small CIs that don't even show up on the graph because they are smaller than the point character I'm using, ha.

1 Answer


I think the problem is in the "maxit" argument. Try to use good starting values and avoid a high number of iterations. The second problem is the "L-BFGS-B" algorithm rather than the default method. With the confint function, it is normal to obtain the intervals only when the mle2 optimization has converged. Check whether the profiles can be plotted (the plotprofmle function); that is safer. A "NaNs produced" warning is normal if your data contain zero values when the log is applied. I suggest using this:

loglike = -sum(dnorm(x=performance, mean=perfmu, log=TRUE), na.rm=TRUE)

Then verify that the result is plausible.
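
For what it's worth, bbmle's own profile() and plot() methods can also be used to check the profiles directly (a rough sketch, assuming the m1 fit from the question):

library(bbmle)

p1 <- profile(m1)   # may warn or fail if the optimization has not really converged
plot(p1)            # each panel should look roughly V-shaped if the fit is well behaved
confint(p1)         # profile-based 95% confidence intervals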