
I had been using nlsLM from minpack.lm for a while with great results, but I am now facing unexpected problems with fitting quality when fewer datapoints are supplied.

The images below show one fit with 750 datapoints and one with around 50:

http://i66.tinypic.com/33cn9jq.jpg

http://i64.tinypic.com/14cb4a0.jpg

This is a mono-exponential fit, B(1-exp(-x*tm))+C, where B and C are pre-defined and tm is the time.

Sample data:

mxf <- structure(list(tm = c(0.604, 0.705, 0.805, 0.905, 1.005, 1.105, 
1.205, 1.305, 1.405, 1.505, 1.605, 1.705, 1.805, 1.905, 2.005, 
2.104, 2.204, 2.304, 2.405, 2.505, 2.605, 2.705, 2.805, 2.905, 
3.005, 3.105, 3.205, 3.305, 3.405, 3.505, 3.605, 3.705, 3.805, 
3.905, 4.005, 4.105, 4.205, 4.305, 4.405, 4.505, 4.605, 4.705, 
4.804, 4.904, 5.004), mxxp1m = c(15.2, 24.5, 30.1, 35.3, 38.6, 
40.9, 42.7, 46.3, 47.1, 47.8, 48, 48.6, 51.1, 51.7, 52.6, 52.3, 
52.2, 51.8, 54.4, 54, 52.7, 51.7, 54.4, 52.5, 53.5, 52.8, 54, 
53.5, 52.5, 53.4, 52, 52.9, 52.7, 52.4, 53.7, 52.3, 53.1, 52.2, 
52.8, 53.1, 52.9, 53, 53.3, 51, 52.5)), .Names = c("tm", "mxxp1m"
), class = "data.frame", row.names = c("1", "2", "3", "4", "5", 
"6", "7", "8", "9", "10", "11", "12", "13", "14", "15", "16", 
"17", "18", "19", "20", "21", "22", "23", "24", "25", "26", "27", 
"28", "29", "30", "31", "32", "33", "34", "35", "36", "37", "38", 
"39", "40", "41", "42", "43", "44", "45"))

This is the code I used in both samples:

    ## b = 0.6 (y span from start to plateau)
    ## c = 0.2 (y starting point)
    mxf = data.frame(mxf, b, c)

    fit = nlsLM(mxxp1m ~ b - (b * exp(-x * tm)) + c, data = mxf, start = list(x = 0.5))

I have already tried weighting the values and changing the maxiter option, but did not get sufficiently better results.
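For reference, a self-contained version of the call above (run after the `dput()` data block) could look like this; the values of `b` and `c` here are illustrative assumptions, since the original ones were not posted:

```r
library(minpack.lm)  # provides nlsLM

## Assumed illustrative parameters (originals not posted):
b <- 38   # y span from start value to plateau
c <- 15   # y starting point

## Fit only the rate constant x; b and c are held fixed
## by adding them as constant columns of the data frame
mxf <- data.frame(mxf, b, c)
fit <- nlsLM(mxxp1m ~ b * (1 - exp(-x * tm)) + c,
             data = mxf, start = list(x = 0.5))
summary(fit)
```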

- Hi, welcome to SO. It would be really helpful if you could provide a minimal, reproducible example ([see here](http://stackoverflow.com/help/mcve)). This can be done with `dput(head(tm))` or providing some fake data, and in addition show us what the parameters are set to (i.e. `b <- 5` or something). – slamballais Feb 02 '16 at 20:21
- @Laterow: I updated the code section, and uploaded a .csv datatable with the non-normalized but otherwise same dataset as the sample above: http://we.tl/ihv1jPLH1r – Robin K Feb 02 '16 at 20:52
- Thanks, I put the data in the opening post so that other users can access it directly. Although `b` and `c` are still missing, I get the gist of it. Since I personally have no experience with `nlsLM` and since I can't find anything via `?nlsLM` either, I can't really help you. Let's hope someone else responds. – slamballais Feb 02 '16 at 22:20

1 Answer


I finally found the solution on my own:

The problem is that the nls fitting algorithm cannot handle x start values (here, on the time axis) greater than 0. If the datapoints start at t = 0.604, the fitting just does not work correctly. If the time vector is shifted with t = t - t[1], it works just fine.

Therefore, always shift your t vector so it starts at t = 0 rather than at something like t = 0.604.
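The shift can be sketched as follows, using base `nls()` on the first part of the sample data so the example is self-contained; here `b` and `c` are also estimated rather than fixed, since their original values were not posted:

```r
## First 15 points of the sample data from the question
tm     <- c(0.604, 0.705, 0.805, 0.905, 1.005, 1.105, 1.205, 1.305,
            1.405, 1.505, 1.605, 1.705, 1.805, 1.905, 2.005)
mxxp1m <- c(15.2, 24.5, 30.1, 35.3, 38.6, 40.9, 42.7, 46.3,
            47.1, 47.8, 48.0, 48.6, 51.1, 51.7, 52.6)

## Shift the time axis so it starts at t = 0 before fitting
mxf <- data.frame(tm = tm - tm[1], mxxp1m = mxxp1m)

fit <- nls(mxxp1m ~ b * (1 - exp(-x * tm)) + c,
           data = mxf,
           start = list(x = 1, b = 40, c = 15))
coef(fit)  # fitted rate x, span b, and offset c
```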

Robin K