I have been trying for a while to generate confidence intervals around each of the predicted points my models produce. I build annual estimates by summing estimates for short intervals of time, so to put error bounds on an annual estimate I need an upper and lower bound for every individual prediction, which I can then sum to get the annual upper and lower bounds. I have tried predictNLS() from the propagate package, but it is remarkably slow (up to 30 seconds per estimate) for the more than 7 million modelled values I am working with, and it also doesn't interact well with some of the other functions I am using. Below is an example of what I've built so far:
library(dplyr)
library(tidyr)

# Example data: the built-in DNase ELISA calibration data set
data_ex <- DNase

# Fit a separate non-linear (exponential) model for each Run
model_ex <- data_ex %>%
  group_by(Run) %>%
  do(model = nls(density ~ a * exp(b * conc),
                 start = list(a = 0.8, b = 0.1),
                 data = .)) %>%
  ungroup()

# Extract the parameter estimates from each fit
param_model_ex <- model_ex %>%
  mutate(param = lapply(model, broom::tidy)) %>%
  unnest(param)
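For reference, this is roughly how I was calling predictNLS() (from the propagate package) when I timed it; the standalone single-Run fit and the single conc value here are just illustrative:

library(propagate)

# Standalone fit for a single Run, same model form as above
fit_one <- nls(density ~ a * exp(b * conc),
               start = list(a = 0.8, b = 0.1),
               data = subset(DNase, Run == "1"))

# On my real data, each call like this takes up to ~30 seconds
ci <- predictNLS(fit_one, newdata = data.frame(conc = 3),
                 interval = "confidence")
ci$summary   # propagated fit plus lower/upper bounds for that single conc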
Is there a way to work with the objects this model produces to compute these values accurately? Or is there another package that can take the information these fits provide and estimate upper and lower confidence/prediction intervals efficiently? For every predicted value from this model, i.e. every row of a new dataset generated with predict() or any other prediction method, I would like a column holding the upper estimate and one holding the lower estimate.
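To make the output I am after concrete, here is a rough delta-method sketch built from coef() and vcov() of one of the fits. The hand-written gradient is specific to the a * exp(b * conc) form above, and the fit/lwr/upr column names are just my own choice; I am not at all sure this is statistically sound, particularly once the bounds are summed up to annual totals, which is partly why I am asking.

library(dplyr)

# Pull the nls fit for the first Run out of the list-column built above
fit1  <- model_ex$model[[1]]
theta <- coef(fit1)   # estimates of a and b
V     <- vcov(fit1)   # covariance matrix of the parameter estimates

new_conc <- data_ex$conc                            # points to predict at
pred     <- theta["a"] * exp(theta["b"] * new_conc)

# Gradient of a * exp(b * conc) with respect to (a, b), one row per point
grad <- cbind(exp(theta["b"] * new_conc),
              theta["a"] * new_conc * exp(theta["b"] * new_conc))

# Delta-method standard error of each fitted value: sqrt(g' V g), row-wise
se_fit <- sqrt(rowSums((grad %*% V) * grad))

# Approximate 95% confidence bounds around each fitted value
crit <- qt(0.975, df = length(residuals(fit1)) - length(theta))
intervals <- tibble(conc = new_conc,
                    fit  = pred,
                    lwr  = pred - crit * se_fit,
                    upr  = pred + crit * se_fit)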