Background and question
There are multiple functions in the bootnet package for assessing the stability and accuracy of psychological network model parameters, such as bootstraps for edge-weight confidence intervals, case-drop bootstraps for centrality measures, and bootstrap difference tests for edge weights. These approaches take a network object produced by the estimateNetwork() function (see the sketch below). However, when you estimate models with the psychonetrics package, the resulting objects seemingly can't be fed into bootnet functions. This includes cases where a saturated network model has been refined with model search algorithms (pruning, stepup).
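For reference, the standard bootnet workflow these functions expect looks roughly like this (just a sketch; the "IsingFit" default and nBoots = 1000 are illustrative choices, not part of my actual analysis):

library(bootnet)

net <- estimateNetwork(binaryDF, default = "IsingFit")               # bootnet's own estimation step

boot_edges <- bootnet(net, nBoots = 1000, type = "nonparametric")    # edge-weight CIs and difference tests
plot(boot_edges, order = "sample")

boot_cases <- bootnet(net, nBoots = 1000, type = "case", statistics = "strength")  # case-drop bootstrap
corStability(boot_cases)                                             # CS-coefficient for centrality stability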
How do you assess stability in these kinds of psychonetrics models? Or is it not conceptually meaningful to assess the same kinds of stability measures?
Specific case
I am estimating a network model from cross-sectional binary data. I have a relatively large sample size of ~3000, so I follow the recommendations of Isvoranu & Epskamp (2021) and use an unregularized estimator. I also want to prioritize specificity, so I apply pruning. My model:
library(psychonetrics)
library(dplyr)  # for the pipe operator

model <- Ising(binaryDF) %>%                            # saturated Ising model for the binary data
  runmodel() %>%                                        # estimate the saturated model
  prune(alpha = 0.01, adjust = "bonferroni") %>%        # remove edges not significant at alpha = .01 (Bonferroni-adjusted)
  stepup(alpha = 0.01, criterion = "bic")               # stepwise re-add edges that improve BIC
After assessing model fit, I now want to assess stability and accuracy, but I can't use the bootnet package for this. The textbook "Network Psychometrics with R" mentions that confidence intervals for the edges of a saturated model can be drawn with the CIplot() function in psychonetrics. However, these CIs are only valid if no model search algorithm was used (such as pruning and stepup in this case).
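For completeness, on a saturated model (i.e. before prune()/stepup()) that approach would look roughly like the sketch below; using "omega" as the name of the Ising network matrix is my assumption:

saturated <- Ising(binaryDF) %>% runmodel()   # saturated model, no model search
CIplot(saturated, "omega")                    # edge-weight CIs (matrix name assumed)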
You can also inspect the parameters of a psychonetrics model with psychonetrics::parameters(model); there you will find p-values and standard errors for each edge-weight coefficient. I guess you could then compute CIs for each coefficient, but is this valid?
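To make the question concrete, this is roughly what I have in mind (a sketch; the column names est, se, matrix, and fixed are my assumptions about the data frame that parameters() returns, so check its structure first):

pars <- parameters(model)                                  # prints and (assumed) returns the parameter table
omega_pars <- subset(pars, matrix == "omega" & !fixed)     # free edge-weight parameters (assumed columns)
omega_pars$lower <- omega_pars$est - qnorm(0.975) * omega_pars$se   # Wald-type 95% CI, lower bound
omega_pars$upper <- omega_pars$est + qnorm(0.975) * omega_pars$se   # Wald-type 95% CI, upper bound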