I am building separate datasets and training a separate Prophet model for each combination of the regressors x1, x2, and x3. Think:
m1 <- add_regressor(prophet(), 'x1')   # regressors must be registered before fitting
prophet1 <- fit.prophet(m1, data.frame(ds, y, x1))
m2 <- add_regressor(add_regressor(prophet(), 'x2'), 'x3')
prophet2 <- fit.prophet(m2, data.frame(ds, y, x2, x3))
m3 <- add_regressor(prophet(), 'x3')
prophet3 <- fit.prophet(m3, data.frame(ds, y, x3))
For each model I then set its regressor(s) to zero and evaluate the effect on y, i.e. what the prediction would have been had that variable not been introduced.
My question is: is there any way to tell from the model object alone whether x1 in prophet1 contributes more than x2 + x3 in prophet2, without explicitly predicting over the data frame? In other words, can I tell whether setting x1 to zero changes y more than setting x2 and x3 to zero does just by looking at the fitted model? Does x1 have a larger regression coefficient than x2 + x3 combined, and therefore change y more?
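For reference, this is roughly how I am doing the zero-out comparison at the moment (a sketch for prophet1 only; d1 is a hypothetical name for its training data frame, and the other models are handled the same way):

library(prophet)

d1 <- data.frame(ds, y, x1)       # training data used for prophet1
d1_zero <- d1
d1_zero$x1 <- 0                   # counterfactual: regressor switched off

pred_actual <- predict(prophet1, d1)
pred_zero   <- predict(prophet1, d1_zero)

# How much the fitted values move once x1 is removed
effect_x1 <- mean(pred_actual$yhat - pred_zero$yhat)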
I was digging around and found this:
model$param$k          # Base trend growth rate
model$param$m          # Trend offset
model$param$sigma_obs  # Observation noise
model$param$beta       # Regressor coefficients
Source: https://github.com/facebook/prophet/issues/501
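Poking at one of my own fitted models, the relevant pieces look like this (a sketch; in my install the list appears to be called $params rather than $param, I am assuming a MAP fit so each entry is a one-row matrix, and as far as I can tell beta has one entry per column of Prophet's feature matrix, i.e. seasonality and holiday terms as well as the extra regressors, on the standardized-regressor / scaled-y scale):

str(prophet1$params)

prophet1$params$k          # base trend growth rate
prophet1$params$m          # trend offset
prophet1$params$sigma_obs  # observation noise
prophet1$params$beta       # coefficients for every feature-matrix column,
                           # not only the extra regressors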
If I were to place x1, x2, and x3 in the same data frame and fit a single model, I could compare them by looking at the beta values. However, I don't know how to make that comparison when they sit in separate data frames across different models.
But plotting sum(beta), k, m, or sigma_obs against the difference between y and the predictions made with the variable set to zero did not yield any relationship at all.
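Concretely, the comparison I tried looked something like this (a sketch; effect_x1, effect_x23, and effect_x3 are hypothetical names for the per-model zero-out differences computed as in the earlier snippet):

models <- list(prophet1, prophet2, prophet3)
deltas <- c(effect_x1, effect_x23, effect_x3)   # zero-out effect for each model

beta_sums  <- sapply(models, function(mod) sum(mod$params$beta))
k_values   <- sapply(models, function(mod) mod$params$k)
sigma_vals <- sapply(models, function(mod) mod$params$sigma_obs)

plot(beta_sums, deltas)   # and likewise for k_values and sigma_vals -- nothing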
Is it possible to extract from a Prophet model how important the variables used to model y are, and whether Prophet believes their effect is positive or negative? If so, how can I do so?