
I am using the TFTModel. After training (and validating) using the fit method, I would like to predict all data points in the train, test and validation set using the already trained model.

Currently, there are only the methods:

  • historical_forecasts: supports predicting multiple time steps (with the corresponding look-backs), but only for a single time series
  • predict: supports predicting for multiple time series, but only for the n next time steps.

What I am looking for is a method like historical_forecasts, but one that accepts lists of series, past_covariates, and future_covariates and predicts them all without retraining. My best attempt so far is to run the following code block on an already trained model:

predictions = []
for s, past_cov, future_cov in zip(series, past_covariates, future_covariates):
    predictions.append(model.historical_forecasts(
        s,
        past_covariates=past_cov,
        future_covariates=future_cov,
        retrain=False,
        start=model.input_chunk_length,
        verbose=True,
    ))

Here series, past_covariates, and future_covariates are lists of target time series and covariates, respectively. Each entry is the concatenation of the train, validation, and test series; I split the resulting forecasts apart again afterwards, so that the look-back values needed at the start of the validation and test portions are available to the model.
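For reference, the post-hoc split I mean can be sketched as below. This is a minimal, library-free illustration: `split_forecast` and the concrete lengths are my own placeholders, and `forecast` stands in for the single TimeSeries that historical_forecasts returns for one concatenated series (in practice you would slice the TimeSeries itself rather than a plain list). It assumes the forecast starts at index input_chunk_length, as set via start= above.

```python
def split_forecast(forecast, train_len, val_len, input_chunk_length):
    """Split a forecast over a concatenated train+val+test series back
    into its three segments.

    Because historical_forecasts starts at `input_chunk_length`, the
    first `input_chunk_length` points of the train portion have no
    forecast, so the train segment is shorter by that amount.
    """
    train_end = train_len - input_chunk_length
    train_part = forecast[:train_end]
    val_part = forecast[train_end:train_end + val_len]
    test_part = forecast[train_end + val_len:]
    return train_part, val_part, test_part


# Toy usage: 20 concatenated points, input_chunk_length=3,
# so the forecast covers the last 17 points.
forecast = list(range(17))
train, val, test = split_forecast(forecast, train_len=10, val_len=5,
                                  input_chunk_length=3)
# train covers original indices 3..9, val 10..14, test 15..19
```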

My question about this: is there a more efficient way to do this, e.g. through better batching with the current interface, or would I have to call the underlying torch model myself?
