I'm following this article to write an anomaly detection program, and my stepwise model is raising LinAlgError: Schur decomposition solver error.
I've seen people hit the same issue with the ARIMA function and have tried adding the parameters enforce_stationarity=False and enforce_invertibility=False,
but the problem persists.
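For reference, this is roughly what that workaround looks like when done directly with statsmodels' SARIMAX (the order/seasonal_order values below are placeholders, not the ones auto_arima actually selected for my data):

import statsmodels.api as sm

# placeholder orders; my real ones come from auto_arima's search
model = sm.tsa.statespace.SARIMAX(train_log,
                                  order=(1, 1, 1),
                                  seasonal_order=(0, 1, 1, 7),
                                  enforce_stationarity=False,
                                  enforce_invertibility=False)
results = model.fit(disp=False)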
import math
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import statsmodels.tsa.api as smt
from sklearn.metrics import mean_squared_error
from pmdarima.arima import auto_arima

# split data into train and test sets (last 70 observations held out)
# actual_vals is the raw series (see the data note below)
train, test = actual_vals[0:-70], actual_vals[-70:]
train_log, test_log = np.log10(train), np.log10(test)

stepwise_model = auto_arima(train_log, start_p=1, start_q=1,
                            max_p=3, max_q=3, m=7,
                            start_P=0, seasonal=True,
                            d=1, D=1, trace=True,
                            error_action='ignore',
                            suppress_warnings=True,
                            stepwise=True)

# walk-forward validation: refit on the growing history, forecast one step ahead
history = [x for x in train_log]
predictions = list()
predict_log = list()
for t in range(len(test_log)):
    stepwise_model.fit(history)
    output = stepwise_model.predict(n_periods=1)
    predict_log.append(output[0])
    yhat = 10**output[0]          # back-transform from log10
    predictions.append(yhat)
    obs = test_log[t]
    history.append(obs)

# plot actuals vs. predictions
plt.figure(figsize=(12, 7))
plt.plot(test, label='Actuals')
plt.plot(predictions, color='red', label='Predicted')
plt.legend(loc='upper right')
plt.show()
You should be able to replicate the error by replacing actual_vals with the value column of a time series data frame. If not, my data is here; you can read it with df = pd.read_csv('/Users/main/Downloads/Stage.csv', header=14, low_memory=False). I did run into some errors working with the CSV data, so you might have to read it from a SQL database instead.
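For completeness, this is roughly how I build actual_vals from the file (the column name 'value' is a placeholder; substitute whatever your series column is actually called):

import pandas as pd

df = pd.read_csv('/Users/main/Downloads/Stage.csv', header=14, low_memory=False)
# 'value' is a placeholder name for the series column being modelled
actual_vals = pd.to_numeric(df['value'], errors='coerce').dropna().values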