I made a neural network to predict future prices of a stock, using historical data for training. The data set is arranged in hourly intervals, so the goal is to forecast the price at the end of the next hour of trading.
I used two hidden layers of sizes (20, 16), 26 inputs, and one output, which should be the price. The activation function is 'relu' and the solver is 'adam'.
When training is done and I try to test it, all the outputs are values between 0 and 1, which I don't understand.
Here's my code:
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Defining the target column and predictors
target_column = ['close']
predictors = list(set(list(df.columns)) - set(target_column))
# Scaling the predictors to the 0-1 range by dividing each column by its maximum
df[predictors] = df[predictors]/df[predictors].max()
df.describe().transpose()
# Creating the sets and splitting data
X = df[predictors].values
y = df[target_column].values
y = y.astype('int')  # casting the target prices to integers
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=1)
df.reset_index()
#Fitting the training data
mlp = MLPClassifier(hidden_layer_sizes=(20,16), activation='relu', solver='adam', max_iter=1000)
mlp.fit(X_train, y_train.ravel())
#Predicting training and test data
predict_train = mlp.predict(X_train)
predict_test = mlp.predict(X_test)
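To illustrate what I mean by the outputs being between 0 and 1, this is a quick way to inspect what the model actually returns (just a small check snippet; it assumes numpy is available as np):
import numpy as np
# Inspecting the distinct values the model predicts and the labels it learned
print(np.unique(predict_test))   # the distinct values that show up in the predictions
print(mlp.classes_)              # the class labels MLPClassifier derived from y_train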
After I execute the code I also get this warning:
UndefinedMetricWarning: Recall and F-score are ill-defined and being set to 0.0 in labels with no true samples. Use `zero_division` parameter to control this behavior.
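For context, I assume the warning comes from the evaluation step, which I left out of the snippet above; the call looks roughly like this:
from sklearn.metrics import classification_report
# Printing per-label precision/recall/F-score; this kind of call raises the
# UndefinedMetricWarning when some labels never appear in y_test
print(classification_report(y_test.ravel(), predict_test))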
Shouldn't the prediction at least be a continuous numerical variable rather than something limited to values between 0 and 1?
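For comparison, this is roughly the kind of continuous output I was expecting. It is only an illustrative sketch (it swaps my classifier for scikit-learn's MLPRegressor and keeps everything else the same), not my actual code:
from sklearn.neural_network import MLPRegressor
# Same architecture, but as a regressor, so predictions are continuous values
reg = MLPRegressor(hidden_layer_sizes=(20, 16), activation='relu', solver='adam', max_iter=1000)
reg.fit(X_train, y_train.ravel())   # here the target would stay as floats rather than being cast to int
print(reg.predict(X_test)[:5])      # continuous predictions instead of discrete labels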