
Why is the loss in this code not equal to the mean squared error on the training data? They should be equal, because I set alpha=0, so there is no regularization.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error


i = 1  # difficulty index

X_train = np.arange(-2,2,0.1/i).reshape(-1,1)
y_train = 1 + np.sin(i*np.pi*X_train/4)

fig = plt.figure(figsize=(8,8))
ax = fig.add_axes([0,0,1,1])
ax.plot(X_train,y_train,'b*-')
ax.set_xlabel('X_train')
ax.set_ylabel('y_train')
ax.set_title('Function')
nn = MLPRegressor(
    hidden_layer_sizes=(1,),  activation='tanh', solver='sgd', alpha=0.000, batch_size='auto',
    learning_rate='constant', learning_rate_init=0.01, power_t=0.5, max_iter=1000, shuffle=True,
    random_state=0, tol=0.0001, verbose=True, warm_start=False, momentum=0.0, nesterovs_momentum=False,
    early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)

nn = nn.fit(X_train, y_train)

predict_train = nn.predict(X_train)



print('MSE training : {:.3f}'.format(mean_squared_error(y_train, predict_train)))

When I ran this code I found loss = 0.02061828, while the MSE on the training data (MSE training) = 0.041.
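For what it's worth, a quick arithmetic check of the two numbers above. This sketch assumes (not verified here) that scikit-learn's MLPRegressor reports its squared loss as half the mean squared error, which would make the two values differ by a factor of two when alpha=0:

```python
# Values reported above: the final loss printed by verbose=True,
# and the MSE computed with mean_squared_error on the training set.
reported_loss = 0.02061828
mse_training = 0.041

# If the reported loss were MSE / 2, the two should nearly match.
half_mse = mse_training / 2
print(half_mse)                                # ≈ 0.0205
print(abs(reported_loss - half_mse) < 1e-3)    # True
```

The small remaining gap would be expected anyway, since the printed loss comes from the last training iteration while the MSE is computed with the final weights.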

Jorge Amaral
  • Kudos for the reproducible example, but if (as it seems) your question has only to do with the *training* set loss, I kindly suggest you remove any reference to test sets (it is irrelevant and just creates unnecessary clutter) – desertnaut Mar 28 '19 at 11:43
    I removed any reference to the test set as suggested. – Jorge Amaral Mar 29 '19 at 18:15
  • This may need to be asked on http://statsexchange.com/ or https://datascience.stackexchange.com/ as it does not involve issues of coding for this forum. – Parfait Mar 29 '19 at 19:04

0 Answers