
I have a question about the coefficients of a Multilayer Perceptron (MLP).
My sample code and data are as follows:

**T,x1,x2,x3,x4,x5,x6**
2.1186,-0.5449,-0.6028,-0.5626,-0.4564,-0.4607,-0.4648
-0.4109,-0.5015,-0.5595,-0.5191,-0.5360,-0.5033,-0.5346
-0.6174,-0.5775,-0.6028,-0.5626,-0.2720,-0.2601,-0.2742
2.0325,0.0740,0.0467,0.0887,-0.1686,-0.1603,-0.1704
0.2086,1.0944,1.0647,1.1088,2.0132,2.0126,2.0116
0.7248,-0.5720,-0.6136,-0.5897,-0.3825,-0.4096,-0.3851
0.5183,-0.4580,-0.5162,-0.4757,-0.5871,-0.6311,-0.6243
-0.6518,-0.4472,-0.4729,-0.4323,-0.3453,-0.3261,-0.3267
1.1205,1.2358,1.1724,1.2177,1.3076,1.3099,1.3092
-0.5658,-0.0897,-0.1465,-0.1075,-0.3556,-0.3419,-0.3647
1.3442,-0.3603,-0.3430,-0.3455,0.1415,0.1456,0.1473
1.3959,-0.1106,-0.0832,-0.1284,-0.4933,-0.5033,-0.4748
-1.0992,-0.2300,-0.2564,-0.2152,0.0435,0.0495,0.0474
-0.5658,-0.5449,-0.5595,-0.5626,-0.5700,-0.5459,-0.5346
0.0881,-0.3712,-0.4296,-0.3889,0.2139,0.2140,0.2113
1.6712,-0.0237,-0.0399,-0.0415,0.9342,0.9352,0.9316
-0.8067,-0.3510,-0.4049,-0.3687,-0.3675,-0.3567,-0.3763
-0.1356,3.2446,3.2939,3.2584,1.6339,1.6331,1.6319
0.1914,-0.2528,-0.2759,-0.2380,-0.5104,-0.5992,-0.5794
-1.0304,1.5722,1.5408,1.5865,-0.2839,-0.2888,-0.2848
-0.6174,-0.5883,-0.6028,-0.6060,-0.4763,-0.5033,-0.5047
-0.0323,-0.2409,-0.2564,-0.2586,-0.1259,-0.1256,-0.1327
0.1741,-0.5594,-0.5306,-0.5445,-0.4933,-0.4607,-0.5047
-0.1700,-0.4472,-0.4296,-0.4323,-0.3502,-0.3486,-0.3505

import pandas as pd
from sklearn import linear_model
from sklearn.metrics import mean_absolute_error
from sklearn.neural_network import MLPRegressor

def demo():
    df = pd.read_csv('sample.csv')

    df_train = df.head(17)
    x_train = df_train[['x1', 'x2', 'x3', 'x4', 'x5', 'x6']].values
    y_train = df_train['T'].values

    df_test = df.tail(7)
    x_test = df_test[['x1', 'x2', 'x3', 'x4', 'x5', 'x6']].values
    y_test = df_test['T']

    linear_regr = linear_model.LinearRegression(fit_intercept=False)
    linear_regr.fit(x_train, y_train)
    y_pred_mlr = linear_regr.predict(x_test)

    regrMLP = MLPRegressor(hidden_layer_sizes=(9, 3), activation='relu', solver='adam', alpha=0.01)
    regrMLP.fit(x_train, y_train)
    y_pred_mlp = regrMLP.predict(x_test)

    print(y_pred_mlr)
    print(y_pred_mlp)

    print('MAE MLR: %.4f' % mean_absolute_error(y_test, y_pred_mlr))
    print('MAE MLP: %.4f' % mean_absolute_error(y_test, y_pred_mlp))

    print("Coefficient MLR: ", linear_regr.coef_)
    print("Coefficient: ", 'HOW TO GET COEFFICIENT LIKE MLR???')
if __name__ == '__main__':
    demo()

The MLR model is Y = β0 + β1X1 + ... + βnXn, and we can read the βi from the fitted model's coef_ attribute.
With an MLP, however, I don't know how to get anything comparable.
Please explain how to do this, and whether it is also possible for other models such as GRNN or bagging/random forest.
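
For reference, in scikit-learn an MLPRegressor does not expose a single coefficient vector the way LinearRegression.coef_ does; its fitted weights live in coefs_ (one weight matrix per layer) and the biases in intercepts_ (one bias vector per layer). A RandomForestRegressor has no coefficients either, but it exposes feature_importances_ after fitting; scikit-learn ships no GRNN estimator, so that case is not shown. Below is a minimal sketch of how these attributes could be inspected; the toy data and the variable name regrRF are only illustrative, not part of the original code.

import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import RandomForestRegressor

# toy data with the same shape as the question (17 rows, 6 features)
rng = np.random.RandomState(0)
x_train = rng.randn(17, 6)
y_train = rng.randn(17)

regrMLP = MLPRegressor(hidden_layer_sizes=(9, 3), activation='relu', solver='adam', alpha=0.01, max_iter=2000)
regrMLP.fit(x_train, y_train)

# coefs_ is a list with one weight matrix per layer,
# intercepts_ a list with one bias vector per layer
for i, (w, b) in enumerate(zip(regrMLP.coefs_, regrMLP.intercepts_)):
    print('layer %d: weights %s, biases %s' % (i, w.shape, b.shape))

# random forest: per-feature importances instead of coefficients
regrRF = RandomForestRegressor(n_estimators=100, random_state=0)
regrRF.fit(x_train, y_train)
print('feature importances:', regrRF.feature_importances_)

For the (9, 3) network above this prints weight matrices of shape (6, 9), (9, 3) and (3, 1), so a per-feature effect cannot be read off directly the way βi can in MLR.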
