
I need to set the attribute `out_activation_ = 'logistic'` on an MLPRegressor in sklearn. This attribute is supposed to take the name of one of the relevant activation functions ('relu', 'logistic', 'tanh', etc.). The problem is that I cannot find any way to control this attribute and set it to the function I want. If someone has faced this problem before or knows something more, I would appreciate some help.

I have tried passing it as a parameter to MLPRegressor(): error. I have tried the set_params() method: error. I have tried changing it manually through the Variable Explorer: error. Finally, I used `MLPName.out_activation_ = 'logistic'`, but when I called fit() it changed back to 'identity'. CODE:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor

X_train2, X_test2, y_train2, y_test2 = train_test_split(
    signals_final, masks, test_size=0.05, random_state=17)

scaler2 = MinMaxScaler()
X_train2 = scaler2.fit_transform(X_train2)
X_test2 = scaler2.transform(X_test2)

MatchingNetwork = MLPRegressor(alpha=1e-15, hidden_layer_sizes=(300,),
                               random_state=1, max_iter=20000,
                               activation='logistic', batch_size=64)
MatchingNetwork.out_activation_ = 'logistic'  # reverts to 'identity' after fit()
```
  • Please post the code with the error trace. I was looking at the [source code](https://github.com/scikit-learn/scikit-learn/blob/7db5b6a98/sklearn/neural_network/_multilayer_perceptron.py#L1254); this is an attribute, not a parameter, so are you sure you are looking at the right model? For example, `regr.out_activation_='logistic'` will work, but `regr.out_activation_='sigmoid'` will raise a `KeyError`. – simpleApp Jan 31 '23 at 13:11
  • Sorry, I corrected the term 'sigmoid' to 'logistic'. I was already using 'logistic' correctly; it does not work. – Evangelos Galaris Jan 31 '23 at 13:13
  • Okay, I can see that after initializing you set `MLPRegressor().out_activation_ = 'logistic'`. Where did you notice that it is back to 'identity'? Post the code in your question section, as it is easier to read and refer to in the future. Thanks. – simpleApp Jan 31 '23 at 13:26

1 Answer


You cannot. The output activation is determined by the problem type at fit time. For regression, the identity activation is used; see the User Guide.
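For example, pre-setting the attribute has no effect, because fit() re-initializes it (a minimal sketch with toy random data):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(50, 3)
y = np.random.rand(50)

regr = MLPRegressor(hidden_layer_sizes=(10,), max_iter=50, random_state=0)
regr.out_activation_ = 'logistic'  # attempt to pre-set the attribute
regr.fit(X, y)                     # fit() re-initializes and overwrites it

print(regr.out_activation_)        # 'identity', not 'logistic'
```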

Here is the relevant bit of source code. You might be able to hack it by fitting for one iteration, changing the attribute, and then using partial_fit, since this _initialize method won't be called again; but it is likely to break when backpropagating.
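A sketch of that hack on toy data (the gradient computation still assumes the identity output, so treat any resulting fit with suspicion):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(50, 3)
y = np.random.rand(50)              # targets in [0, 1], matching a logistic output

regr = MLPRegressor(hidden_layer_sizes=(10,), max_iter=1, random_state=0)
regr.fit(X, y)                      # one iteration; out_activation_ is 'identity'

regr.out_activation_ = 'logistic'   # override after the first fit
regr.partial_fit(X, y)              # _initialize is skipped on partial_fit

print(regr.out_activation_)         # stays 'logistic'
```

Predictions now pass through the logistic function, so they are bounded in (0, 1), but the training updates use gradients derived for the identity output.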

Generally, I think the sklearn neural networks aren't designed to be very flexible: other packages play that role, are more efficient (can use GPUs), and so on.
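To illustrate the kind of control those packages give you, here is a hypothetical toy sketch in plain NumPy (not sklearn's implementation) of a one-hidden-layer regressor whose output layer is explicitly logistic:

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# hypothetical tiny MLP: 3 inputs -> 10 hidden units (ReLU) -> 1 logistic output
W1, b1 = rng.normal(size=(3, 10)), np.zeros(10)
W2, b2 = rng.normal(size=(10, 1)), np.zeros(1)

def forward(X):
    h = np.maximum(0.0, X @ W1 + b1)   # ReLU hidden layer
    return logistic(h @ W2 + b2)       # logistic output: values in (0, 1)

X = rng.random((5, 3))
print(forward(X).ravel())              # all predictions lie in (0, 1)
```

In frameworks like PyTorch or Keras, choosing the output activation is just as direct: you pick it when you define the last layer.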

Ben Reiniger