
Is there a way to obtain loss value at each iteration while training a logistic regression?

Python sklearn show loss values during training has a working example for SGDRegressor, but it does not work for logistic regression.

– haneulkim

2 Answers


I think you should change the verbose parameter or remove it. It works for me when it is removed; by default, verbose=0.

import sys
from io import StringIO

import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LogisticRegression

old_stdout = sys.stdout
sys.stdout = mystdout = StringIO()
clf = LogisticRegression()
clf.fit(X_tr, y_tr)
sys.stdout = old_stdout
loss_history = mystdout.getvalue()
loss_list = []
for line in loss_history.split('\n'):
    if "loss: " not in line:
        continue
    loss_list.append(float(line.split("loss: ")[-1]))
plt.figure()
plt.plot(np.arange(len(loss_list)), loss_list)
plt.xlabel("Time in epochs")  # set labels before saving, or they are lost
plt.ylabel("Loss")
plt.savefig("warmstart_plots/pure_LogRes:"+".png")
plt.close()
– Eddy Piedad
    Did this work for anyone? Didn't work for me in a Jupyter notebook. Also, what is the point about using `**params` here instead of `**kwargs`? There is no mention of those variables, so they could be literally anything. The only parameter that I could see being relevant here is `verbose=1` and that's not even mentioned. Wild that scikit-learn doesn't have functionality for this natively. – marvin Oct 07 '22 at 23:44
  • I reran the code and you're right, verbose is the only relevant parameter. Updating the code. – Eddy Piedad Oct 29 '22 at 01:23

As mentioned in the comments, the workaround using sys.stdout unfortunately does not work for the LogisticRegression() class, although it does for SGDClassifier().

I did get this to work by running the Python file from the terminal and piping the output to a file directly:

python3 logreg_train.py > terminal_output.txt

Then one can parse the output to extract the change in training loss.
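A small parser along these lines can pull the losses back out of that file. This is a sketch: the sample text below imitates SGDClassifier's verbose epoch lines, which is an assumption about what the training script printed, and extract_losses is a hypothetical helper name:

```python
def extract_losses(text):
    """Return the numeric loss from every line containing 'loss: '."""
    return [
        float(line.rsplit("loss: ", 1)[-1])
        for line in text.splitlines()
        if "loss: " in line
    ]

# Example input mimicking the captured verbose output; in practice this
# would come from open("terminal_output.txt").read().
sample = """-- Epoch 1
Norm: 1.23, NNZs: 5, Bias: 0.10, T: 200, Avg. loss: 0.693147
-- Epoch 2
Norm: 1.50, NNZs: 5, Bias: 0.20, T: 400, Avg. loss: 0.512000"""

print(extract_losses(sample))  # -> [0.693147, 0.512]
```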