5

I am using the f1_score metric in sklearn. For some training data sets, the number of y=1 (rare-case) samples is zero, so the F1 score is zero, which is normal. But sklearn gives the following warning:

"UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 due to no predicted samples".

Does anyone know how to silence this warning? And, in general, can we silence all kinds of warnings in sklearn?

saunter
  • The accepted answer here seems to have the information you are interested in: https://stackoverflow.com/questions/43162506/undefinedmetricwarning-f-score-is-ill-defined-and-being-set-to-0-0-in-labels-wi – Jason Baumgartner Dec 29 '18 at 08:40
  • Possible duplicate of [UndefinedMetricWarning: F-score is ill-defined and being set to 0.0 in labels with no predicted samples](https://stackoverflow.com/questions/43162506/undefinedmetricwarning-f-score-is-ill-defined-and-being-set-to-0-0-in-labels-wi) – Venkatachalam Dec 29 '18 at 16:07
  • I checked them all, but was not able to find the one I wanted. – saunter Jan 02 '19 at 02:48

2 Answers

16

You can easily ignore the warnings with the help of the warnings module in Python, like this:

import warnings
warnings.filterwarnings('ignore') 

For example,

from sklearn.metrics import f1_score

yhat = [0] * 100
y = [0] * 90 + [1] * 10

print(f1_score(y, yhat))

This will throw the warning. To avoid it, use:

from sklearn.metrics import f1_score

import warnings
warnings.filterwarnings('ignore') 

yhat = [0] * 100
y = [0] * 90 + [1] * 10

print(f1_score(y, yhat))

This won't show the warning.
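
If you would rather not suppress every warning globally, you can filter just this warning class. A minimal sketch, assuming a scikit-learn version that exposes UndefinedMetricWarning in sklearn.exceptions:

import warnings

from sklearn.exceptions import UndefinedMetricWarning
from sklearn.metrics import f1_score

# Silence only this warning class, leaving other warnings visible
warnings.filterwarnings('ignore', category=UndefinedMetricWarning)

yhat = [0] * 100
y = [0] * 90 + [1] * 10

print(f1_score(y, yhat))  # prints 0.0 without the warning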

Sreeram TP
  • Thanks for your info. Actually, I have the line `warnings.filterwarnings('ignore')` at the beginning of my code, but the warning still pops up. My guess is that this "ignore" line only controls warnings from Python modules, not sklearn ones. In your example, you probably need to swap y and yhat in the f1_score call to see the effect, since the first argument is y_true and the second is y_pred. – saunter Jan 02 '19 at 02:47
  • @saunter, I have tried both ways. I am not getting any warning on my machine. Without `warnings.filterwarnings('ignore')` I get the warning, but with that code I do not get any warnings. – Sreeram TP Jan 02 '19 at 09:38
  • Nice to hear it helped. – Sreeram TP Jan 03 '19 at 05:08
  • I face the same issue as @saunter mentioned. – yash Feb 07 '21 at 11:10
  • It does not work for me. – Antonio Sesto Nov 20 '22 at 21:22
0

I add my experience here because some users seem unable to silence the warnings even if they use the warnings.filterwarnings('ignore') solution suggested by @sreeram-tp.

If this is your case and you are running a GridSearchCV or RandomizedSearchCV with parallel jobs (i.e. the n_jobs parameter not equal to 1), this happens because the worker processes spawned by joblib do not inherit the warning filter set in the current process. For more information, consult the answer by @caseygrun.

In my case, to fix this behavior I had to apply both solutions:

import os
import warnings

# Removes warnings in the current process
warnings.filterwarnings('ignore')
# Removes warnings in the worker processes spawned by joblib
os.environ['PYTHONWARNINGS'] = 'ignore'

Please note that setting only the OS environment variable is not enough; both commands are needed.
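
For context, here is a minimal end-to-end sketch of the workaround; the dataset, estimator, and parameter grid are illustrative assumptions, not from the original post:

import os
import warnings

# Both lines must run before the parallel search starts
warnings.filterwarnings('ignore')          # current process
os.environ['PYTHONWARNINGS'] = 'ignore'    # inherited by joblib workers

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Imbalanced data, so some CV folds may predict no positives at all
X, y = make_classification(n_samples=200, weights=[0.95], random_state=0)

# n_jobs != 1 scores candidates in worker processes, which would
# otherwise re-emit UndefinedMetricWarning despite the local filter
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={'C': [0.01, 0.1, 1.0]},
    scoring='f1',
    n_jobs=2,
)
search.fit(X, y)
print(search.best_params_)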

EDIT: I know that this question is about the f1_score metric, but unfortunately other threads that may be more appropriate are closed. Since I do not have enough points to comment, I have to add this answer under a similar question.