Yes, this is very annoying!
The default `joblib` backend spawns additional processes, which do not seem to inherit the warning filters applied with `warnings.filterwarnings`. However, you can use the `PYTHONWARNINGS` environment variable to set warning filters; this affects all newly-spawned processes, since they inherit their environment variables from the main process.
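To make the first point concrete, a filter installed with `warnings.filterwarnings` silences the parent process but not the workers. A minimal sketch (the `work` function is just a stand-in for whatever library call emits the warning in your case):

```python
import warnings
from joblib import Parallel, delayed

def work(i):
    # Hypothetical worker: pretend some library call raises a FutureWarning here.
    warnings.warn("behaviour will change", FutureWarning)
    return i

warnings.filterwarnings("ignore", category=FutureWarning)  # only affects this process
# The FutureWarnings raised inside the worker processes are still printed.
results = Parallel(n_jobs=2)(delayed(work)(i) for i in range(4))
```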
From the relevant documentation page on `PYTHONWARNINGS` and the warnings filter:

> The warnings filter is initialized by `-W` options passed to the Python interpreter command line and the `PYTHONWARNINGS` environment variable. The interpreter saves the arguments for all supplied entries without interpretation in `sys.warnoptions`; the `warnings` module parses these when it is first imported (invalid options are ignored, after printing a message to `sys.stderr`).
Individual warnings filters are specified as a sequence of fields separated by colons:

```
action:message:category:module:line
```
A separate page describes in more detail what each of these fields means, but basically:

- `action` describes what to do with the warning; in your case, you want `ignore` to suppress the message
- `message` is a string that must match the beginning of the warning message for the filter to apply
- `category` is the warning class, e.g. `FutureWarning` or `DeprecationWarning`
- `module` and `line` refer to where the warning is raised

Any of these fields can be empty, and trailing colons can be left off.
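For example, several filters can be combined in a single `PYTHONWARNINGS` value by separating them with commas (the particular filters here are just illustrative):

```python
import os

# Each entry follows action:message:category:module:line; empty fields match anything.
# Multiple filters in PYTHONWARNINGS are separated by commas.
os.environ['PYTHONWARNINGS'] = ','.join([
    'ignore::DeprecationWarning',   # drop every DeprecationWarning
    'ignore::UserWarning',          # drop every UserWarning as well
])
```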
So, to ignore all `FutureWarning`s:
Within a Jupyter notebook, you can do something like

```
%env PYTHONWARNINGS=ignore::FutureWarning
```
Or in a script, add an entry to `os.environ`:

```python
import os
os.environ['PYTHONWARNINGS'] = 'ignore::FutureWarning'
```
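Since the workers only pick up whatever is in the environment when they are spawned, the assignment should come before `joblib` starts its workers. Roughly (again, `work` is just a placeholder):

```python
import os

# Set the filter before any worker processes are created so that they inherit it.
os.environ['PYTHONWARNINGS'] = 'ignore::FutureWarning'

from joblib import Parallel, delayed

def work(i):
    # Placeholder for whatever call emits the FutureWarning in your code.
    return i

results = Parallel(n_jobs=2)(delayed(work)(i) for i in range(4))
```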
It seems there should probably be a way to set environment variables only for the spawned processes, but I can't figure out whether `joblib` exposes an API for this.