I have the probability density functions func1 and func2 (including the support of each) of two random variables. Now I need the probability density function of the sum of these two random variables, which I compute via:
import numpy as np
import scipy.integrate
[...]
def density_add(func1, func2, support):
    return np.vectorize(lambda xi: scipy.integrate.simps(func1(support) * func2(xi - support), support))
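For context, here is a self-contained sketch of how I call this (the Gaussian func1/func2 and the support grid are just illustrative choices, and I use scipy.integrate.simpson, the current name of simps):

```python
import numpy as np
import scipy.integrate

def density_add(func1, func2, support):
    # Numerical convolution: integrate func1(t) * func2(xi - t) over the support grid.
    return np.vectorize(
        lambda xi: scipy.integrate.simpson(func1(support) * func2(xi - support), x=support)
    )

def norm_pdf(x, mu=0.0, sigma=1.0):
    # Standard normal density, used here only as an example input.
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

support = np.linspace(-10, 10, 2001)
conv = density_add(norm_pdf, norm_pdf, support)
# The sum of two independent N(0,1) variables is N(0, sqrt(2)),
# so the convolved density at 0 should be 1/(2*sqrt(pi)) ≈ 0.2821.
print(float(conv(0.0)))
```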
The problem with this is the huge redundancy: many values are calculated more than once. So I tried caching, but ran into problems because the dynamically generated functions have no unique names.
from joblib import Memory
mem = Memory(cachedir="/tmp/joblib", verbose=0)
[...]
def density_add(func1, func2, support):
    return np.vectorize(mem.cache(lambda xi: scipy.integrate.simps(func1(support) * func2(xi - support), support)))
This emits:
/usr/lib/python3/dist-packages/numpy/lib/function_base.py:2232: JobLibCollisionWarning: Cannot detect name collisions for function '<lambda> [...]
/usr/lib/python3/dist-packages/numpy/lib/function_base.py:2232: JobLibCollisionWarning: Possible name collisions between functions '<lambda>' [...]
What is a better approach to cache such dynamically generated functions?
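One direction I have been considering (a sketch under the assumption that in-memory caching within one process is enough, i.e. no persistent joblib cache is needed): memoize on the argument xi with functools.lru_cache inside a named closure, so the cache is keyed by the value, not by the identity of a lambda.

```python
import functools
import numpy as np
import scipy.integrate

def density_add(func1, func2, support):
    # Named inner function, so no lambda is handed to the cache machinery;
    # lru_cache memoizes per xi value, avoiding recomputation.
    @functools.lru_cache(maxsize=None)
    def _density(xi):
        return scipy.integrate.simpson(func1(support) * func2(xi - support), x=support)
    return np.vectorize(_density)
```

This avoids the name-collision warnings, but each call to density_add gets its own fresh cache, so repeated evaluations across different density_add results are still not shared.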