I have a Python function with two inputs, X0 and X1, and one output, Y:
def mySimulator(x):
    y0 = 1.0 - x[0] * x[1]
    y = [y0]
    return y
I would like to estimate the variance of the output Y from a Taylor decomposition when, for example, X0 ~ Normal(1, 3) and X1 ~ Normal(2, 4) (the parameters being the mean and the standard deviation).
This is easy with OpenTURNS:
import openturns as ot

# Wrap the Python function into an OpenTURNS function
g = ot.PythonFunction(2, 1, mySimulator)

# Define the input distribution (independent marginals)
distX0 = ot.Normal(1.0, 3.0)
distX1 = ot.Normal(2.0, 4.0)
X = ot.ComposedDistribution([distX0, distX1])

# Propagate the input random vector through the function
XRV = ot.RandomVector(X)
Y = ot.CompositeRandomVector(g, XRV)

# Estimate the moments of Y from a Taylor expansion
taylor = ot.TaylorExpansionMoments(Y)
sigma2 = taylor.getCovariance()
print(sigma2)
The previous script prints:
>>> print(sigma2)
[[ 52 ]]
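If I am not mistaken, this agrees with the first-order Taylor formula computed by hand: for independent inputs, Var(Y) ~ (dY/dX0)^2 * Var(X0) + (dY/dX1)^2 * Var(X1), evaluated at the mean point (1, 2), that is, (-2)^2 * 9 + (-1)^2 * 16 = 36 + 16 = 52.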
The covariance from the Taylor expansion is based on the gradient of the function at the mean point and on the covariance of the input distribution. The issue is that the gradient of the function is computed here by finite differences. This is a pity, since the exact gradient is straightforward to compute:
def myGradient(x):
    dy0dx0 = -x[1]
    dy0dx1 = -x[0]
    # One row per input, one column per output
    gradient = [[dy0dx0], [dy0dx1]]
    return gradient
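As a sanity check (a minimal sketch, reusing g and mySimulator as defined above), this analytic gradient agrees with the finite-difference gradient that OpenTURNS computes by default:

xmean = [1.0, 2.0]        # means of X0 and X1
print(g.gradient(xmean))  # finite differences: approximately [[-2], [-1]]
print(myGradient(xmean))  # exact: [[-2], [-1]]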
However, I do not know how to define a PythonFunction
which has a user-defined gradient: is that possible?
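Ideally, I imagine passing the gradient as an extra argument to PythonFunction, along these lines (a hypothetical call; I could not confirm in the pages below that such an argument exists):

g = ot.PythonFunction(2, 1, mySimulator, gradient=myGradient)  # hypothetical 'gradient' argument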
I searched the documentation and found the following pages, which are useful but do not lead me to the solution:
- http://openturns.github.io/openturns/latest/examples/reliability_sensitivity/central_tendency.html shows how to compute this, but from a symbolic function, whereas I want to use a Python function.
- http://openturns.github.io/openturns/latest/examples/functional_modeling/python_function.html shows how to define a PythonFunction and compute its gradient, but I want to set a user-defined gradient, not use the default finite difference implementation.