
My question is: Can one use a JuMP optimization problem inside a Turing model?

Below is a minimal example of the functionality that I'm trying to obtain:

using Turing
using JuMP
import Ipopt

function find_u(θ)
    # Describe the model
    model = Model(Ipopt.Optimizer)

    set_silent(model)
    @variable(model, 0 <= u <= 10)
    @NLobjective(model, Max, θ * u - u^2)
    # optimize
    JuMP.optimize!(model)

    # return optimum value
    return value(u)
end

@model function turing_model(y)
    # fixed parameters
    σ = 1e-3

    # prior
    θ ~ Uniform(0, 2)

    # Model
    for i in 1:length(y)
        y[i] ~ Normal(find_u(θ), σ)
    end

end

# Simulated data
θs = 1
y = [find_u(θs) + rand(Normal(0, 1e-3)) for i in 1:10]

# inference 
numberofsamples = 500
m_train = turing_model(y);
posterior_chain = sample(m_train, NUTS(100, 0.65), numberofsamples)

The error I get when running this code is MethodError: no method matching Float64(::Dual{ForwardDiff.Tag{Turing.TuringTag, Float64}, Float64, 1})

Judging by similar errors reported elsewhere, this seems to happen because the variable types are not correctly propagated during automatic differentiation: ForwardDiff passes Dual numbers through the model, but find_u (via JuMP) only accepts plain Float64 inputs.

Does anyone have any suggestions?

Thanks!

I have tried to make the find_u(θ) function compatible with the ForwardDiff package, testing for example whether ForwardDiff.gradient(find_u, xin) with an input xin = 1 + Dual(0, 1) runs without error.

But I have not found a way to make it work.
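
For instance, calling find_u directly with a hand-built dual number seems to reproduce the failure (a minimal check; here the Dual is constructed manually rather than by Turing, so its tag differs from the one in the error above):

using ForwardDiff

# JuMP tries to convert the Dual input to a Float64 while building
# the @NLobjective expression, which triggers the MethodError.
find_u(ForwardDiff.Dual(1.0, 1.0))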

1 Answer


The simplest answer is that you cannot use JuMP in a Turing model because JuMP models are not automatically differentiable.

I don't know if Turing lets you provide explicit gradients, but you'd need a way of computing the derivative of the optimal value of u with respect to θ, and that might be quite tricky for an arbitrary JuMP model.
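
For this particular toy problem, though, the sensitivity is available in closed form: the interior stationarity condition θ - 2u = 0 gives u*(θ) = θ/2, and hence du*/dθ = 1/2. One hypothetical workaround is to overload find_u for ForwardDiff.Dual inputs and propagate that hand-derived derivative (a sketch for this example only, not a general recipe for arbitrary JuMP models):

using ForwardDiff

# Solve the JuMP model on the plain Float64 value, then attach the
# hand-derived sensitivity du*/dθ = 1/2 (valid while the optimum is
# interior, i.e. 0 < θ/2 < 10).
function find_u(θ::ForwardDiff.Dual{T}) where {T}
    uval = find_u(ForwardDiff.value(θ))
    du_dθ = 0.5
    return ForwardDiff.Dual{T}(uval, du_dθ * ForwardDiff.partials(θ))
end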

Oscar Dowson
  • Thanks for your answer. It seems that some JuMP models can be differentiated using DiffOpt: https://github.com/jump-dev/DiffOpt.jl Is it possible to feed Turing the gradients calculated using this package? Thanks! – Marco Tulio Angulo Mar 02 '23 at 18:20
  • DiffOpt works for conic models, not arbitrary nonlinear ones. And I'm not sure if Turing lets you provide explicit gradients. You might get more support if you post on the Julia discourse channel for probabilistic programming: https://discourse.julialang.org/c/domain/probprog/48 – Oscar Dowson Mar 02 '23 at 23:26
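
Note that the toy problem above is in fact a quadratic program once the objective is written with @objective rather than @NLobjective, so DiffOpt's forward mode could in principle compute du*/dθ for it. A rough sketch, with API names taken from the DiffOpt documentation and not verified here:

using JuMP
import DiffOpt
import Ipopt

θ = 1.0

# Wrap the optimizer so DiffOpt can differentiate through the solve.
model = Model(() -> DiffOpt.diff_optimizer(Ipopt.Optimizer))
set_silent(model)
@variable(model, 0 <= u <= 10)
@objective(model, Max, θ * u - u^2)   # quadratic, so no @NLobjective
optimize!(model)

# Perturbing θ by dθ changes the objective by u * dθ, so the forward
# tangent of the objective with respect to θ is the affine expression u.
MOI.set(model, DiffOpt.ForwardObjectiveFunction(), 1.0 * u)
DiffOpt.forward_differentiate!(model)
du_dθ = MOI.get(model, DiffOpt.ForwardVariablePrimal(), u)  # ≈ 0.5 at θ = 1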