
So I wrote a minimal example to show what I'm trying to do. Basically I want to solve an optimization problem with multiple variables. When I try to do this in JuMP, I run into issues with my function obj not being able to take a ForwardDiff object.

I looked at Restricting function signatures while using ForwardDiff in Julia, and it seemed to be an issue with the function signature. I restricted the signature in my obj function, and for good measure in my sub-function as well, but I still get the error:

 LoadError: MethodError: no method matching Float64(::ForwardDiff.Dual{ForwardDiff.Tag{JuMP.var"#110#112"{typeof(my_fun)},Float64},Float64,2})
Closest candidates are:
  Float64(::Real, ::RoundingMode) where T<:AbstractFloat at rounding.jl:200
  Float64(::T) where T<:Number at boot.jl:715
  Float64(::Int8) at float.jl:60

This still does not work. I feel like I have the bulk of the code correct; there is just some weird type issue I have to clear up so the automatic differentiation works.

Any suggestions?

using JuMP
using Ipopt
using LinearAlgebra

function obj(x::Array{<:Real,1})
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = Matrix{Float64}(I, 2, 2)
    eye[2, 2] = var
    return eye
end

m = Model(Ipopt.Optimizer)

my_fun(x...) = obj(collect(x))

@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))

optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))

2 Answers


I found the problem: in my mat_fun the element type of the returned matrix had to be Real in order for the ForwardDiff values to propagate through. Before it was Float64, which is not consistent with the requirement that everything accept type Real for the automatic differentiation. Even though Float64 is clearly a Real, the subtyping alone isn't enough here, i.e. you have to make sure that everything returned and passed in can hold values of type Real.

using JuMP
using Ipopt
using LinearAlgebra

function obj(x::AbstractVector{T}) where {T<:Real}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{Float64}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    # println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T<:Real}
    eye = zeros(Real, (2, 2))
    eye[2, 2] = var
    return eye
end

m = Model(Ipopt.Optimizer)

my_fun(x...) = obj(collect(x))

@variable(m, 0<=x[1:2]<=2.0*pi)
register(m, :my_fun, 2, my_fun; autodiff = true)
@NLobjective(m, Min, my_fun(x...))

optimize!(m)

# retrieve the objective value, corresponding x values and the status
println(JuMP.value.(x))
println(JuMP.objective_value(m))
println(JuMP.termination_status(m))
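
As a quick sanity check outside of JuMP (a minimal sketch of my own, assuming ForwardDiff.jl is installed; the helpers f_bad and f_ok are just for illustration, not part of the original model), the element-type issue can be reproduced directly:

using ForwardDiff
using LinearAlgebra

# Hard-coded Float64 storage rejects the Dual numbers ForwardDiff passes in:
f_bad(x) = (eye = Matrix{Float64}(I, 2, 2); eye[2, 2] = x[1]; sum(eye))
# ForwardDiff.gradient(f_bad, [1.0])  # MethodError: no method matching Float64(::ForwardDiff.Dual...)

# Widening the element type to Real lets the Dual be stored:
f_ok(x) = (eye = zeros(Real, (2, 2)); eye[2, 2] = x[1]; sum(eye))
ForwardDiff.gradient(f_ok, [1.0])     # returns [1.0]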

Use instead:

function obj(x::Vector{T}) where {T}
    println(x)
    x1 = x[1]
    x2 = x[2]
    eye = Matrix{T}(I, 4, 4)
    obj_val = tr(eye - kron(mat_fun(x1), mat_fun(x2)))
    println(obj_val)
    return obj_val
end

function mat_fun(var::T) where {T}
    eye = Matrix{T}(I, 2, 2)
    eye[2, 2] = var
    return eye
end

Essentially, anywhere you see Float64, replace it by the type in the incoming argument.
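
With these generic signatures the Dual numbers propagate all the way through. As a small check of my own (outside of JuMP, assuming ForwardDiff.jl is installed):

using ForwardDiff
using LinearAlgebra

# obj is now generic in its element type, so ForwardDiff's Dual numbers pass
# straight through; the objective is tr(I - kron(mat_fun(x1), mat_fun(x2)))
# = 3 - x1 - x2 - x1*x2, so the gradient is [-1 - x2, -1 - x1]:
ForwardDiff.gradient(obj, [0.5, 1.0])   # [-2.0, -1.5]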

Oscar Dowson