
I'm trying to use JuMP to solve a non-linear problem where the number of variables is decided by the user - that is, it is not known at compile time.

To accomplish this, the @NLobjective line looks like this:

@eval @JuMP.NLobjective(m, Min, $(Expr(:call, :myf, [Expr(:ref, :x, i) for i=1:n]...)))

so that, for instance, if n=3, the line expands to the equivalent of:

@JuMP.NLobjective(m, Min, myf(x[1], x[2], x[3]))

The issue is that @eval evaluates its expression in the global scope, so when this line is placed inside a function, the local m and x are not visible and an error is thrown.
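For illustration, here is a minimal sketch of the same failure (the function name is made up; the point is only that @eval runs at global scope, where a function's locals are not visible):

function demo()
    m = 42
    # @eval evaluates its expression in the module's global scope at run time,
    # so it looks for a *global* m and fails:
    @eval println(m)    # ERROR: UndefVarError: m not defined
end

demo()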

My question is: how can I accomplish the same thing -- getting @NLobjective to call myf with a variable number of arguments x[1],...,x[n] -- inside the local scope of a function, where n is not known until run time?

using JuMP, Ipopt

function testme(n)
    myf(a...) = sum(collect(a).^2)

    m = JuMP.Model(solver=Ipopt.IpoptSolver())

    JuMP.register(m, :myf, n, myf, autodiff=true)
    @JuMP.variable(m, x[1:n] >= 0.5)

    @eval @JuMP.NLobjective(m, Min, $(Expr(:call, :myf, [Expr(:ref, :x, i) for i=1:n]...)))
    JuMP.solve(m)
end

testme(3)

Thanks!

Daniel R. Livingston

1 Answer


As explained in http://jump.readthedocs.io/en/latest/nlp.html#raw-expression-input, objective functions can be given without the macro. The relevant expression:

    JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))

is even simpler than the @eval-based one and works inside the function. The full code is:

using JuMP, Ipopt

function testme(n)
    # user-defined function of a variable number of arguments
    myf(a...) = sum(collect(a).^2)

    m = JuMP.Model(solver=Ipopt.IpoptSolver())

    # register myf as an n-argument nonlinear function
    JuMP.register(m, :myf, n, myf, autodiff=true)
    @JuMP.variable(m, x[1:n] >= 0.5)

    # build the objective expression directly; the x[i] are the JuMP variables
    JuMP.setNLobjective(m, :Min, Expr(:call, :myf, [x[i] for i=1:n]...))
    JuMP.solve(m)
    return [getvalue(x[i]) for i=1:n]
end

testme(3)

and it returns:

julia> testme(3)

...

EXIT: Optimal Solution Found.
3-element Array{Float64,1}:
 0.5
 0.5
 0.5
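The key point is that the x[i] in the comprehension are the actual JuMP variable objects, spliced into the Expr as values, so nothing has to be resolved by name in global scope. A rough sketch of what gets built (illustrative only; dump is used here just to inspect the expression):

using JuMP, Ipopt

# Build the raw objective expression for n = 3 and inspect it: the arguments
# after :myf are JuMP variable objects, not unresolved symbols.
m = JuMP.Model(solver=Ipopt.IpoptSolver())
@JuMP.variable(m, x[1:3] >= 0.5)
ex = Expr(:call, :myf, [x[i] for i=1:3]...)
dump(ex)    # :call with :myf followed by three Variable values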
Dan Getz
  • That's a nice trick, hadn't even thought of it myself. For anyone reading this, I would caution against using `autodiff=true` with high-dimensional input functions. The current implementation uses forward-mode AD which does not scale well as the input dimension increases. – mlubin Jun 23 '17 at 16:12
  • @mlubin What are the alternatives for `autodiff`? – Hans W. Oct 08 '17 at 16:33
  • [ReverseDiff.jl](https://github.com/JuliaDiff/ReverseDiff.jl) is a good reverse-mode implementation that you can use to provide a gradient function to JuMP. – mlubin Oct 09 '17 at 01:28
  • If `x` is of length `n`, you could replace `[x[i] for i=1:n]...` with `x...`. @mlubin Perhaps this example should be in the docs? It took me a while to find it. – a06e Nov 05 '17 at 03:17
  • Suggested edit queue is full, but some syntax here is out-of-date. [Here](https://jump.dev/JuMP.jl/stable/manual/nlp/#Raw-expression-input) is the current working docs link. – M. Thompson Jun 29 '21 at 20:20
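
As the comments above note, a hand-written (or ReverseDiff-generated) gradient can be supplied instead of autodiff=true. A minimal sketch, assuming the JuMP 0.18-era register(m, :name, dim, f, ∇f) form used in the answer, where the gradient function fills its first argument in place (the name myf_grad is made up):

# Gradient of myf(a...) = sum(collect(a).^2): the partial w.r.t. a[i] is 2*a[i]
function myf_grad(g, a...)
    for i in 1:length(a)
        g[i] = 2 * a[i]
    end
end

# Register with an explicit gradient instead of autodiff=true:
JuMP.register(m, :myf, n, myf, myf_grad)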