
I'm trying to code an optimization problem in Julia using JuMP. The objective function contains two matrix multiplications.
First, multiply the vector `w` of size (10) by the matrix `arr_C` of size (20, 10). For the dimensions to match, `w` is transposed to (1, 10) and multiplied by `arr_C'`, which is (10, 20).
Second, multiply the vector `w_each_sim` of size (20) by the result of the first multiplication, which is a vector of size (20). So this multiplication is (1×20) × (20×1) and yields a scalar. Please read to the last line of the question, because I have applied updates according to the suggestions. My first try is as follows:

julia> using JuMP, Ipopt

julia> a = rand(20, 10);

julia> b = rand(20); b = b./sum(b)

julia> function port_opt(
            n_assets::Int8,
            arr_C::Matrix{Float64},
            w_each_sim::Vector{Float64})
        """
        Calculate weight of each asset through optimization

        Parameters
        ----------
            n_assets::Int8 - number of assets
            arr_C::Matrix{Float64} - array of C
            w_each_sim::Vector{Float64} - weights of each similar TW

        Returns
        -------
            w_opt::Vector{Float64} - weights of each asset
        """

        model = Model(Ipopt.Optimizer)
        @variable(model, 0<= w[1:n_assets] <=1)
        @NLconstraint(model, sum([w[i] for i in 1:n_assets]) == 1)
        @NLobjective(model, Max, w_each_sim * log10.([w[i]*arr_C[i] for i in 1:n_assets]))
        optimize!(model)

        @show value.(w)
        return value.(w)
    end


julia> port_opt(Int8(10), a, b)
ERROR: UndefVarError: i not defined
Stacktrace:
 [1] macro expansion
   @ C:\Users\JL\.julia\packages\JuMP\Z1pVn\src\macros.jl:1834 [inlined]
 [2] port_opt(n_assets::Int8, arr_C::Matrix{Float64}, w_each_sim::Vector{Float64})
   @ Main e:\MyWork\paperbase.jl:237
 [3] top-level scope
   @ REPL[4]:1

The problem is with the @NLconstraint line. How can I make this code work and get the optimization done?
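
To pin down the intended math (this is just the plain computation, outside JuMP): with the test data above and a random feasible `w` for illustration, the scalar I want to maximize is

julia> w = rand(10); w = w./sum(w);   # a random feasible point, for illustration only

julia> first_expr = a * w;            # equals (w' * a')', a vector of size 20

julia> b' * log10.(first_expr)        # (1×20) × (20×1) -> the scalar objective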

Additional tests

As @Shayan suggested, I rectified the objective function as follows:

function Obj(w, arr_C, w_each_sim)

    first_expr = w'*arr_C'
    second_expr = map(first_expr) do x
        log10(x)
    end

    return w_each_sim * second_expr
end

function port_opt(
        n_assets::Int8,
        arr_C::Matrix{Float64},
        w_each_sim::Vector{Float64})
    

    ....
    ....
    @NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)
    @NLobjective(model, Max, Obj(w, arr_C, w_each_sim))
    optimize!(model)

    @show value.(w)
    return value.(w)
end

a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# Threw this:
ERROR: Unexpected array VariableRef[w[1], w[2], w[3], w[4], w[5], w[6], w[7], w[8], w[9], w[10]] in nonlinear expression. Nonlinear expressions may contain only scalar expressions.

Now, based on @PrzemyslawSzufel's suggestions, I tried this:

function Obj(w, arr_C::Matrix{T}, w_each_sim::Vector{T}) where {T<:Real}

    first_expr = zeros(T, length(w_each_sim))
    for i∈eachindex(w_each_sim), j∈eachindex(w)
        first_expr[i] += w[j]*arr_C[i, j]
    end

    second_expr = map(first_expr) do x
        log(x)
    end

    res = 0
    for idx∈eachindex(w_each_sim)
        res += w_each_sim[idx]*second_expr[idx]
    end

    return res
end

function port_opt(
        n_assets::Int8,
        arr_C::Matrix{Float64},
        w_each_sim::Vector{Float64})

    model = Model()
    @variable(model, 0<= w[1:n_assets] <=1)
    @NLconstraint(model, +(w...) == 1)
    register(model, :Obj, Int64(n_assets), Obj, autodiff=true)
    @NLobjective(model, Max, Obj(w, arr_C, w_each_sim))
    optimize!(model)

    @show value.(w)
    return value.(w)
end

a, b = rand(20, 10), rand(20); b = b./sum(b);
port_opt(Int8(10), a, b)

# threw this
ERROR: Unable to register the function :Obj because it does not support differentiation via ForwardDiff.

Common reasons for this include:

 * the function assumes `Float64` will be passed as input, it must work for any
   generic `Real` type.
 * the function allocates temporary storage using `zeros(3)` or similar. This
   defaults to `Float64`, so use `zeros(T, 3)` instead.
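
Following the two hints in that message, my understanding is that the registered function must (a) take scalar arguments so it can be called as `Obj(w...)`, and (b) allocate its temporary storage with a generic element type so ForwardDiff's dual numbers pass through. A sketch of what that could look like, capturing the global `a` and `b` for brevity (this is my reading of the error, not a verified fix):

function Obj(w::Real...)
    T = promote_type(typeof.(w)...)     # Float64 or a ForwardDiff dual type
    first_expr = zeros(T, length(b))    # zeros(T, n) instead of zeros(n), per the hint
    for i in eachindex(b), j in eachindex(w)
        first_expr[i] += w[j] * a[i, j]
    end
    return sum(b[i] * log10(first_expr[i]) for i in eachindex(b))
end

# then, in the model:
# register(model, :Obj, Int(n_assets), Obj; autodiff = true)
# @NLobjective(model, Max, Obj(w...))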
    Are you sure about `[w[i]*arr_C[i] for i in 1:n_assets]` in the objective function? It seems you want to apply matrix multiplication between `w` and `arr_C`, but the code you wrote doesn't achieve it. I guess it's better to define an objective `function`. What if you try `log10.(w'*arr_C')` ? – Shayan Nov 03 '22 at 12:58
  • @Shayan, Hi! Yes exactly. I tried your suggestion; It threw: `ERROR: log10 is not defined for type GenericAffExpr. Are you trying to build a nonlinear problem? Make sure you use @NLconstraint/@NLobjective.` – Shayan Nov 03 '22 at 13:03

2 Answers


You need to reformulate the constraint either as:

@constraint(model, sum([w[i] for i in 1:n_assets]) == 1)

or

@NLconstraint(model, +(w...) == 1)
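
Note that the generator form (without the array brackets) is also accepted by @NLconstraint:

@NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)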

Regarding the objective, as pointed out in the comments, something is wrong with the vector multiplication. What you want is most likely the following (and this works and the model gets solved):

@NLobjective(model, Max, sum(w_each_sim[i]*log(w[i]*arr_C[i]) for i in 1:n_assets))

EDIT:

See the follow-up discussion on the same question: https://discourse.julialang.org/t/optimization-using-jump/89720

  • Thanks! To ensure that this is exactly what should be formulated: the result of the `w*arr_C` multiplication should be a vector, since `w` is a vector and `arr_C` is a matrix. Then the result of multiplying by `w_each_sim` should be a scalar. But you wrote `w[i]*arr_C[i]`, which isn't a matrix multiplication. Am I right? – Shayan Nov 03 '22 at 13:42
  • This `sum(w_each_sim[i]*log(w[i]*arr_C[i]) for i in 1:n_assets)` isn't a matrix multiplication expression. – Shayan Nov 03 '22 at 13:45
  • Oh, I somehow assumed `arr_C` is a vector - perhaps some editing is needed to match exactly what you are doing. You could extend the loop by introducing a second variable. – Przemyslaw Szufel Nov 03 '22 at 14:07
  • @PrzemyslawSzufel I updated the initial explanation and added my new try at the end. I'd appreciate it if you could check it. Thanks. – Shayan Nov 03 '22 at 14:24
  • This cracks me up. Przemyslaw and I made the same mistake: https://discourse.julialang.org/t/optimization-using-jump/89720. – Oscar Dowson Nov 03 '22 at 20:55
  • The code has no input data, so one looks at the variables and tries to make something up :) – Przemyslaw Szufel Nov 03 '22 at 22:26

This was asked on the JuMP community forum: https://discourse.julialang.org/t/optimization-using-jump/89720

Cross-posting my solution:

using JuMP, Ipopt
a = rand(20, 10)
b = rand(20); b = b./sum(b)

"""
Calculate weight of each asset through optimization

Parameters
----------
    n_assets::Int8 - number of assets
    arr_C::Matrix{Float64} - array of C
    w_each_sim::Vector{Float64} - weights of each similar TW

Returns
-------
    w_opt::Vector{Float64} - weights of each asset
"""
function port_opt(
    n_assets::Int8,
    arr_C::Matrix{Float64},
    w_each_sim::Vector{Float64},
)
    model = Model(Ipopt.Optimizer)
    @variable(model, 0<= w[1:n_assets] <=1)
    @NLconstraint(model, sum(w[i] for i in 1:n_assets) == 1)
    @NLobjective(
        model,
        Max, 
        sum(
            w_each_sim[i] * log10(sum(w[j] * arr_C[i, j] for j in 1:size(arr_C, 2)))
            for i in 1:length(w_each_sim)
        )
    )
    optimize!(model)
    return value.(w)
end

port_opt(Int8(10), a, b)
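
As a quick sanity check, the returned weights should satisfy the simplex constraint up to Ipopt's tolerance, and the objective can be recomputed outside JuMP:

w_opt = port_opt(Int8(10), a, b)
sum(w_opt) ≈ 1              # the @NLconstraint should hold at the optimum
b' * log10.(a * w_opt)      # objective value, computed directly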