I am trying to solve a non-linear optimization problem using the function donlp2 in R. My goal is to find the maximum of the following function:

442.8658*(x1+1)^(0.008752747)*(y1+1)^(0.555782) + (x2+1)^(0.008752747)*(y2+1)^(0.555782)

There are no non-linear constraints. The linear constraints are listed below:

x1+x2<=20000;
y1+y2<=20000;
x1<=4662.41;
x2<=149339;
y1<=14013.94;
y2<=1342738;
x1>=0;
x2>=0;
y1>=0;
y2>=0;

Below is my code:

library(Rdonlp2)  # donlp2() is provided by the Rdonlp2 package

p     <- rep(0, 4)                              # starting point (x1, x2, y1, y2)
par.l <- rep(0, 4)                              # lower bounds
par.u <- c(4662.41, 149339, 14013.94, 1342738)  # upper bounds

# donlp2 minimizes, so return the reciprocal of the function to be maximized
fn <- function(par){
  x1 <- par[1]; y1 <- par[3]
  x2 <- par[2]; y2 <- par[4]
  1 / (442.8658*(x1+1)^(0.008752747)*(y1+1)^(0.555782)
         + (x2+1)^(0.008752747)*(y2+1)^(0.555782))
}

# row 1 of A is (1,1,0,0) -> x1+x2; row 2 is (0,0,1,1) -> y1+y2
A <- matrix(c(rep(c(1,0),2), rep(c(0,1),2)), nrow=2)
lin.l <- c(-Inf, -Inf)    # the sums have no lower limits
lin.u <- c(20000, 20000)  # x1+x2 <= 20000 and y1+y2 <= 20000
ret <- donlp2(p, fn, par.u=par.u, par.l=par.l, A=A, lin.l=lin.l, lin.u=lin.u)

I searched and found some related posts saying that donlp2 can only find the minimum of a function, which is why I minimize the reciprocal of the objective.
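For comparison, minimizing the negative of the objective instead of the reciprocal (the alternative Tom raises in the comments below) would be a one-line change; `fn_neg` and `ret2` are just illustrative names:

# alternative: minimize the negative rather than the reciprocal
fn_neg <- function(par){
  x1 <- par[1]; y1 <- par[3]
  x2 <- par[2]; y2 <- par[4]
  -(442.8658*(x1+1)^(0.008752747)*(y1+1)^(0.555782)
      + (x2+1)^(0.008752747)*(y2+1)^(0.555782))
}
ret2 <- donlp2(p, fn_neg, par.u=par.u, par.l=par.l, A=A, lin.l=lin.l, lin.u=lin.u)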

The code ran without errors, but I have concerns about the results, since I can easily find other feasible values that give a greater outcome, i.e. the returned point is not a true minimizer of the objective function.
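For reference, a quick way to check a candidate solution is to evaluate the raw (untransformed) objective at the returned point and at any hand-picked feasible point; the helper `obj` and the comparison point below are illustrative choices:

# the raw objective, i.e. the quantity actually being maximized
obj <- function(par){
  442.8658*(par[1]+1)^(0.008752747)*(par[3]+1)^(0.555782) +
    (par[2]+1)^(0.008752747)*(par[4]+1)^(0.555782)
}
obj(ret$par)                                  # value at the point donlp2 returned
obj(c(4662.41, 15337.59, 14013.94, 5986.06))  # a hand-picked feasible comparison point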

I also found that when I change the initial values or the lower bounds of x1, x2, y1, y2, the results change dramatically. For example, if I set p <- rep(0,4), par.l <- rep(1,4) instead of p <- rep(0,4), par.l <- rep(0,4), the results change from

$par
[1] 2.410409e+00 5.442753e-03 1.000000e+04 1.000000e+04

to

$par
[1]  2331.748 74670.025  3180.113 16819.887

Any ideas? I appreciate your suggestions and help!

  • I really have not used it before; however, `optim` is a good function as well. –  Mar 02 '18 at 07:08
  • I'm not sure the reciprocal is a good choice, considering machine precision; maybe just the negative would be enough. I would also suggest checking whether the results become more stable when you alter the control parameters of the optimization process (see `?donlp2Control`). – Tom Mar 02 '18 at 10:30
  • For non-convex problems, most optimizers only achieve convergence to a local optimum (an unrelated problem showing how initial points behave: blue: initial -> red: local optimum [link](https://stackoverflow.com/a/48866951/2320035)). This is what you observe. Global minimization is much, much harder (and there is less software for it). The usual approach is to try different starting values, e.g. 100 times: pick a random starting point, optimize, and keep the best result. – sascha Mar 02 '18 at 15:32
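A minimal sketch of sascha's multistart suggestion, reusing the setup above (it assumes the donlp2 result stores the objective value in `$fx`, as the Rdonlp2 package does, and the rescaling is just one simple way to make the random starting points feasible):

# multistart: run donlp2 from many random feasible points and keep the best result
set.seed(1)
best <- NULL
for (i in 1:100) {
  p0 <- runif(4, min = par.l, max = pmin(par.u, 20000))   # random point in the box
  # rescale each pair so the sum constraints hold at the starting point
  if (p0[1] + p0[2] > 20000) p0[1:2] <- p0[1:2] * 20000 / (p0[1] + p0[2])
  if (p0[3] + p0[4] > 20000) p0[3:4] <- p0[3:4] * 20000 / (p0[3] + p0[4])
  ret <- donlp2(p0, fn, par.u=par.u, par.l=par.l, A=A, lin.l=lin.l, lin.u=lin.u)
  if (is.null(best) || ret$fx < best$fx) best <- ret      # fn is minimized: smaller is better
}
best$par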
