
I am having convergence issues with a large optimization, and I believe the problem may be related to the way I am declaring the objective and the constraints relative to the gradients provided by the respective components.

Is there a way to provide gradients for a constraint or objective defined in the following manner (with math in the constraint/objective statement): self.driver.add_constraint('separation/10 > %s/10' % minimum_distance), when the component is only providing the gradient of separation, not separation/10?

jthomas

1 Answer


My first thought is to make it 10. instead of 10, to avoid any chance of integer division. Also, you should replace %s with %f, since %s is for strings.
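As a sketch (assuming minimum_distance is just a plain Python float in your script), the corrected declaration would look something like:

# hypothetical corrected version: float literals avoid integer division,
# and %f substitutes the numeric value rather than a string
self.driver.add_constraint('separation/10. > %f/10.' % minimum_distance)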

OpenMDAO uses complex step to compute the derivatives for these kinds of string-defined functions. It should be getting the derivative value correct, but I don't know what the value of minimum_distance happens to be. Things could go wrong if it happens to be an integer, or if the %s formatting is interfering with the complex step (what does the complex part of a string represent, anyway?).
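For illustration only (this is not OpenMDAO's internal code, just a minimal sketch of the complex-step idea): you perturb the input by a tiny imaginary amount and read the derivative off the imaginary part of the output, which only works cleanly when everything in the expression stays a float.

def g(separation):
    # stands in for the parsed constraint expression 'separation/10. - minimum_distance/10.'
    return separation/10. - 2.0/10.

h = 1.0e-30
dg = g(5.0 + 1j*h).imag / h   # complex-step derivative, exactly 0.1 here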

So take a look at that first. SNOPT, which I assume you're using via pyoptsparse, does have a gradient checker. If you're concerned about gradients, you might look into turning it on.
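How you switch the checker on depends on how your driver exposes SNOPT's options, but at the pyoptsparse level it is SNOPT's 'Verify level' option. A sketch, assuming you can get at the SNOPT optimizer object directly:

from pyoptsparse import SNOPT

opt = SNOPT()
opt.setOption('Verify level', 3)   # SNOPT compares your gradients against its own finite differences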

As an aside, in OpenMDAO 1.0 we're going to be changing the way that constraints are handled. Instead of providing a string like this, you would instead compute the value of the variable separation, and then call something like

driver.add_constraint('separation', lower=minimum_distance/10., scaler=1/10.)
Justin Gray
  • I have implemented your suggestions (thanks for catching the errors) and it looks like that solved the main problem. I am still seeing a behavior where the objective jumps up and down as the optimization progresses, but it still converges. This jumpy behavior gets worse as I scale the problem up (more variables and constraints). I think this problem is likely specific to my problem, but is there a chance that the jumpy convergence is related to the way openmdao treats the objective? – jthomas Sep 11 '15 at 15:49
  • So it's working, maybe? At least, the derivatives now look good, but the optimizer itself is jumping around a lot? You might be seeing line-search operations from SNOPT. Those end up looking like really jagged peaks in the output. – Justin Gray Sep 11 '15 at 15:53
  • Thank you also for the explanation of how OpenMDAO handles the gradient when math is included in the objective or gradient assignment. That is helpful to know. – jthomas Sep 11 '15 at 15:54
  • Hmm, I had not thought about the jaggedness being line searches. That sounds reasonable. The peaks alternate high, low, high, low, but with a consistent upward trend. However, the magnitude of the peaks is very large. – jthomas Sep 11 '15 at 16:03
  • Yeah. That's the line search: it takes big steps and does a backtracking thing. That's expected behavior. – Justin Gray Sep 11 '15 at 16:27