
Drake has an interface where you can give it a generic function as a constraint and it can set up the nonlinearly-constrained mathematical program automatically (as long as the function supports AutoDiff). I have a situation where my constraint does not support AutoDiff (the constraint function conducts a line search to approximate the maximum value of some function), but I have a closed-form expression for the gradient of the constraint. In my case, the math works out so that it's difficult to find a point on this function, but once you have that point it's easy to linearize around it.

I know many optimization libraries will allow you to provide your own analytical gradient when available; can you do this with Drake's MathematicalProgram as well? I could not find mention of it in the MathematicalProgram class documentation.

Any help is appreciated!

dawsonc

1 Answer


It's definitely possible, but I admit we haven't provided helper functions that make it pretty yet. Please let me know if/how this helps; I plan to tidy it up and add it as an example or code snippet that we can reference in Drake.

Consider the following code:

from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')

def cost(x):
    # Quadratic cost with its minimum at x = 1.
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    # When the solver needs gradients, x arrives as an array of AutoDiffXd;
    # print the value and derivatives vector to see what the callback is handed.
    if isinstance(x[0], AutoDiffXd):
        print(x[0].value())
        print(x[0].derivatives())
    return x

cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)

When we register the cost or constraint with MathematicalProgram in this way, we allow it to be called with x as either an array of floats or an array of AutoDiffXd objects -- AutoDiffXd is simply a wrapper around Eigen's AutoDiffScalar (with dynamically allocated derivatives of type double). The snippet above shows roughly how it works: every scalar value has a vector of (partial) derivatives associated with it. On entry to the function, you are passed x with its derivatives set to dx/dx (each entry will be 1 or 0).
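
For concreteness, here is a tiny standalone illustration (not part of the snippets above) of that dx/dx seeding, built by hand with the AutoDiffXd constructor:

import numpy as np
from pydrake.all import AutoDiffXd

# Hand-rolled dx/dx seeding for a 2-vector: each scalar carries a
# derivatives vector with a 1 in its own slot and 0 everywhere else.
x0 = AutoDiffXd(1.5, np.array([1., 0.]))
x1 = AutoDiffXd(-0.5, np.array([0., 1.]))
print(x0.value())        # 1.5
print(x0.derivatives())  # [1. 0.]
print(x1.derivatives())  # [0. 1.]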

Your job is to return a value, call it y, with its value set to the value of your cost/constraint and its derivatives set to dy/dx. Normally all of this happens magically for you, but it sounds like in your case you get to do it yourself.

Here's a very simple code snippet that, I hope, gets you started:

from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')

def cost(x):
    return (x[0]-1.)*(x[0]-1.)

def constraint(x):
    # The constraint is y = 2x. On the AutoDiffXd path we build y by hand:
    # its value is 2*x and its derivatives are 2*dx/dx, i.e. the analytical
    # gradient chained with the derivatives the solver seeded into x.
    if isinstance(x[0], AutoDiffXd):
        y = AutoDiffXd(2*x[0].value(), 2*x[0].derivatives())
        return [y]
    return 2*x

cost_binding = prog.AddCost(cost, vars=x)
constraint_binding = prog.AddConstraint(
    constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)
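
To connect this back to your setting (value from a line search, gradient in closed form), here is a hedged sketch along the same lines; black_box_value and closed_form_gradient are hypothetical stand-ins for your line-search routine and your analytical Jacobian. The chain rule multiplies that Jacobian by the incoming derivatives dx/dz, which keeps the sketch correct even when those derivatives aren't the identity:

import numpy as np
from pydrake.all import AutoDiffXd, MathematicalProgram, Solve

def black_box_value(x_float):
    # Hypothetical stand-in for the non-differentiable line search;
    # here it just returns x^2 so the sketch runs end to end.
    return np.array([x_float[0]**2])

def closed_form_gradient(x_float):
    # Hypothetical stand-in for the analytical Jacobian dy/dx, shape (1, n).
    return np.array([[2.0*x_float[0]]])

def constraint(x):
    if isinstance(x[0], AutoDiffXd):
        x_float = np.array([xi.value() for xi in x])
        y = black_box_value(x_float)
        dy_dx = closed_form_gradient(x_float)
        # Chain rule: dy/dz = (dy/dx) @ (dx/dz), where z are whatever
        # variables the solver seeded x's derivatives with respect to.
        dx_dz = np.vstack([xi.derivatives() for xi in x])
        dy_dz = dy_dx @ dx_dz
        return [AutoDiffXd(float(y[0]), dy_dz[0])]
    return black_box_value(x)

prog = MathematicalProgram()
x = prog.NewContinuousVariables(1, 'x')
prog.AddConstraint(constraint, lb=[0.], ub=[2.], vars=x)
result = Solve(prog)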

Let me know?

Russ Tedrake