autodiff_diagram = diagram.ToAutoDiffXd()
autodiff_context = autodiff_diagram.CreateDefaultContext()
autodiff_plant = autodiff_diagram.GetSubsystemByName("plant")
Let's say I want to run the optimization from a bunch of random initializations, reusing the same autodiff diagram (computational graph) each time. What's the cleanest way to do that?
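Roughly what I have in mind is the sketch below, continuing from the snippet above. The names num_restarts, q_lower, q_upper, and cost_of(...) are placeholders for my actual setup, and I'm not sure that re-seeding positions in the same autodiff context like this is the intended pattern:

import numpy as np
from pydrake.autodiffutils import InitializeAutoDiff

# Get the plant's sub-context from the root autodiff context built above.
plant_context = autodiff_plant.GetMyMutableContextFromRoot(autodiff_context)

for _ in range(num_restarts):  # num_restarts, q_lower, q_upper: placeholders
    # Fresh random initial guess. InitializeAutoDiff re-seeds the derivative
    # vectors (identity), so gradients from the previous restart don't carry over.
    q0 = np.random.uniform(low=q_lower, high=q_upper)
    autodiff_plant.SetPositions(plant_context, InitializeAutoDiff(q0))

    cost = cost_of(autodiff_plant, plant_context)  # placeholder AutoDiffXd scalar
    value, gradient = cost.value(), cost.derivatives()
    # ... hand value/gradient to the optimizer, then repeat on the same diagram ...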
I tried searching the pydrake docs for "zero gradient" and similar terms. I found some references to "DiscardGradient", but no actual entry for it in the docs. As a side note, it would be helpful to have some meta-level advice on how to navigate the Drake docs/code so I can answer this type of question myself; I'm having trouble doing that.
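For reference, the closest helpers I did find are in pydrake.autodiffutils; I'm not sure whether ExtractValue is meant to be the Python counterpart of the C++ drake::math::DiscardGradient:

import numpy as np
from pydrake.autodiffutils import InitializeAutoDiff, ExtractValue, ExtractGradient

q_ad = InitializeAutoDiff(np.array([0.1, 0.2, 0.3]))  # seeds identity derivatives
q_values = ExtractValue(q_ad)       # plain float array, derivatives dropped
q_gradient = ExtractGradient(q_ad)  # derivative matrix (identity here)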