
I've been curious whether TensorFlow (TF) can be used for the global optimization of a function. For example, could it be used to efficiently find the ground state of a Lennard-Jones potential? Would it be any better or worse than existing optimization methods, like basin-hopping (BH)?

Part of my research involves searching for the ground state of large, multi-component molecules. Traditional methods (BH, etc.) are good for this, but also quite slow. I've looked into TF, and parts of it seem robust enough to apply to this problem, although my limited web search doesn't turn up any use of TF for it.

OriolAbril
Christopher Mauney

1 Answer


The gradient descent performed to train neural networks considers only a local region of the function. There is thus no guarantee that it will converge to a global minimum (which is actually fine for most machine-learning algorithms; given the very high dimensionality of the spaces involved, one is usually happy to find a good local minimum without having to explore around too much).
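
To see the issue concretely, here is a small, purely illustrative TF snippet with a 1-D double-well function (the function and all constants are made up for this example): started on the wrong side of the barrier, plain gradient descent settles into the shallower local minimum near x ≈ 0.96 and never reaches the global one near x ≈ -1.03.

```python
import tensorflow as tf

# Toy 1-D double well: global minimum near x ≈ -1.03,
# shallower local minimum near x ≈ +0.96.
def f(x):
    return (x ** 2 - 1.0) ** 2 + 0.3 * x

x = tf.Variable(1.5)  # start on the "wrong" side of the barrier
for _ in range(500):
    with tf.GradientTape() as tape:
        loss = f(x)
    x.assign_sub(0.01 * tape.gradient(loss, x))  # plain gradient-descent step

print(x.numpy())  # ≈ 0.96: stuck in the nearby local minimum
```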

That being said, one could certainly use TensorFlow (or any such framework) to implement the local optimizer within the global basin-hopping scheme, e.g. as follows (simplified algorithm; a code sketch follows the list):

  1. Choose a starting point;
  2. Use your local optimizer to get the local minimum;
  3. Apply some perturbation to the coordinates of this minimum;
  4. From this new position, re-use your local optimizer to get the next local minimum;
  5. Keep the best minimum found so far and repeat from step 3.
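
Here is a minimal sketch of that loop, assuming a plain TF gradient-descent routine as the local optimizer. The function names (local_minimize, basin_hop), the Gaussian perturbation, and all step counts are illustrative choices, not from any library:

```python
import numpy as np
import tensorflow as tf

def local_minimize(tf_func, x0, steps=200, lr=0.01):
    """Plain TF gradient descent from x0; stands in for any local optimizer."""
    x = tf.Variable(x0, dtype=tf.float64)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            loss = tf_func(x)
        x.assign_sub(lr * tape.gradient(loss, x))
    return x.numpy(), float(tf_func(x))

def basin_hop(tf_func, x0, n_hops=50, stddev=0.5, seed=0):
    rng = np.random.default_rng(seed)
    # Steps 1-2: starting point and first local minimization.
    best_x, best_val = local_minimize(tf_func, x0)
    x = best_x
    for _ in range(n_hops):
        # Step 3: perturb the coordinates of the current minimum.
        x_perturbed = x + rng.normal(scale=stddev, size=np.shape(x))
        # Step 4: local minimization from the perturbed position.
        x, val = local_minimize(tf_func, x_perturbed)
        # Step 5: keep the best minimum found so far.
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

With the toy double well from the snippet above, a call like basin_hop(f, np.array(1.5)) should typically hop out of the local basin and end up near the global minimum around x ≈ -1.03.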

Actually, some people are currently trying to implement this exact scheme, interfacing TF with scipy.optimize.basinhopping(). Current development and discussion can be found in this GitHub issue.
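
In the meantime, one can already wire a TF-computed value and gradient into SciPy's existing routine by hand. A rough sketch (the make_objective wrapper is mine; passing jac=True in minimizer_kwargs is the documented way to tell the local minimizer that the callable returns both the value and the gradient):

```python
import numpy as np
import tensorflow as tf
from scipy.optimize import basinhopping

def f(x):
    # Toy 1-D double well; substitute your TF-implemented potential here.
    return tf.reduce_sum((x ** 2 - 1.0) ** 2 + 0.3 * x)

def make_objective(tf_func):
    """Wrap a TF function so SciPy receives (value, gradient) as numpy."""
    def value_and_grad(x_np):
        x = tf.Variable(x_np, dtype=tf.float64)
        with tf.GradientTape() as tape:
            loss = tf_func(x)
        return float(loss), tape.gradient(loss, x).numpy()
    return value_and_grad

# jac=True tells the local minimizer that the callable returns both
# the objective value and its gradient.
result = basinhopping(
    make_objective(f),
    x0=np.array([1.5]),
    minimizer_kwargs={"method": "L-BFGS-B", "jac": True},
    niter=50,
)
print(result.x, result.fun)
```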

benjaminplanche