
I am using scipy.optimize.minimize for nonlinear constrained optimization.

I tested two methods (trust-constr, SLSQP).

On a machine (Ubuntu 20.04.1 LTS) where `nproc` reports 32 cores,

scipy.optimize.minimize(..., method='trust-constr', ...) uses multiple cores (around 1600% CPU usage)

scipy.optimize.minimize(..., method='SLSQP', ...) only uses one core

According to another post (scipy optimise minimize -- parallelisation options), this is not a Python problem but a BLAS/LAPACK/MKL one. However, if it were purely a BLAS problem, I would expect all methods to be limited to a single core.

In the post, someone replied that SLSQP uses multiple cores.

Does the parallelization support of scipy.optimize.minimize depend on the chosen method?

How can I make SLSQP use multiple cores?

One observation I made by looking into

anaconda3/envs/[env_name]/lib/python3.8/site-packages/scipy/optimize

trust-constr is implemented in Python (the _trustregion_constr directory)

SLSQP is implemented in C (the _slsqp.cpython-38-x86_64-linux-gnu.so file)
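One quick way to test the BLAS hypothesis (a sketch; which variable matters depends on whether your NumPy/SciPy link against OpenBLAS, MKL, or an OpenMP-based BLAS) is to cap the thread pools before NumPy/SciPy are imported and watch whether trust-constr drops to ~100% CPU:

```python
import os

# Cap the BLAS/OpenMP thread pools *before* importing numpy/scipy; these
# libraries read the variables at import/init time. If trust-constr then
# runs at ~100% CPU, its parallelism came from the BLAS/LAPACK layer
# rather than from scipy's optimizer code itself.
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS", "MKL_NUM_THREADS"):
    os.environ[var] = "1"

# ... now import numpy/scipy and run the optimization as usual.
```

The same effect can be had by exporting those variables in the shell before launching Python.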

  • Make sure you parallelized the things which are in your hand: func-eval, gradient/hessian-eval. This usually goes hand-in-hand with BLAS/LAPACK and numpy/scipy's usage of those. In the sparse-case this might be non-parallelized (scipy.sparse A*x parallelized?). The internals of SLSQP are not C, but Fortran from 1983 and do not look parallel at all. You cannot change this. [slsqp](https://github.com/scipy/scipy/blob/master/scipy/optimize/slsqp/slsqp_optmz.f) – sascha Dec 03 '20 at 12:25

1 Answer


On parsing the `_slsqp.py` source file, you may notice that scipy's SLSQP does not use MPI or multiprocessing (or any parallel processing).

Adding some sort of multiprocessing/MPI support is not trivial, because you would have to do surgery on the backend to add the necessary MPI barriers/synchronization points (and make sure that all processes/threads run in sync, with the main "optimizer" loop running only on a single core).
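Short of that surgery, one caller-side workaround (a sketch, not part of scipy's API; `parallel_fd_grad` is a hypothetical helper) is to compute your own finite-difference gradient in parallel and hand it to SLSQP via the `jac=` argument, so the serial Fortran core at least receives precomputed derivatives:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_fd_grad(f, x, eps=1e-6, workers=4):
    """Forward-difference gradient with the n perturbed evaluations run
    concurrently. Threads only pay off if f releases the GIL (e.g. heavy
    NumPy/BLAS work inside f); for pure-Python objectives, swap in
    ProcessPoolExecutor (f must then be a picklable top-level function)."""
    f0 = f(x)
    points = []
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps          # perturb one design variable at a time
        points.append(xp)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        fvals = list(ex.map(f, points))
    return [(fi - f0) / eps for fi in fvals]
```

Passing `jac=lambda x: parallel_fd_grad(objective, x)` to `scipy.optimize.minimize(..., method='SLSQP')` stops SLSQP's wrapper from doing its own one-at-a-time finite differencing of the objective.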

If you're heading down this path, it's worth mentioning that SLSQP as implemented in SciPy has an inefficient order of operations. When computing derivatives, it first perturbs all design variables to find the gradient of the objective function (a wrapper function is created at runtime for this), and then SLSQP's Python wrapper computes the gradients of the constraint functions by perturbing each design variable again.

If speeding up SLSQP is critical, fixing this order of operations in the backend (where objective and constraint gradients get separate treatment) matters for the many problems in which calculating objectives and constraints shares a lot of common operations. Both backend updates belong in this category; something for the dev forums to ponder.
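A user-side mitigation for that shared-work problem, under the assumption that your objective and constraints call one common expensive analysis (`_common` below is a hypothetical stand-in), is to memoize that analysis per design point so the separate objective- and constraint-gradient passes reuse it:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def _common(x_tuple):
    # Hypothetical expensive shared analysis (e.g. a simulation), keyed
    # on the (hashable) design point. When SLSQP's wrapper evaluates the
    # objective and the constraints at the same x, the second call is a
    # cache hit instead of a recomputation.
    return sum(v * v for v in x_tuple)

def objective(x):
    return _common(tuple(x))

def constraint(x):
    # Feasible when the shared quantity stays below 1.
    return 1.0 - _common(tuple(x))
```

Note that finite-difference perturbations produce slightly different `x` values, so the cache only saves the duplicated evaluations at identical points; the duplicated perturbation loops themselves remain a backend issue.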

– ansri