I am using this library - https://pythonhosted.org/pyswarm/ - to find the global minimum of a convex function. This is just to get started before working towards a non-convex function. I found the global minimum using linear regression, but the problem is that PSO seems to converge at different points depending on the values of omega and the phi parameters that I set. I can confirm that these points aren't the global minimum by comparing their cost with the cost of the minimum given by linear regression.
Is it possible for PSO to converge (the value doesn't change after 10 iterations) at a point that is not the global minimum, or am I making a mistake somewhere?
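For reference, this is roughly my setup, reduced to a minimal sketch (the data and cost function here are placeholders, and the keyword names swarmsize, maxiter, omega, phip, phig are the pyswarm pso arguments as I understand them):

```python
import numpy as np
from pyswarm import pso

# Toy convex problem: least-squares fit of y = w0 + w1*x (placeholder data).
rng = np.random.RandomState(0)
x = rng.uniform(-5, 5, 50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, 50)

def cost(w):
    # Mean squared error of the linear model; convex in w.
    return np.mean((y - (w[0] + w[1] * x)) ** 2)

# Closed-form linear regression solution for comparison.
A = np.column_stack([np.ones_like(x), x])
w_lr, *_ = np.linalg.lstsq(A, y, rcond=None)

# PSO over a box that contains the expected solution.
lb, ub = [-10, -10], [10, 10]
w_pso, f_pso = pso(cost, lb, ub,
                   swarmsize=100, maxiter=200,
                   omega=0.5, phip=0.5, phig=0.5)

print("linear regression:", w_lr, cost(w_lr))
print("PSO:              ", w_pso, f_pso)
```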

In general, if you find yourself (with PSO) not making further progress, first inspect the other, non-optimal solutions currently in your swarm. You may then need to adjust the topology of your swarm, e.g. how your particles communicate with each other (do you store just one optimal solution or a small set to draw from on each generation?) and how many particles your swarm has. Don't forget that you can also restart or reseed your optimisation by taking the current state and injecting a few particles with random domain positions, just to help with diversity (see the sketch below). – Matthaus Woolard Dec 25 '18 at 17:47
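A minimal sketch of the restart idea mentioned above, assuming only pyswarm's public pso function (the library does not appear to expose the running swarm's state, so this simply re-runs the optimiser with fresh random swarms and keeps the best result; pso_with_restarts is a hypothetical helper name):

```python
from pyswarm import pso

def pso_with_restarts(cost, lb, ub, n_restarts=5, **pso_kwargs):
    """Run pyswarm's pso several times and keep the best result.

    A crude way to add diversity when a single run stalls: pyswarm does
    not let you inject particles into a running swarm, so each restart
    simply draws a fresh random swarm over [lb, ub].
    """
    best_x, best_f = None, float("inf")
    for _ in range(n_restarts):
        x, f = pso(cost, lb, ub, **pso_kwargs)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Usage: best_x, best_f = pso_with_restarts(cost, lb, ub, swarmsize=100, maxiter=200)
```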
1 Answer
It is absolutely possible for PSO to converge in the wrong place. The thing about metaheuristics is that they can take a lot of time to run, and stalling for ten iterations at a non-optimal point is eminently possible. Furthermore, converging to the absolute global minimum is going to take a very long time, and the algorithm will never be able to prove that it has converged to a global minimum; it can only reach a termination criterion. Your expectation of a metaheuristic should be that it eventually gives you a good answer, not that it always converges to the global minimum.
In compensation for these drawbacks -- long run time, no guarantee of global minimization -- you get an optimization algorithm that can handle any kind of function evaluation or fitness landscape.
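To make the termination point concrete with pyswarm specifically: as I understand the library, the search stops once the improvement in the best objective value falls below minfunc, the best position moves less than minstep, or maxiter is reached. A minimal sketch, assuming those keyword names:

```python
import numpy as np
from pyswarm import pso

def sphere(v):
    # Simple convex test function with its minimum at the origin.
    return float(np.sum(np.asarray(v) ** 2))

lb, ub = [-5, -5], [5, 5]

# Tighter stopping tolerances and more iterations: the run takes longer,
# but "converged" here still only means "stopped improving within the
# tolerances", not "provably at the global minimum".
xopt, fopt = pso(sphere, lb, ub,
                 swarmsize=200,
                 maxiter=1000,
                 minstep=1e-10,   # minimum movement of the swarm's best position
                 minfunc=1e-10)   # minimum improvement of the best objective value
print(xopt, fopt)
```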
