
I created a simple model of the energy balance of an off-grid neighborhood, based on solar, wind, and some energy storage. I use PSO (particle swarm optimization) to find the minimum solar and wind capacity required to avoid any loss of power throughout a full year.

More capacity means more cost, so the cost is minimized. Candidate solutions with loss of power should not be considered valid solutions. Could you advise me on how to implement the no-loss-of-power criterion?

What I do now is: when a configuration results in loss of power, I assign that candidate solution a high cost. This seems to work, but is not what you would call very elegant...
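To make the question concrete, here is a minimal sketch of that flat-penalty approach. The function `simulate_year`, the 100 kW threshold, and the capex coefficients are hypothetical stand-ins for the actual neighborhood model, not part of the original post:

```python
BIG_COST = 1e12  # flat cost assigned to any configuration with loss of power

def simulate_year(solar_kw, wind_kw):
    """Toy stand-in for the yearly simulation: pretend 100 kW of combined
    capacity covers demand; anything less leaves unmet energy (kWh)."""
    return max(0.0, 100.0 - solar_kw - wind_kw)

def cost(solar_kw, wind_kw, solar_capex=800.0, wind_capex=1200.0):
    """Capacity cost of a candidate, with a flat penalty for invalid states."""
    unmet_kwh = simulate_year(solar_kw, wind_kw)
    if unmet_kwh > 0:
        return BIG_COST  # every invalid state looks equally bad to the swarm
    return solar_capex * solar_kw + wind_capex * wind_kw
```

The drawback is visible in the last line of the `if`: all invalid configurations receive the same cost, so the swarm gets no signal about which of them is closest to becoming valid.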

Gilbert

1 Answer


My answer is about approaching problems with "invalid" states in general (loss of power in your example), and does not take the chosen optimization method (PSO) into account.

  1. Add a high additive penalty for each "unit" of loss of power. This only works if loss of power is quantifiable: a plain boolean (valid/invalid) won't do, because it does not tell how far we are from a valid solution.

  2. Search only in the sub-space of valid (lossless) configurations. If there is enough freedom in such subspace to run the search, and good valid states completely "surrounded" by invalid states are unlikely, the search will do just fine.
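Option 1 can be sketched as a graded penalty added to the objective. As in the question, `simulate_year`, the 100 kW threshold, and the cost coefficients below are hypothetical placeholders for the real model; only the structure matters:

```python
PENALTY_PER_KWH = 1e6  # large enough to dominate the capacity cost

def simulate_year(solar_kw, wind_kw):
    """Toy stand-in for the yearly energy-balance simulation;
    returns total unserved energy in kWh."""
    return max(0.0, 100.0 - solar_kw - wind_kw)

def cost(solar_kw, wind_kw, solar_capex=800.0, wind_capex=1200.0):
    """Capacity cost plus a penalty proportional to unserved energy."""
    unmet_kwh = simulate_year(solar_kw, wind_kw)
    base = solar_capex * solar_kw + wind_capex * wind_kw
    return base + PENALTY_PER_KWH * unmet_kwh  # graded, not flat
```

Because the penalty scales with the amount of unserved energy, an almost-valid configuration scores better than one far from validity, so the swarm is pulled toward the feasible region instead of seeing a uniform wall of "invalid".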

Gassa
  • Thanks for your advice! Option 2 would not be possible for me, but option 1 seems to work. I will still need to work out how to implement such a penalty. The problem is: when varying only one variable, the optimum value for that variable lies right next to the border where the penalty is given (for those specific values of the other variables). Would it matter if the penalty were an infinite cost? – Gilbert Feb 17 '16 at 17:06
  • 1
    If penalty is infinite, it is as good as a boolean value for valid/invalid: we will know that the state is invalid, but we won't know how far it is from a valid state, so can not optimize that characteristic. This could be a non-problem if every invalid state has many neighboring valid states, but otherwise, getting from invalid to valid would be hard. – Gassa Feb 17 '16 at 19:19
  • For each parameter in my problem there is a minimum value, depending on the values of the other parameters, above which the solution becomes valid. Meaning: on one side there are many valid states. – Gilbert Feb 22 '16 at 12:29
  • A slightly different question (maybe I should start a separate question?): if I try to optimize too many parameters, the swarm travels to the first few best solutions too fast. Could you advise me on a method to approach this problem? Options I came up with: stepwise (optimize the key parameters first, then the less important ones), or lowering the social learning constant. There are probably names for such tactics? – Gilbert Feb 22 '16 at 12:32
  • @Gilbert I am not very familiar with particle swarm optimization, and I don't see how _traveling to the first few best solutions too fast_ can be a bad thing. You mean best for the first few parameters but very bad for the others? – Gassa Feb 22 '16 at 15:00
  • @Gilbert Anyway, I'm used to optimization problems where there is a single real-valued objective function that must be minimized or maximized. All parameters have to add up in that function with some coefficients. To make parameters more or less important, you just alter the coefficients. – Gassa Feb 22 '16 at 15:01
  • @Gilbert Also, if your problem involves different decisions at different moments of time, I suggest you look into [beam search](https://en.wikipedia.org/wiki/Beam_search) technique, it may help. – Gassa Feb 22 '16 at 15:02