
I am having trouble parsing PyBrain's documentation on its black-box optimization. I want to plug in my own optimizer for the neural network weights. PyBrain already has GA and hill climbing implemented, but I have been unable to copy their format to implement, for example, simulated annealing for optimizing the weights of the NN.
Does anyone know how to make an annealing function like the one below usable by PyBrain? Underneath it is a simple example using the built-in GA class. How can I alter my annealing function to use there?

import math
import random

def annealingoptimize(domain, costf, T=10000.0, cool=0.95, step=1):
    # Initialize the values randomly within each dimension's bounds
    vec = [float(random.randint(domain[i][0], domain[i][1]))
           for i in range(len(domain))]

    while T > 0.1:
        # Choose one of the indices
        i = random.randint(0, len(domain) - 1)

        # Choose a direction to change it
        dir = random.randint(-step, step)

        # Create a new list with one of the values changed
        vecb = vec[:]
        vecb[i] += dir
        if vecb[i] < domain[i][0]: vecb[i] = domain[i][0]
        elif vecb[i] > domain[i][1]: vecb[i] = domain[i][1]

        # Calculate the current cost and the new cost
        ea = costf(vec)
        eb = costf(vecb)
        # Metropolis acceptance probability for a worse solution
        p = pow(math.e, -(eb - ea) / T)

        print(vec, ea)

        # Keep the new vector if it is better, or if it makes
        # the probability cutoff
        if eb < ea or random.random() < p:
            vec = vecb

        # Decrease the temperature
        T = T * cool
    return vec
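
For context, the annealer only needs a cost function over a flat parameter vector, so one workaround I can see is to bypass PyBrain's optimizer classes entirely and evaluate candidate weight vectors directly. A rough sketch, assuming nn._setParameters() and d.evaluateModuleMSE() behave as I expect; the -10..10 weight bounds are an arbitrary choice, needed only because annealingoptimize draws its starting point with randint:

from pybrain.datasets.classification import ClassificationDataSet
from pybrain.tools.shortcuts import buildNetwork

# same XOR setup as in the GA example below
d = ClassificationDataSet(2)
d.addSample([0., 0.], [0.])
d.addSample([0., 1.], [1.])
d.addSample([1., 0.], [1.])
d.addSample([1., 1.], [0.])

nn = buildNetwork(2, 3, 1)

def costf(vec):
    # Load the candidate weight vector into the network, then score it
    nn._setParameters(vec)
    return d.evaluateModuleMSE(nn)

# one (lower, upper) bound pair per network weight
domain = [(-10, 10)] * len(nn.params)
best = annealingoptimize(domain, costf)
nn._setParameters(best)

But I would still like to know how to do this within PyBrain's own optimizer format.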

## GA EXAMPLE: uses GA, which inherits from the (to me, fairly opaque) ContinuousOptimizer and Evolution classes.

from pybrain.datasets.classification import ClassificationDataSet
# below line can be replaced with the algorithm of choice e.g.
# from pybrain.optimization.hillclimber import HillClimber
from pybrain.optimization.populationbased.ga import GA
from pybrain.tools.shortcuts import buildNetwork

# create XOR dataset
d = ClassificationDataSet(2)
d.addSample([0., 0.], [0.])
d.addSample([0., 1.], [1.])
d.addSample([1., 0.], [1.])
d.addSample([1., 1.], [0.])
d.setField('class', [ [0.],[1.],[1.],[0.]])

nn = buildNetwork(2, 3, 1)
# d.evaluateModuleMSE takes nn as its first and only argument
ga = GA(d.evaluateModuleMSE, nn, minimize=True)
for i in range(100):
    nn = ga.learn(0)[0]
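
As the comment above the imports says, swapping in another built-in algorithm is just a different import with the same constructor and learn() calls. If I read the optimizer base class right, learn() returns a (best evaluable, best fitness) tuple, hence the [0]. For example, with the plain hill climber, reusing d and nn from above:

from pybrain.optimization.hillclimber import HillClimber

# same arguments as GA above; d and nn as already built
hc = HillClimber(d.evaluateModuleMSE, nn, minimize=True)
for i in range(100):
    nn = hc.learn(0)[0]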

1 Answer


You can use the StochasticHillClimber():

class pybrain.optimization.StochasticHillClimber(evaluator=None, initEvaluable=None, **kwargs)
Stochastic hill-climbing always moves to a better point, but may also go to a worse point with a probability that decreases with increasing drop in fitness (and depends on a temperature parameter).

temperature
The larger the temperature, the more explorative (less greedy) it behaves.
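
A sketch of dropping it into the XOR example from the question, assuming temperature can be passed through the **kwargs shown in the signature above:

from pybrain.datasets.classification import ClassificationDataSet
from pybrain.optimization import StochasticHillClimber
from pybrain.tools.shortcuts import buildNetwork

# XOR dataset and network, as in the question
d = ClassificationDataSet(2)
d.addSample([0., 0.], [0.])
d.addSample([0., 1.], [1.])
d.addSample([1., 0.], [1.])
d.addSample([1., 1.], [0.])

nn = buildNetwork(2, 3, 1)

# higher temperature = more willing to accept worse points
hc = StochasticHillClimber(d.evaluateModuleMSE, nn,
                           minimize=True, temperature=10.)
for i in range(100):
    nn = hc.learn(0)[0]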
