
Can anyone give a code sample implementing a regularization technique in PyBrain? I am trying to prevent overfitting in my data and am currently looking for a method such as early stopping to do so. Thanks!

dnth

3 Answers


The following is not L1/L2 regularization, but it can be used to prevent overfitting via early stopping.

From the trainer documentation,

trainUntilConvergence(dataset=None, maxEpochs=None, verbose=None, continueEpochs=10, validationProportion=0.25)

Train the module on the dataset until it converges.

Return the module with the parameters that gave the minimal validation error.

If no dataset is given, the dataset passed during Trainer initialization is used. validationProportion is the ratio of the dataset that is used for the validation dataset.

If maxEpochs is given, at most that many epochs are trained. Each time validation error hits a minimum, try for continueEpochs epochs to find a better one.

If you are using the default parameters, you already get a 75:25 split into training and validation sets, and the validation set is used for EARLY STOPPING.
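
A minimal sketch of this, assuming a toy XOR dataset (the dataset and network shape here are purely illustrative):

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

# Toy dataset: XOR with 2 inputs and 1 output (illustrative only).
ds = SupervisedDataSet(2, 1)
ds.addSample((0, 0), (0,))
ds.addSample((0, 1), (1,))
ds.addSample((1, 0), (1,))
ds.addSample((1, 1), (0,))

net = buildNetwork(2, 3, 1)
trainer = BackpropTrainer(net, dataset=ds)

# Default 75:25 train/validation split; training stops once the
# validation error has not improved for continueEpochs epochs, and
# the network keeps the parameters with the lowest validation error.
train_errs, val_errs = trainer.trainUntilConvergence(
    maxEpochs=1000, continueEpochs=10, validationProportion=0.25)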

greeness

There is a weightdecay parameter, which is the L2 regularization in PyBrain. In addition to that, I would combine the weight decay term with early stopping.

Below is how you'd specify the weight decay.

trainer = RPropMinusTrainer(net, dataset=trndata, verbose=True, weightdecay=0.01)
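
Putting it together, a sketch combining weight decay with the early stopping from the first answer might look like this (net and trndata are assumed placeholders for an already-built network and a SupervisedDataSet):

from pybrain.supervised.trainers import RPropMinusTrainer

# net and trndata are assumed: an already-built network and a
# SupervisedDataSet, as in the other answers.
# weightdecay adds an L2 penalty on the weights to each gradient step.
trainer = RPropMinusTrainer(net, dataset=trndata, verbose=True,
                            weightdecay=0.01)

# Early stopping on a held-out 25% validation split, on top of
# the L2 penalty.
trainer.trainUntilConvergence(maxEpochs=500, validationProportion=0.25)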
dnth

Regularization means changing the cost function. User choices in PyBrain do affect the cost function (for instance, choosing whether layers are linear or sigmoid), but the cost function itself appears not to be directly exposed.

However, elsewhere on StackOverflow, someone claims that L2 regularization is possible via the weightdecay parameter. (An L2 penalty sums the squares of the weights, whereas an L1 penalty sums their absolute values.)
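
To make that distinction concrete, here is a small illustrative sketch in plain NumPy (not PyBrain's actual implementation) of how each penalty would modify a cost function:

import numpy as np

def l2_penalized_cost(data_error, weights, decay=0.01):
    # L2 ("weight decay"): add decay * sum of squared weights.
    return data_error + decay * np.sum(weights ** 2)

def l1_penalized_cost(data_error, weights, decay=0.01):
    # L1: add decay * sum of absolute weight values instead.
    return data_error + decay * np.sum(np.abs(weights))

w = np.array([0.5, -1.0, 2.0])
print(l2_penalized_cost(0.3, w))  # 0.3 + 0.01 * 5.25 = 0.3525
print(l1_penalized_cost(0.3, w))  # 0.3 + 0.01 * 3.5  = 0.335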

Jeffrey Benjamin Brown