
I am tuning the hyperparameters of a regression model on an objective that can take very small values at the beginning of training. The objective measures the smoothness of the regression function, so when the initial guess is a straight line, the smoothness value is very low.

I would like to introduce a warm-up phase in which the first couple of epochs are ignored. The only solution I found was to manually overwrite the objective value in the training logs at the end of each epoch:

import kerastuner as kt

class CustomTuner(kt.BayesianOptimization):

    def __init__(self, *args, warmup=5, **kwargs):
        super(CustomTuner, self).__init__(*args, **kwargs)
        self.warmup = warmup  # number of epochs whose objective value is ignored

    def on_epoch_end(self, trial, model, epoch, logs=None):
        # during the warm-up phase, replace the objective with a constant so the
        # artificially low early values do not influence the search
        if logs is not None and epoch < self.warmup:
            logs['smoothness'] = 1.0
        super(CustomTuner, self).on_epoch_end(trial, model, epoch, logs)

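For context, I run the tuner roughly like this (build_model, the data, and the tuner settings are placeholders, not my actual setup):

tuner = CustomTuner(
    hypermodel=build_model,  # placeholder model-building function
    objective=kt.Objective('smoothness', direction='min'),
    max_trials=20,
    warmup=5,
    directory='tuning',
    project_name='smooth_regression')

tuner.search(x_train, y_train,
             epochs=50,
             validation_data=(x_val, y_val))
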
However, this is not a clean solution and I am worried about the effect this manipulation has on the optimization process.

So my questions are:

  1. Is there a cleaner way to implement an objective warm-up phase? (This could also be useful in other cases, for example for the KL-divergence term in a variational autoencoder, which can be low at the beginning of training.)
  2. How does the manipulation affect the optimization process?
