
After looking at the docs and tutorials, it seems very easy to define a hyperparameter for your model. This covers the code that constructs the model out of layers, as well as compile-related settings such as the learning rate. What I am (also) looking for is a way to run a hyperparameter search over parameters that are not part of the model. Some examples:

  1. Data augmentation, when built as part of a tf.data pipeline, e.g. the amount of random translation.
  2. Over/undersampling. This is often used to handle imbalanced classes, and one way of doing it is tf.data.Dataset.sample_from_datasets. The "weights" argument to this method is a hyperparameter.
  3. Number of epochs. Maybe I am missing something, but this should be tunable in keras_tuner in the most straightforward way. A workaround is to use schedule callbacks and achieve this at compile time.
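
For concreteness, the "weights" argument in point 2 is exactly the knob I would like to search over. A minimal sketch of what it controls (the toy datasets and the 0.5/0.5 split are made up for illustration):

```python
import tensorflow as tf

# Hypothetical imbalanced data: 8 majority-class samples vs. 2 minority-class.
majority = tf.data.Dataset.from_tensor_slices([0, 0, 0, 0, 0, 0, 0, 0])
minority = tf.data.Dataset.from_tensor_slices([1, 1])

# "weights" sets the sampling probability per dataset; 0.5/0.5 oversamples
# the minority class. This is the value I would like keras_tuner to search.
balanced = tf.data.Dataset.sample_from_datasets(
    [majority, minority], weights=[0.5, 0.5], seed=42
)
labels = [int(x) for x in balanced.take(8)]
print(labels)
```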

Is the tuner library missing all of these? They seem like common things one would want to tune.


1 Answer


This guide will help: https://keras.io/guides/keras_tuner/custom_tuner/ A custom tuner is one way to "hyperparameterize" the tf.data pipeline. Here is the code snippet I used, and it works.

import tensorflow as tf
import keras_tuner as kt

AUTO = tf.data.AUTOTUNE
batch_size = 32  # adjust to your setup

class MyTuner(kt.BayesianOptimization):
  def run_trial(self, trial, train_ds, *args, **kwargs):
    hp = trial.hyperparameters

    train_ds = train_ds.shuffle(batch_size * 8).repeat().batch(batch_size).prefetch(buffer_size=AUTO)

    # Tune the augmentation strength as an ordinary hyperparameter
    hp_contrast_factor = hp.Float('contrast_factor', min_value=0.01, max_value=0.2, sampling='log')

    # tf.keras.layers.experimental.preprocessing.* in older TF versions
    random_flip = tf.keras.layers.RandomFlip('horizontal')
    random_contrast = tf.keras.layers.RandomContrast(hp_contrast_factor)

    train_ds = train_ds.map(lambda x, y: (random_flip(x, training=True), y), num_parallel_calls=AUTO)
    train_ds = train_ds.map(lambda x, y: (random_contrast(x, training=True), y), num_parallel_calls=AUTO)

    return super().run_trial(trial, train_ds, *args, **kwargs)

tuner = MyTuner(
  model_builder,
  objective='val_sparse_categorical_accuracy',
  max_trials=50,
  executions_per_trial=1,
  directory='keras_tuner',
  project_name='classifier_data_aug_custom_tuner'
)

tuner.search(...)