I am using this in my code:
# (model_builder, dir, X_train, Y_train, X_val and Y_val are defined earlier)
import tensorflow as tf
import keras_tuner as kt

# stop a trial if the training loss has not improved for 3 epochs
stop_early = tf.keras.callbacks.EarlyStopping(monitor='loss', patience=3)

tuner = kt.Hyperband(
    model_builder,
    objective='val_loss',
    max_epochs=100,
    factor=2,
    overwrite=True,
    directory=dir,
    project_name='x',
    hyperband_iterations=2,
)

tuner.search(X_train, Y_train, validation_data=(X_val, Y_val),
             callbacks=[stop_early], verbose=0)
But I do not understand the difference between max_epochs in Hyperband() and epochs in search(). If I understand it correctly, max_epochs is the maximum number of epochs for which any single model will be trained during the tuning. My factor is 2, which I take to mean that at each round the number of epochs is doubled and half of the models are discarded, and this goes on until max_epochs is reached. But from which initial number of epochs does it start? Is that random? And what does the epochs argument in search() mean then? Thanks in advance!
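To make my mental model concrete, here is a tiny sketch of what I think one bracket does. The starting numbers are made up; whether the initial epoch budget works like this at all is exactly what I am unsure about:

# My rough mental model of one Hyperband bracket (may well be wrong --
# this is what I am asking about). The starting values are invented.
models = 32          # some initial number of candidate models
epochs = 3           # some initial epoch budget -- random? fixed? derived?
max_epochs = 100
factor = 2

while epochs < max_epochs and models > 1:
    print(f"train {models} models for {epochs} epochs each")
    models //= factor    # discard half of the models
    epochs *= factor     # double the epoch budget for the survivors
print(f"train the remaining {models} model(s) for up to {max_epochs} epochs")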
I have tried a few different values and tried to see the effect visually, and I have searched for an answer online, but so far I haven't found a clear one.
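For example, one of the variations I tried was passing epochs directly to search(), on the assumption that it is forwarded to model.fit() like the other keyword arguments (the value 50 here is arbitrary):

# same tuner as above, but with an explicit epochs argument --
# how does this relate to max_epochs=100 in Hyperband()?
tuner.search(
    X_train, Y_train,
    epochs=50,
    validation_data=(X_val, Y_val),
    callbacks=[stop_early],
    verbose=0,
)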