Using Keras Tuner, it is easy to create a hyperparameter tuning object, call its search method, and retrieve the best hyperparameter configurations once the search is complete. However, there does not appear to be any built-in way to also return the corresponding validation loss values on which they are ranked. How can I return the validation loss of each trial alongside the tuner.get_best_hyperparameters(3) method? I expect it may be possible using callbacks, but I am not sure how. Depending on the verbose argument I can print results for each trial as it is considered, but I would rather be able to access them programmatically, as I can with the hyperparameters themselves.
2 Answers
I think you're right: using callbacks is, in my opinion, the easiest way to keep track of those metrics. My favorite callback to use with keras-tuner is TensorBoard. It is an easy way to keep track of each trial and inspect the data in a great interface.
To do so:
Define the callback with a statement like this:
tensorboard = TensorBoard(log_dir=path_to_logs, histogram_freq=1, embeddings_freq=1, write_graph=True, update_freq='batch')
Then, in your tuner.search() call, specify your callback using the following argument:
callbacks=[tensorboard],
Note that the events are saved in the log directory, and you can reference them from there. You can either navigate to it from your command line, or load the TensorBoard extension in your notebook using %load_ext tensorboard. Then, run this command:
tensorboard --logdir=path_to_logs --host=localhost
This will then display an interface for inspecting each trial. It shows the train/validation accuracy and loss, information on model weights over time, and even information on hyperparameter selection. Worth taking a look at.
I currently use TensorBoard as a workaround, as you describe; however, I am asking how to log the metrics during searching and access them within the script. – SO1999 Apr 25 '21 at 10:46
As mentioned by Peter, it's a good idea to use callbacks with TensorBoard, from which you can also export a CSV file with all results if you want. However, if you want to load the results from the tuning process directly in Python, I don't think there is a built-in solution for this. You can, however, access all the information from the Keras tuner object. Below is some example code that creates a simple DataFrame storing the validation loss for each combination of hyperparameter values that was tested.
import pandas as pd

# Run some Keras Tuner search from which you want to collect the results
model_tuner.search(...)

# Initiate an empty DataFrame to store your results
tune_res = pd.DataFrame()

# Loop over all trials to extract the information we want
for trial in model_tuner.oracle.trials:
    # Get the state for this trial
    trial_state = model_tuner.oracle.trials[trial].get_state()
    # Create a Series containing the hyperparameter values for this trial
    trial_hyperparameters = pd.Series(
        trial_state["hyperparameters"]["values"],
        index=trial_state["hyperparameters"]["values"].keys())
    # Create a Series containing the validation loss for this trial
    trial_loss = pd.Series(trial_state["score"], index=["val_loss"])
    # Combine both Series into one
    trial_tune_res = pd.concat([trial_hyperparameters, trial_loss])
    # Name the Series so we can trace the trial IDs in the final DataFrame
    trial_tune_res.name = trial
    # Add this trial's information to the DataFrame
    tune_res = pd.concat([tune_res, trial_tune_res], axis=1)

# Transpose the DataFrame so that each row represents a trial (optional)
tune_res = tune_res.T
tune_res should now contain the validation loss, as well as the specific hyperparameter values that were tested in each trial. Have a look at an individual trial's state to see if there is other information in there that you would like to extract.
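The loop above only depends on each trial's state dictionary, so the DataFrame-building part can be sketched in isolation. In the sketch below, trial_states is made-up data standing in for model_tuner.oracle.trials (the trial IDs, hyperparameter names, and scores are all hypothetical); only the pandas mechanics are the point.

```python
import pandas as pd

# Hypothetical stand-in for {trial_id: Trial} from model_tuner.oracle.trials:
# each value mimics the shape of the dict returned by Trial.get_state().
trial_states = {
    "trial_00": {"hyperparameters": {"values": {"units": 32, "lr": 1e-2}}, "score": 0.41},
    "trial_01": {"hyperparameters": {"values": {"units": 64, "lr": 1e-3}}, "score": 0.35},
}

tune_res = pd.DataFrame()
for trial_id, state in trial_states.items():
    # One row per trial: the hyperparameter values plus the validation loss
    row = pd.concat([
        pd.Series(state["hyperparameters"]["values"]),   # one entry per hyperparameter
        pd.Series(state["score"], index=["val_loss"]),   # the trial's validation loss
    ])
    row.name = trial_id
    tune_res = pd.concat([tune_res, row], axis=1)

# Transpose so rows = trials, columns = hyperparameters + val_loss
tune_res = tune_res.T
print(tune_res.sort_values("val_loss"))  # lowest validation loss first
```

Sorting by val_loss reproduces the ranking behind tuner.get_best_hyperparameters, but with the losses visible alongside the values.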