Is there any way to constrain the generated layers so that each layer always has more neurons than the next one? I have this example code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, InputLayer
from kerastuner.tuners import RandomSearch
from tensorflow.keras.optimizers import Adam

def build_model(hp):
    model = Sequential()
    model.add(InputLayer(input_shape=e_net_out_shape))
    model.add(Flatten())
    for i in range(hp.Int('num_layers', 0, 2)):
        model.add(Dense(units=hp.Int('units_' + str(i), min_value=32, max_value=928, step=64), activation='relu'))
    model.add(Dense(39, activation='softmax'))
    model.compile(optimizer=Adam(learning_rate=0.01), loss='categorical_crossentropy', metrics=['acc'])
    return model
During the search with the Random Search algorithm, a later layer (e.g. units_2) can end up with more neurons than an earlier one (e.g. units_1). So I'd like to constrain the max_value of each subsequent layer to the value sampled for the previous one.
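One workaround I've been considering (a sketch, not an official Keras Tuner feature): let the tuner sample each width from the full range, then clamp it to the previous layer's width inside build_model. The clamping logic in isolation, with a hypothetical helper name (clamp_units is my own, not a tuner API):

```python
def clamp_units(sampled_units, input_width=928):
    # Enforce a non-increasing architecture: each layer's width is
    # capped at the width of the layer before it.
    clamped, prev = [], input_width
    for u in sampled_units:
        u = min(u, prev)
        clamped.append(u)
        prev = u
    return clamped

# Inside build_model this would look like (assuming `hp` and `model`
# from the snippet above):
#
#     prev = 928
#     for i in range(hp.Int('num_layers', 0, 2)):
#         units = min(hp.Int('units_' + str(i), min_value=32, max_value=928, step=64), prev)
#         model.add(Dense(units, activation='relu'))
#         prev = units
```

As far as I understand, a hyperparameter's bounds are registered once per name in Keras Tuner, so changing max_value per trial may not take effect; clamping the sampled value seems more reliable, though it means some distinct samples map to the same architecture.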