I have a regression dataset:

X_train (float64) Size = (1616, 3) -> i.e. 3 predictors
Y_train (float64) Size = (1616, 2) -> i.e. 2 targets

I tried running Hyperas with the Keras functional API (my main purpose is to use the loss_weights option when compiling):

inputs1 = Input(shape=(X_train.shape[0], X_train.shape[1]))

x  = Dense({{choice([np.power(2,1),np.power(2,2),np.power(2,3),np.power(2,4),np.power(2,5)])}}, activation={{choice(['tanh','relu', 'sigmoid'])}})(inputs1)
x  = Dropout({{uniform(0, 1)}})(x)

x  = Dense({{choice([np.power(2,1),np.power(2,2),np.power(2,3),np.power(2,4),np.power(2,5)])}}, activation={{choice(['tanh','relu', 'sigmoid'])}})(x)
x  = Dropout({{uniform(0, 1)}})(x)

x  = Dense({{choice([np.power(2,1),np.power(2,2),np.power(2,3),np.power(2,4),np.power(2,5)])}}, activation={{choice(['tanh','relu', 'sigmoid'])}})(x)
x  = Dropout({{uniform(0, 1)}})(x)

if conditional({{choice(['three', 'four'])}}) == 'four':
    x  = Dense({{choice([np.power(2,1),np.power(2,2),np.power(2,3),np.power(2,4),np.power(2,5)])}}, activation={{choice(['tanh','relu', 'sigmoid'])}})(x)
    x  = Dropout({{uniform(0, 1)}})(x)

output1 = Dense(1,  activation='linear')(x)
output2 = Dense(1,  activation='linear')(x)

model = Model(inputs = inputs1, outputs = [output1,output2])

adam    = keras.optimizers.Adam(lr={{choice([10**-3,10**-2, 10**-1])}})
rmsprop = keras.optimizers.RMSprop(lr={{choice([10**-3,10**-2, 10**-1])}})
sgd     = keras.optimizers.SGD(lr={{choice([10**-3,10**-2, 10**-1])}})

choiceval = {{choice(['adam', 'rmsprop','sgd'])}}
if choiceval == 'adam':
    optimizer = adam
elif choiceval == 'rmsprop':
    optimizer = rmsprop
else:
    optimizer = sgd

model.compile(loss='mae', metrics=['mae'],optimizer=optimizer, loss_weights=[0.5,0.5])

earlyStopping = EarlyStopping(monitor='val_loss', min_delta=0, patience=50, verbose=0, mode='auto')
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=2, save_best_only=True, mode='max')
lr_reducer = ReduceLROnPlateau(monitor='val_loss', factor=0.5, cooldown=1, patience=10, min_lr=1e-4,verbose=2)
callbacks_list = [earlyStopping, checkpoint, lr_reducer]

history = model.fit(X_train, Y_train,
          batch_size={{choice([16,32,64,128])}},
          epochs={{choice([20000])}},
          verbose=2,
          validation_data=(X_val, Y_val),
          callbacks=callbacks_list)

However, upon running it, I get the following error:

ValueError: Error when checking input: expected input_1 to have 3 dimensions, but got array with shape (1616, 3)

I would greatly appreciate it if someone could point me in the direction of what is going wrong here. I suspect the inputs (i.e. X_train, Y_train) and also the Input shape might be at fault. Would appreciate any help here.

UPDATE

OK so, indeed the fault was in the Input line:

I changed it to: inputs1 = Input(shape=(X_train.shape[1],)).
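For reference, a minimal NumPy-only sketch with dummy data of the sizes from the question: in Keras, the `shape` argument of `Input` describes a single sample, excluding the batch dimension, which is why `shape=(1616, 3)` made the model expect 3-D input.

```python
import numpy as np

X_train = np.random.rand(1616, 3)  # dummy data with the sizes from the question

# Input(shape=(1616, 3)) declares each *sample* to be a 1616x3 matrix,
# so Keras expects batches of shape (batch_size, 1616, 3) -- 3 dimensions.
# Input(shape=(3,)) declares each sample to be a vector of 3 predictors,
# which matches the 2-D array X_train of shape (1616, 3).
sample_shape = (X_train.shape[1],)           # (3,)
expected_batch_ndim = 1 + len(sample_shape)  # batch axis + sample axes = 2
assert X_train.ndim == expected_batch_ndim
```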

However, now I received another error:

ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected. Expected to see 2 array(s), but instead got the following list of 1 arrays: [array([[0.19204772, 0.04878049],
   [0.20226056, 0.        ],
   [0.12029842, 0.04878049],
   ...,
   [0.45188627, 0.14634146],
   [0.26942276, 0.02439024],
   [0.12942418, 0....

1 Answer

Since your model has two output layers, you need to pass a list of two arrays as the true targets (i.e. y) when calling the fit() method. For example, like this:

model.fit(X_train, [Y_train[:,0:1], Y_train[:,1:]], ...)
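Note that the slices `0:1` and `1:` (rather than plain integer indexing) keep each target 2-D with shape `(n_samples, 1)`, matching the two `Dense(1)` output layers. A NumPy-only sketch with dummy targets of the sizes from the question:

```python
import numpy as np

Y_train = np.random.rand(1616, 2)  # dummy targets with the sizes from the question

y1 = Y_train[:, 0:1]  # shape (1616, 1) -> target for output1
y2 = Y_train[:, 1:]   # shape (1616, 1) -> target for output2

# Plain integer indexing would drop the second axis:
# Y_train[:, 0] has shape (1616,), not (1616, 1).
```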
  • Thanks, I did that and I got this: Epoch 1/20000 - 1s - loss: 0.2504 - dense_4_loss: 0.3083 - dense_5_loss: 0.1925 - dense_4_mean_absolute_error: 0.3083 - dense_5_mean_absolute_error: 0.1925 - val_loss: 0.1225 - val_dense_4_loss: 0.1793 - val_dense_5_loss: 0.0657 - val_dense_4_mean_absolute_error: 0.1793 - val_dense_5_mean_absolute_error: 0.065 – Corse Nov 15 '18 at 15:01
  • OK, it's the losses for the combined output and the two output layers. – Corse Nov 15 '18 at 15:05
  • @Corse The combined loss, the losses for each output layer and the metric values for each output layer. However, since you are using `mae` as the loss function as well you can remove it as metric (or use a different metric instead). The same thing applies to validation as well. – today Nov 15 '18 at 15:05
  • By the way, I'm assuming I should do this as well: `score, acc = model.evaluate(X_val, [epidist_train, mw_train], verbose=2)`. I got this strange error: `ValueError: Input arrays should have the same number of samples as target arrays.` – Corse Nov 15 '18 at 15:06
  • @Corse That's because the number of samples in `X_val` is not equal to that of `epidist_train` and `mw_train`. Make sure the shapes of the target arrays are `(n_samples, 1)`, according to the definition of your model. – today Nov 15 '18 at 15:09
  • Ah, that was a typo; I had to use epidist_val and mw_val, thank you very much. Upon running Hyperas, the epochs now run. Yet I face another error: `ValueError: too many values to unpack (expected 2)` – Corse Nov 15 '18 at 15:35
  • Is it due to the multiple outputs that I am expecting? – Corse Nov 15 '18 at 15:36
  • @Corse Well, edit your question and include the full error log. This is becoming a multitude of unrelated errors, though, which is not good Q&A practice on SO. – today Nov 15 '18 at 15:55
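For context on the final error in the comments: for a two-output model compiled with one loss and one metric per output, `model.evaluate` returns five scalars (the combined loss, one loss per output, and one metric per output), not two, so `score, acc = model.evaluate(...)` fails with "too many values to unpack". A sketch, using the (illustrative) values from the training log quoted above:

```python
# A 2-output model compiled with loss='mae' and metrics=['mae'] reports
# five scalars per evaluation:
# [combined_loss, out1_loss, out2_loss, out1_mae, out2_mae]
results = [0.2504, 0.3083, 0.1925, 0.3083, 0.1925]  # values from the log above
total_loss, loss1, loss2, mae1, mae2 = results      # unpack 5 names, not 2
```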