
I'm working on a deep multimodal autoencoder for unsupervised learning. It takes two inputs, with shapes (1000, 50) and (1000, 60) respectively, and reconstructs both of them. Each branch has 3 hidden layers, and the aim is to concatenate the latent layers of input1 and input2. Both outputs are then used to compute two MSE losses.

Please note that X and X1 were generated as follows, so that each element holds the average value of its neighborhood:

import numpy as np

# matrix has shape (1000, 50), matrix1 has shape (1000, 60),
# and A is the adjacency matrix with shape (1000, 1000)
summed_groups_matrix = A @ matrix
summed_groups_matrix1 = A @ matrix1
neighborhood_sizes = A.sum(axis=1, keepdims=True)  # shape (1000, 1) so it broadcasts row-wise
X = summed_groups_matrix / neighborhood_sizes
X1 = summed_groups_matrix1 / neighborhood_sizes
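As a sanity check, the neighborhood averaging can be verified on a toy adjacency matrix (the small example below is illustrative, not data from the question):

```python
import numpy as np

# Toy 3-node graph: each row of A marks a node's neighborhood (including itself).
A = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=float)
features = np.array([[2.0], [4.0], [6.0]])  # one feature per node

summed = A @ features                        # sum of each neighborhood's features
sizes = A.sum(axis=1, keepdims=True)         # neighborhood sizes, shape (3, 1)
averaged = summed / sizes

print(averaged.ravel())  # node 0: (2+4)/2 = 3, node 1: (2+4+6)/3 = 4, node 2: (4+6)/2 = 5
```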

The complete code of the multimodal autoencoder is the following:

import keras
from keras.layers import Input, Dense, concatenate
from keras.models import Model

# Encoder for input1 (50 features)
input_X = Input(shape=(X.shape[1],))
dense_X = Dense(40, activation='relu')(input_X)
dense1_X = Dense(20, activation='relu')(dense_X)
latent_X = Dense(2, activation='relu')(dense1_X)

# Encoder for input2 (60 features)
input_X1 = Input(shape=(X1.shape[1],))
dense_X1 = Dense(40, activation='relu')(input_X1)
dense1_X1 = Dense(20, activation='relu')(dense_X1)
latent_X1 = Dense(2, activation='relu')(dense1_X1)

# Shared latent representation: concatenate the two latent layers
Concat_X_X1 = concatenate([latent_X, latent_X1])

# Decoder for input1
decoding_X = Dense(20, activation='relu')(Concat_X_X1)
decoding1_X = Dense(40, activation='relu')(decoding_X)
output_X = Dense(X.shape[1], activation='sigmoid')(decoding1_X)

# Decoder for input2
decoding_X1 = Dense(20, activation='relu')(Concat_X_X1)
decoding1_X1 = Dense(40, activation='relu')(decoding_X1)
output_X1 = Dense(X1.shape[1], activation='sigmoid')(decoding1_X1)

multi_modal_autoencoder = Model([input_X, input_X1], [output_X, output_X1], name='multi_modal_autoencoder')

multi_modal_autoencoder.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss='mse')

# fit() returns a History object, not a model
history = multi_modal_autoencoder.fit([X, X1], [X, X1], epochs=70, batch_size=150)

When I call multi_modal_autoencoder.evaluate(X, X1) it returns this error:

TypeError: 'method' object is not subscriptable

What should I pass to model.evaluate?

Andrea
  • Please be more specific on how you plan to build the AE with 2 losses. From your description it looks like not AE but 2 straight NNs, which is fine for this task. – Poe Dator Jul 18 '20 at 14:06
  • I updated the question. Can you please take a look? – Andrea Jul 18 '20 at 14:12
  • If it generates an error, you should include it with your question. Else we would be guessing. – Dr. Snoopy Jul 18 '20 at 14:17
  • 1) It looks like an incorrect way to use `keras.Model`. Please see the example here: https://keras.io/api/models/model/ 2) input should not be a list of [X, X1] but a single tensor. Process your data before passing it to the model. So you can do `multi_modal_autoencoder.evaluate(X2, X2)` where X2 is some combination of X and X1 (concatenate?). 3) Start with building a model for single-feature prediction, then add complexity. – Poe Dator Jul 18 '20 at 14:19
  • You're right, i edited the question @Dr.Snoopy – Andrea Jul 18 '20 at 14:43
  • @RuslanS. if i do the step 2, then the model will be with single input and single output which not the case i looked for. Also the concatenate has to be between the latent representation of both inputs. PS, i have followed this example https://wizardforcel.gitbooks.io/deep-learning-keras-tensorflow/8.2%20Multi-Modal%20Networks.html – Andrea Jul 18 '20 at 14:45
  • I get it now. In this case `tf.keras.Model.evaluate()` returns the loss value & metrics values for the model in test mode. So you have to pass to it lists of inputs from test sample. Try `multi_modal_autoencoder.evaluate([X_test,X1_test],[X_test,X1_test])` – Poe Dator Jul 18 '20 at 15:04
  • I didn't split my data into train and test; I'm working on the whole of the data, so in this way it should be `multi_modal_autoencoder.evaluate([X,X1],[X,X1])`. OK, it works, but it returns an array of 3 MSE values that I can't interpret. Here it is: `[[0.012900909228595816, 0.003546052612364292, 0.009347316808998585]]` – Andrea Jul 18 '20 at 15:12

1 Answer


tf.keras.Model.evaluate() returns the loss value and metrics values for the model in test mode, so you have to pass it lists of inputs from the test sample. Try multi_modal_autoencoder.evaluate([X_test, X1_test], [X_test, X1_test])
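As for the three numbers this call returns for a two-output model: they are [total_loss, output_X_loss, output_X1_loss], and with the default loss_weights the total is simply the sum of the per-output MSEs. A quick sketch using the values reported in the comments:

```python
# The two per-output MSE values reported in the comments above
mse_X = 0.003546052612364292   # reconstruction loss for output_X
mse_X1 = 0.009347316808998585  # reconstruction loss for output_X1

# With default loss_weights (all 1.0), the total loss is their sum
total = mse_X + mse_X1

print(round(total, 4))  # ≈ 0.0129, matching the first reported value
```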

Poe Dator