
```
ValueError: Graph disconnected: cannot obtain value for tensor KerasTensor(type_spec=TensorSpec(shape=(None, 300), dtype=tf.float32, name='input_1'), name='input_1', description="created by layer 'input_1'") at layer "embedding". The following previous layers were accessed without issue: []
```

Inference model for which I got the above error:

```python
from tensorflow.keras.models import Model

# Define the encoder model
encoder_model = Model(encoder_inputs, [encoder_lstm_outputs, encoder_state_h, encoder_state_c])

# Define the decoder inputs and states
decoder_state_input_c = Input(shape=(hidden_units,))
decoder_state_input_h = Input(shape=(hidden_units,))
decoder_states_inputs = [decoder_state_input_c, decoder_state_input_h]

# Get the decoder embeddings using the same embedding layer as the original model
decoder_embedding = Embedding(input_dim=vocab_size, output_dim=embedding_dim, weights=[embedding_matrix], trainable=False)(decoder_inputs)

# Get the decoder LSTM outputs and states using the decoder_states_inputs
decoder_lstm_outputs, decoder_state_h, decoder_state_c = decoder_lstm(decoder_embedding, initial_state=decoder_states_inputs)

# Apply attention using the trained attention layer
attention = Attention()([decoder_lstm_outputs, encoder_lstm_outputs])
decoder_concat = Concatenate(axis=-1)([decoder_lstm_outputs, attention])

# Get the output layer's prediction
output_layer = Dense(units=vocab_size, activation='softmax')(decoder_concat)

# Define the decoder model
decoder_model = Model(inputs=[decoder_inputs] + decoder_states_inputs,
                      outputs=[output_layer, decoder_state_h, decoder_state_c])
```
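For comparison, this is a sketch of what I understand a connected decoder inference model would need: the encoder sequence output passed in as an explicit `Input` (so the attention layer no longer reaches back to `encoder_inputs` from the training graph), and the trained layers reused rather than recreated. The names `decoder_step_input`, `encoder_outputs_input`, `decoder_embedding_layer`, `attention_layer` and `output_dense` are placeholders for this sketch, not variables from my code:

```python
from tensorflow.keras.layers import Input, Concatenate
from tensorflow.keras.models import Model

# Sketch only: decoder_embedding_layer, attention_layer and output_dense stand for the
# trained layer objects, which would have to be kept as Python references when the
# training model is built and then reused here.
decoder_step_input = Input(shape=(1,))                     # one decoder token per step
encoder_outputs_input = Input(shape=(None, hidden_units))  # encoder sequence output as an explicit input
decoder_state_input_h = Input(shape=(hidden_units,))
decoder_state_input_c = Input(shape=(hidden_units,))

dec_emb = decoder_embedding_layer(decoder_step_input)
dec_out, state_h, state_c = decoder_lstm(dec_emb, initial_state=[decoder_state_input_h, decoder_state_input_c])
att = attention_layer([dec_out, encoder_outputs_input])    # attention over the fed-in encoder outputs
dec_concat = Concatenate(axis=-1)([dec_out, att])
dec_pred = output_dense(dec_concat)

decoder_model = Model(
    [decoder_step_input, encoder_outputs_input, decoder_state_input_h, decoder_state_input_c],
    [dec_pred, state_h, state_c],
)
```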

Original Model:

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Attention, Concatenate, TimeDistributed
import tensorflow as tf

# Build the model
encoder_inputs = Input(shape=(max_transcript_length,))
encoder_embedding = Embedding(input_dim=vocab_size, output_dim=embedding_dim, weights=[embedding_matrix], trainable=False)(encoder_inputs)
encoder_lstm = LSTM(units=hidden_units, return_sequences=True, return_state=True)
encoder_lstm_outputs, encoder_state_h, encoder_state_c = encoder_lstm(encoder_embedding)

decoder_inputs = Input(shape=(49,))
decoder_embedding = Embedding(input_dim=vocab_size, output_dim=embedding_dim, weights=[embedding_matrix], trainable=False)(decoder_inputs)
decoder_lstm = LSTM(units=hidden_units, return_sequences=True, return_state=True)
decoder_lstm_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=[encoder_state_h, encoder_state_c])

attention = Attention()([decoder_lstm_outputs, encoder_lstm_outputs])
decoder_concat = Concatenate(axis=-1)([decoder_lstm_outputs, attention])

output_layer = Dense(units=vocab_size, activation='softmax')(decoder_concat)
model = Model([encoder_inputs, decoder_inputs], output_layer)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])  # Use appropriate loss function
```
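For completeness, this is roughly the greedy decoding loop I intend to run with `encoder_model` and the sketched `decoder_model` above; `start_token_id`, `end_token_id` and `max_decode_len` are placeholders, not variables from my code:

```python
import numpy as np

def greedy_decode(input_seq):
    # input_seq: shape (1, max_transcript_length), already tokenised and padded.
    enc_outs, state_h, state_c = encoder_model.predict(input_seq)

    target_seq = np.array([[start_token_id]])   # start-of-sequence token
    decoded_ids = []
    for _ in range(max_decode_len):
        preds, state_h, state_c = decoder_model.predict(
            [target_seq, enc_outs, state_h, state_c])
        next_id = int(np.argmax(preds[0, -1, :]))
        if next_id == end_token_id:
            break
        decoded_ids.append(next_id)
        target_seq = np.array([[next_id]])      # feed the prediction back in
    return decoded_ids
```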
Does this answer your question? [keras - Graph disconnected: cannot obtain value for tensor KerasTensor](https://stackoverflow.com/questions/68432386/keras-graph-disconnected-cannot-obtain-value-for-tensor-kerastensor) – Ilya Aug 14 '23 at 08:39

0 Answers