
I'm trying to build a graph in TensorFlow, but it raises an error saying a shape has the wrong rank. I'm trying to locate at which step something went wrong. Is there a way to find out the shapes of each element's output while building the graph?

For example, my code is:

def inference_decoding_layer(start_token, end_token, embeddings, dec_cell, initial_state, output_layer,
                             max_summary_length, batch_size):
    '''Create the inference logits'''

    start_tokens = tf.tile(tf.constant([start_token], dtype=tf.int32), [batch_size], name='start_tokens')

    inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embeddings,  # shape (2000, 25, 768)
                                                                start_tokens,
                                                                end_token)

    inference_decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell,
                                                        inference_helper,
                                                        initial_state,
                                                        output_layer)

    inference_logits, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,
                                                               output_time_major=False,
                                                               impute_finished=True,
                                                               maximum_iterations=max_summary_length)

    return inference_logits

The problem appears at dynamic_decode. Here is the error:

ValueError: Shape must be rank 3 but is rank 2 for 'decode/decoder/while/BasicDecoderStep/decoder/attention_wrapper/concat_6' (op: 'ConcatV2') with input shapes: [32,25,768], [32,256], [].

So, I'm wondering: is there a way to find out, for example, the shape of the value we get from GreedyEmbeddingHelper and then from BasicDecoder (or of anything else in my code), so that I can locate where the problem lies?

P.S. If there are any other ways or suggestions for locating the problem in a case like this, I would be very grateful!

Alena

1 Answer


For the sake of easy debugging, eager mode was introduced. With eager execution enabled, operations run immediately, so you can print the shape of each output right after the line of code that produces it is executed.

In TF 1.x, to enable it, you have to run the following code:

tf.enable_eager_execution()

In TF 2.0, eager mode is enabled by default. Also, the tf.contrib.seq2seq APIs you are working with have been moved to TensorFlow Addons in TF 2.0.
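As a minimal sketch of what that debugging looks like (the tensors below are made up for illustration, but mirror the two ranks in the question's error message), assuming TF 2.x where eager execution is the default:

```python
import tensorflow as tf  # assumes TF 2.x, where eager execution is on by default

# Stand-in tensors with the same shapes as in the question's error message;
# the zero values are illustrative only.
attention_values = tf.zeros([32, 25, 768])  # rank 3
cell_output = tf.zeros([32, 256])           # rank 2

# In eager mode every tensor has a concrete shape you can print immediately,
# without building and running a session.
print(attention_values.shape)  # (32, 25, 768)
print(cell_output.shape)       # (32, 256)

# Comparing ranks pinpoints why a ConcatV2 like the one in the error fails:
# concat requires all of its inputs to have the same rank.
print(len(attention_values.shape), len(cell_output.shape))  # 3 2
```

Sprinkling prints like these between the helper, decoder, and dynamic_decode calls shows at which step a rank-2 tensor meets a rank-3 one.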

Prasad