Based on this methodology, I was trying to build an RNN model with categorical and continuous variables.

The continuous placeholder has this form:

x = tf.placeholder(tf.float32, [None, num_steps, input_size], name="input_x")

And the categorical data placeholder is in this form:

store, v_store = len(np.unique(data_df.Store.values)), 50  # vocabulary size, embedding dimension

z_store = tf.placeholder(tf.int32, [None, num_steps], name='Store')

emb_store = tf.Variable(
    tf.random_uniform((store, v_store), -r_range, r_range),  # r_range is defined elsewhere in my code
    name="store"
    )

embed_store = tf.nn.embedding_lookup(emb_store, z_store)
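For intuition, `embedding_lookup` behaves like fancy indexing: each integer id in `z_store` is replaced by its row of `emb_store`. A minimal NumPy sketch of the shapes involved (the batch, step, and vocabulary sizes are assumed for illustration):

```python
import numpy as np

batch, num_steps = 50, 7   # assumed, matching the shapes in the error below
store, v_store = 10, 50    # assumed vocabulary size; embedding dimension from the question

# Embedding matrix: one v_store-dimensional vector per store id
emb = np.random.uniform(-1.0, 1.0, size=(store, v_store))

# Integer ids shaped (batch, num_steps), like z_store
ids = np.random.randint(0, store, size=(batch, num_steps))

# The lookup replaces every id with its embedding row
looked_up = emb[ids]
print(looked_up.shape)  # (50, 7, 50)
```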

Finally, I'm concatenating the categorical and continuous placeholders together:

inputs_with_embed = tf.concat([x, embed_store], axis=2, name="inputs_with_embed")
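The concatenation only extends the last axis, so both tensors must agree on the batch and time dimensions. A quick shape check in NumPy (sizes assumed for illustration):

```python
import numpy as np

batch, num_steps, input_size, v_store = 50, 7, 1, 50  # assumed sizes

x = np.zeros((batch, num_steps, input_size))         # continuous features
embed_store = np.zeros((batch, num_steps, v_store))  # looked-up embeddings

# axis=2 concatenation: batch and time dims must match, feature dims add up
inputs_with_embed = np.concatenate([x, embed_store], axis=2)
print(inputs_with_embed.shape)  # (50, 7, 51)
```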

This is where I multiply the last LSTM output by the output weights:

val = tf.transpose(val, [1, 0, 2])  # (batch, num_steps, lstm_size) -> (num_steps, batch, lstm_size)
last = tf.gather(val, int(val.get_shape()[0]) - 1, name="lstm_state")  # output of the last time step
ws = tf.Variable(tf.truncated_normal([lstm_size, input_size]), name="w")
bias = tf.Variable(tf.constant(0.1, shape=[input_size]), name="b")
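With `ws` shaped `[lstm_size, input_size]`, the prediction `last @ ws + bias` comes out as `(batch, input_size)`. A NumPy sketch with assumed sizes (`lstm_size = 128` is hypothetical; `input_size = 4` matches the `[50,4]` in the traceback):

```python
import numpy as np

batch, lstm_size, input_size = 50, 128, 4  # lstm_size is a hypothetical value

last = np.zeros((batch, lstm_size))     # last time-step output of the LSTM
ws = np.zeros((lstm_size, input_size))  # output weight matrix
bias = np.zeros(input_size)

# Matrix product collapses lstm_size, leaving (batch, input_size)
pred = last @ ws + bias
print(pred.shape)  # (50, 4)
```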

Edit: All the TensorFlow graph code ran fine, but when I executed the session code, I got the following error:

InvalidArgumentError (see above for traceback): Incompatible shapes: [50,4] vs. [50,7,1]
   [[Node: sub = Sub[T=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"](add, _arg_input_y_0_4)]]
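The same mismatch can be reproduced with plain NumPy, using the two shapes from the traceback: broadcasting cannot reconcile the `4` against `7, 1` dimensions, just as TensorFlow's `Sub` op cannot.

```python
import numpy as np

pred = np.zeros((50, 4))  # prediction shape from the traceback
y = np.zeros((50, 7, 1))  # target placeholder shape from the traceback

# NumPy broadcasting rejects these shapes just like TensorFlow's Sub op did
try:
    pred - y
    broadcast_ok = True
except ValueError:
    broadcast_ok = False
print(broadcast_ok)  # False
```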

And this is my loss calculation:

loss = tf.reduce_mean(tf.square(pred - y), name="loss_mse_train")

Edit end

Can someone please tell me where I'm making the mistake?

Thanks!

  • The shape of `pred` is `[50,4]`, but the shape of `y_pred` is `[50,7,1]`. If you want to give each time step a predictive value, you should change `ws` to `[lstm_size, 7]` and `bias` to `[7]`. – giser_yugang Mar 02 '19 at 06:48
  • @giser_yugang: Thanks for your comment. I've updated my code. You are right. My prediction is 50X4 dimension. But I'm following the same steps you are suggesting. Can you please help me a bit more. Thanks a lot! – Beta Mar 02 '19 at 06:54
  • Sure, you can post your current mistakes. – giser_yugang Mar 02 '19 at 06:59
  • I've already put the updated code. It's in **Edit** section. Thanks! – Beta Mar 02 '19 at 07:02
  • Also, all my code, the error, and sample data are in the last link. You can refer to that as well. Thank you! – Beta Mar 02 '19 at 07:12

1 Answer


As I said, if you want to give each time step a predictive value, you should change `ws` to `[lstm_size, 7]` and `bias` to `[7]`.

ws = tf.Variable(tf.truncated_normal([lstm_size, 7]), name="w")
bias = tf.Variable(tf.constant(0.1, shape=[7]), name="b")

# need to change shape when pred=(?,7) and y=(?,7,1) 
loss = tf.reduce_mean(tf.square(pred - tf.squeeze(y)), name="loss_mse_train")
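With these changes the shapes line up: `pred` becomes `(batch, 7)` and `squeeze` drops the trailing size-1 axis of `y`. A NumPy sketch of the fixed subtraction (batch and step sizes taken from the error message):

```python
import numpy as np

batch, num_steps = 50, 7  # sizes from the error message

pred = np.zeros((batch, num_steps))  # after changing ws to [lstm_size, 7]
y = np.zeros((batch, num_steps, 1))  # target placeholder shape

# squeeze removes the trailing size-1 axis, so the shapes match
diff = pred - np.squeeze(y)
loss = np.mean(np.square(diff))
print(diff.shape, loss)  # (50, 7) 0.0
```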
giser_yugang
  • Perfect! Thanks a lot @giser_yugang! I had sleepless nights to solve this problem. And I got the answer from where I least expected. Thanks again :) – Beta Mar 02 '19 at 07:40