
I want to implement a recurrent neural network with GRU using Keras in Python. I have a problem running the code, and although I keep changing variables it still doesn't work. Do you have an idea of how to solve it?

```python
import time

import numpy as np
import tensorflow as tf

inputs = 42          # number of input columns (features)
num_hidden = 50      # number of neurons in the layer
outputs = 1          # number of output columns
num_epochs = 50
batch_size = 1000
learning_rate = 0.05

# train:  (125973, 42)  -> 125973 rows and 42 features
# labels: (125973, 1)   -> the true results

model = tf.contrib.keras.models.Sequential()
fv = tf.contrib.keras.layers.GRU

# I want to send batches to train
model.add(fv(units=42, activation='tanh', input_shape=(1000, 42), return_sequences=True))

# model.add(tf.keras.layers.Dropout(0.15))  # dropout against overfitting
# model.add(fv((1, 42), activation='tanh', return_sequences=True))
# model.add(Dropout(0.2))  # dropout against overfitting

model.add(fv(42, activation='tanh'))
model.add(tf.keras.layers.Dropout(0.15))  # dropout against overfitting

model.add(tf.keras.layers.Dense(1000, activation='softsign'))
# model.add(tf.keras.layers.Activation("softsign"))

start = time.time()
# sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
# model.compile(loss="mse", optimizer=sgd)
model.compile(loss="mse", optimizer="Adam")

inp = np.array(train)
oup = np.array(labels)
X_tr = inp[:batch_size].reshape(-1, batch_size, inputs)
model.fit(X_tr, labels, epochs=20, batch_size=batch_size)
```

However, I get the following error:

ValueError: Error when checking target: expected dense to have shape (1000,) but got array with shape (1,)
Mahdi.m
  • Could you tell me how I can change it? – Mahdi.m Oct 26 '18 at 10:41
    If one of the answers below resolved your issue, kindly *accept* it by clicking on the checkmark next to the answer to mark it as "answered" - see [What should I do when someone answers my question?](https://stackoverflow.com/help/someone-answers) – today Nov 15 '18 at 09:05

2 Answers


Here, you have set the input shape of the first layer to (1000, 42):

```python
model.add(fv(units=42, activation='tanh', input_shape=(1000, 42), return_sequences=True))  # I want to send batches to train
```

However, the shape of your training data (X_tr) does not match this. Check your X_tr variable and make sure its dimensions are consistent with the input shape the layer expects.
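
A minimal way to check this, assuming the variable names from the question (`X_tr` and `model` are the asker's objects):

```python
# Compare the data shape with what the model expects.
# Keras expects input data of shape (num_samples, num_timesteps, num_features),
# and the GRU's input_shape must equal (num_timesteps, num_features).
print(X_tr.shape)         # (1, 1000, 42) after reshape(-1, batch_size, inputs)
print(model.input_shape)  # (None, 1000, 42) for input_shape=(1000, 42)
```

With this setup the model sees only a single sample of 1000 timesteps, which is why the number of input samples (1) does not match the number of labels.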

SaiNageswar S

If you read the error carefully, you would realize there is a mismatch between the shape of the labels you provide, which is (None, 1), and the shape of the output of the model, which is (None, 1000):

```
ValueError: Error when checking target:  <--- the error concerns the target (output) shape
expected dense to have shape (1000,)     <--- the output shape of the model
but got array with shape (1,)            <--- the shape of the labels you pass when training
```

Therefore, you need to make them consistent. You just need to change the number of units in the last layer to 1, since there is one output per input sample:

```python
model.add(tf.keras.layers.Dense(1, activation='softsign'))  # 1 unit in the output
```
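
A quick check that the shapes now line up, assuming the model has been rebuilt with Dense(1) as its last layer and that `labels` is the (125973, 1) array from the question:

```python
# The model now predicts one value per input sample.
print(model.output_shape)      # (None, 1)
print(np.array(labels).shape)  # (125973, 1), compatible with the (None, 1) output
```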
today
  • I changed it, but an error occurs: ValueError: Input arrays should have the same number of samples as target arrays. Found 1 input samples and 125973 target samples. – Mahdi.m Oct 26 '18 at 15:19
  • @Mahdi.m Make sure the input data (`X_tr`) has a shape of `(num_samples, num_timesteps, num_features)` and the shape of the labels array (`labels`) is `(num_samples,)` or `(num_samples, 1)`. – today Oct 26 '18 at 15:22
  • `X_tr.shape, label.shape` is `((1000, 1, 42), (1000, 1))`, but I get ValueError: Error when checking input: expected gru_4_input to have shape (1000, 42) but got array with shape (1, 42) – Mahdi.m Oct 26 '18 at 16:47
  • @Mahdi.m Change the input shape of the GRU layer to `(1, 42)`. – today Oct 26 '18 at 17:24
  • It works! Thank you. I have one question: how can I send batches for training? Should I use a for loop? Could you tell me about it? – Mahdi.m Oct 26 '18 at 17:57
  • @Mahdi.m Actually, training is done in batches by default when you call the `fit` method. You can specify the `batch_size` argument in the `fit` call to control the number of input samples used in each batch (a consolidated sketch of the corrected setup follows these comments). – today Oct 26 '18 at 19:21
  • That's good, thank you! It's running, but the loss is 99. Why does that happen? – Mahdi.m Oct 26 '18 at 20:30
  • I have another question ... I have 100000 training samples and a batch size of 1000, and when I run it the loss is 100. Why does that happen? – Mahdi.m Nov 02 '18 at 09:21
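
A minimal sketch tying the thread together, assuming the shapes discussed in the comments above; `train` and `labels` are the asker's arrays, and everything else is illustrative rather than the exact code used:

```python
import numpy as np
import tensorflow as tf

inputs = 42
batch_size = 1000

# Reshape to (num_samples, num_timesteps, num_features) with one timestep per row,
# and keep one label per sample, as discussed in the comments.
X = np.array(train).reshape(-1, 1, inputs)   # (125973, 1, 42)
y = np.array(labels).reshape(-1, 1)          # (125973, 1)

model = tf.keras.models.Sequential([
    # input_shape must equal (num_timesteps, num_features) = (1, 42)
    tf.keras.layers.GRU(42, activation='tanh', input_shape=(1, 42), return_sequences=True),
    tf.keras.layers.GRU(42, activation='tanh'),
    tf.keras.layers.Dropout(0.15),
    tf.keras.layers.Dense(1, activation='softsign'),  # one output per sample
])
model.compile(loss='mse', optimizer='adam')

# fit() already iterates over the data in batches; no manual loop is needed.
model.fit(X, y, epochs=20, batch_size=batch_size)
```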