I want to use the Keras layer Flatten() or Reshape((-1,)) at the end of my model to output a 1D vector like [0, 0, 1, 0, 0, ..., 0, 0, 1, 0].

Unfortunately there is a problem because of my unknown input shape, which is input_shape=(4, None, 1).

So typically the input shape is something between [batch_size, 4, 64, 1] and [batch_size, 4, 256, 1], and the output should be batch_size x unknown dimension (for the first example above: [batch_size, 64], and for the second: [batch_size, 256]).
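To make the shapes concrete, this is roughly the data I want to feed in and get out (the batch_size of 8 and the width of 64 below are just picked for illustration):

import numpy as np

batch_size = 8
width = 64  # could just as well be 256, or anything in between

# network input: [batch_size, 4, width, 1]
X = np.zeros((batch_size, 4, width, 1), dtype=np.float32)

# desired network output: one flat binary vector per sample, [batch_size, width]
y = np.zeros((batch_size, width), dtype=np.float32)

print(X.shape)  # (8, 4, 64, 1)
print(y.shape)  # (8, 64)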
My model looks like:
from keras.models import Sequential
from keras.layers import Convolution2D, BatchNormalization, LeakyReLU, Activation, Reshape

model = Sequential()
model.add(Convolution2D(32, (4, 32), padding='same', input_shape=(4, None, 1)))
model.add(BatchNormalization())
model.add(LeakyReLU())
model.add(Convolution2D(1, (1, 2), strides=(4, 1), padding='same'))
model.add(Activation('sigmoid'))
# model.add(Reshape((-1,))) produces the error:
# int() argument must be a string, a bytes-like object or a number, not 'NoneType'
model.compile(loss='binary_crossentropy', optimizer='adadelta')
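To illustrate, a quick check with a dummy batch (batch of 2 and width 64 chosen arbitrarily) shows the shape the model currently produces:

import numpy as np

dummy = np.zeros((2, 4, 64, 1), dtype=np.float32)
print(model.output_shape)          # (None, 1, None, 1) -- the width axis stays unknown
print(model.predict(dummy).shape)  # (2, 1, 64, 1)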
So my current output shape is [batch_size, 1, unknown dimension, 1], which also prevents me from using class_weight, for example: "ValueError: class_weight not supported for 3+ dimensional targets."
Is it possible to use something like Flatten() or Reshape((-1,)) to flatten my 3-dimensional output in Keras (2.0.4 with TensorFlow backend) when I use a flexible input shape?
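One direction I considered, but am not sure about, is replacing Reshape/Flatten with a Lambda layer around a backend call such as K.batch_flatten, which flattens to (batch_size, -1) using the runtime tensor shape (this is only a sketch, I don't know if it is the right approach):

from keras import backend as K
from keras.layers import Lambda

# flatten everything except the batch axis using the dynamic shape,
# so the unknown width does not have to be known at build time
model.add(Lambda(lambda x: K.batch_flatten(x)))

Would something along these lines be the way to go?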
Thanks a lot!