
I am following the architecture below, from the paper:

[architecture diagram from the paper]

Now I need to concatenate layers from different parts of the network. I tried to concatenate the Conv_10 layer with Deconv_1, but I got an error about mismatched tensor sizes. So I need to transform the Conv_10 output from (3,44,44) to (3,34,34). How can I do this?

The current implementation of this network is available here: https://gist.github.com/brunojus/1a99b9d306b5b2f6853964fc972ebac3

Actual error: ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 34, 34, 3), (None, 44, 44, 3)]

2 Answers


You can use keras.layers.Reshape(target_shape) to reshape a layer's output, but the constraint is that the total number of elements in the target shape must equal the total number of elements in the input shape.

But your target shape (None, 34, 34, 3) cannot hold all the data from the input shape (None, 44, 44, 3): 34×34×3 = 3,468 elements versus 44×44×3 = 5,808, so Reshape cannot perform this conversion. You could discard data to go from (None, 44, 44, 3) down to (None, 34, 34, 3), for example by cropping, but that is not ideal because spatial information is lost.
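
For illustration, a minimal sketch (the exact error text varies by Keras version) of what happens when the element counts don't match:

from keras.layers import Input, Reshape

inp = Input(shape=(44, 44, 3))
try:
    # 44*44*3 = 5808 elements cannot be rearranged into 34*34*3 = 3468
    out = Reshape((34, 34, 3))(inp)
except ValueError as e:
    print(e)  # e.g. "total size of new array must be unchanged"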

However, you can zero-pad the smaller output, i.e. pad (None, 34, 34, 3) up to (None, 44, 44, 3), and then concatenate. ZeroPadding2D can add rows and columns of zeros at the top, bottom, left, and right of an image tensor.

Example: I have used Conv2D layers with the same shapes you mentioned in the question, i.e. [(None, 34, 34, 3), (None, 44, 44, 3)]:

from keras.models import Model
from keras.layers import Input, concatenate, Conv2D, ZeroPadding2D, Dense

input_img1 = Input(shape=(44,44,3))
x1 = Conv2D(3, (3, 3), activation='relu', padding='same')(input_img1)

input_img2 = Input(shape=(34,34,3))
x2 = Conv2D(3, (3, 3), activation='relu', padding='same')(input_img2)
# Zero-pad 5 rows/columns on each side of the image tensor: 34 + 5 + 5 = 44
x3 = ZeroPadding2D(padding=(5, 5))(x2)

# Concatenation works now that both branches have the same spatial size
x4 = concatenate([x1,x3])

output = Dense(18, activation='relu')(x4)

model = Model(inputs=[input_img1,input_img2], outputs=output)

model.summary()

Output -

Model: "model_19"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_85 (InputLayer)           (None, 34, 34, 3)    0                                            
__________________________________________________________________________________________________
input_84 (InputLayer)           (None, 44, 44, 3)    0                                            
__________________________________________________________________________________________________
conv2d_67 (Conv2D)              (None, 34, 34, 3)    84          input_85[0][0]                   
__________________________________________________________________________________________________
conv2d_66 (Conv2D)              (None, 44, 44, 3)    84          input_84[0][0]                   
__________________________________________________________________________________________________
zero_padding2d_11 (ZeroPadding2 (None, 44, 44, 3)    0           conv2d_67[0][0]                  
__________________________________________________________________________________________________
concatenate_27 (Concatenate)    (None, 44, 44, 6)    0           conv2d_66[0][0]                  
                                                                 zero_padding2d_11[0][0]          
__________________________________________________________________________________________________
dense_44 (Dense)                (None, 44, 44, 18)   126         concatenate_27[0][0]             
==================================================================================================
Total params: 294
Trainable params: 294
Non-trainable params: 0
__________________________________________________________________________________________________

keras.layers.Reshape(target_shape) only rearranges the feature map (or matrix). For example, it can reshape an array of shape (3,44,44) to, say, (3,22,88): since 44×44×3 = 5808 and 22×88×3 = 5808, the reshape is possible as long as the total size stays the same.
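
A minimal sketch of such a valid reshape, assuming channels-last inputs as in your error message:

from keras.models import Model
from keras.layers import Input, Reshape

inp = Input(shape=(44, 44, 3))
out = Reshape((22, 88, 3))(inp)   # allowed: 44*44*3 == 22*88*3 == 5808
Model(inp, out).summary()         # output shape: (None, 22, 88, 3)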

What you are trying to do here is a resize, and there is no Resize layer provided by Keras. This can be achieved by implementing a resizing/slicing function with a Keras Lambda layer.
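
A minimal sketch of the slicing approach (channels-last assumed): a Lambda that crops 5 pixels from each border, turning (None, 44, 44, 3) into (None, 34, 34, 3) so it can be concatenated with the smaller branch:

from keras.models import Model
from keras.layers import Input, Lambda

inp = Input(shape=(44, 44, 3))
# drop 5 rows/columns on every side: 44 - 5 - 5 = 34
cropped = Lambda(lambda x: x[:, 5:-5, 5:-5, :])(inp)
Model(inp, cropped).summary()   # output shape: (None, 34, 34, 3)

Note that the built-in keras.layers.Cropping2D(cropping=(5, 5)) achieves the same border crop without a Lambda.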
