
Suppose I break my input into two equal-sized pieces I1, I2 and I want the following structure in my Keras network: I1->A1, I2->A2, then [A1,A2]->B, where B is an output node. I can do this using groups as in 1. However, I want to require that the connection weights (and other activation parameters) for I1->A1 are the same as those for I2->A2, i.e. I want a symmetry between the 1's and the 2's. (Note that I don't require symmetry for [A1,A2]->B.)

ericf
1 Answer


If I understand your problem correctly, you want to map input_1 to A_1 and input_2 to A_2 with the same mapping function, i.e. with shared weights. If your two inputs form a sequence, you might consider an RNN; but since they are independent of each other, you can use the TimeDistributed wrapper in Keras instead. The sample below takes two inputs and applies a Dense layer to each of them in turn, so the weights of the Dense layer are shared:

from keras.models import Model
from keras.layers import Input, Dense, TimeDistributed, Concatenate, Lambda

x_dim = 5
hidden_dim = 8

# Each input carries a length-1 "time" axis so the two can be stacked.
x1 = Input(shape=(1, x_dim))
x2 = Input(shape=(1, x_dim))

# Stack the inputs along the time axis: shape (None, 2, x_dim).
concat = Concatenate(axis=1)([x1, x2])

# TimeDistributed applies the same Dense layer (same weights) to each timestep.
hidden_concat = TimeDistributed(Dense(hidden_dim))(concat)

# Split the shared-weight result back into the two branches.
hidden1 = Lambda(lambda x: x[:, :1, :])(hidden_concat)
hidden2 = Lambda(lambda x: x[:, 1:, :])(hidden_concat)

model = Model(inputs=[x1, x2], outputs=[hidden1, hidden2])
model.summary()

>>>
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_33 (InputLayer)           (None, 1, 5)         0                                            
__________________________________________________________________________________________________
input_34 (InputLayer)           (None, 1, 5)         0                                            
__________________________________________________________________________________________________
concatenate_17 (Concatenate)    (None, 2, 5)         0           input_33[0][0]                   
                                                                 input_34[0][0]                   
__________________________________________________________________________________________________
time_distributed_9 (TimeDistrib (None, 2, 8)         48          concatenate_17[0][0]             
__________________________________________________________________________________________________
lambda_8 (Lambda)               (None, 1, 8)         0           time_distributed_9[0][0]         
__________________________________________________________________________________________________
lambda_9 (Lambda)               (None, 1, 8)         0           time_distributed_9[0][0]         
==================================================================================================
Total params: 48
Trainable params: 48
Non-trainable params: 0
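
For completeness, weight sharing in Keras can also be expressed by calling a single layer instance on both inputs; each call reuses the same weights. Below is a minimal sketch of your full I1->A1, I2->A2, [A1,A2]->B structure written that way. The activation, the single-unit output B, and the layer names are placeholder choices, not something from your question:

from keras.models import Model
from keras.layers import Input, Dense, Concatenate

x_dim = 5
hidden_dim = 8

x1 = Input(shape=(x_dim,))
x2 = Input(shape=(x_dim,))

# One Dense instance; calling it on both inputs shares its weights.
shared = Dense(hidden_dim, activation='relu')
a1 = shared(x1)  # I1 -> A1
a2 = shared(x2)  # I2 -> A2

# The [A1, A2] -> B stage gets its own, unshared weights.
b = Dense(1)(Concatenate()([a1, a2]))

model = Model(inputs=[x1, x2], outputs=b)
model.summary()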
meowongac
  • Thanks! That's exactly what I was looking for. (RNNs aren't appropriate for my problem.) – ericf Aug 23 '19 at 23:21