
I'm trying to adapt a Keras InfoGAN implementation so that I can extract the transfer values (embeddings) for an image I feed to the discriminator. I then want to run a similarity search over the resulting vectors to find the n images in the dataset most similar to the one provided.
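
Just to make the end goal concrete, here is a rough sketch of the similarity search I plan to run once I have one embedding per image (the function and variable names are my own placeholders, not from any existing implementation):

    import numpy as np

    def most_similar(query_vec, dataset_vecs, n=5):
        # Cosine similarity between the query embedding and every dataset embedding.
        dataset_norm = dataset_vecs / np.linalg.norm(dataset_vecs, axis=1, keepdims=True)
        query_norm = query_vec / np.linalg.norm(query_vec)
        scores = dataset_norm.dot(query_norm)
        # Indices of the n most similar images, best match first.
        return np.argsort(scores)[::-1][:n]

So the part I'm missing is how to produce `query_vec` and `dataset_vecs` from the trained discriminator.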

I want to use Keras, so I'm looking at this implementation as a reference:

I found this TensorFlow 0.11 implementation where they provide functionality that achieves the similarity goal, but I'm having trouble accomplishing something similar in Keras.

Put more simply, I want to understand which layer of the discriminator is best to take the transfer values from, and how to pull them out of a trained model in Keras; a rough sketch of what I have in mind follows the snippet below. The discriminator layers:

    from keras.layers import Convolution2D, LeakyReLU, BatchNormalization, Flatten, Dense
    from keras.models import Model
    from keras.optimizers import Adam

    # Convolutional trunk of the discriminator.
    x = Convolution2D(64, (4, 4), strides=(2, 2))(self.d_input)
    x = LeakyReLU(0.1)(x)
    x = Convolution2D(128, (4, 4), strides=(2, 2))(x)
    x = LeakyReLU(0.1)(x)
    x = BatchNormalization()(x)
    x = Flatten()(x)
    x = Dense(1024)(x)
    x = LeakyReLU(0.1)(x)
    # Store this to set up Q (this 1024-d activation is what I was thinking of using as the embedding).
    self.d_hidden = BatchNormalization()(x)
    self.d_output = Dense(1, activation='sigmoid', name='d_output')(self.d_hidden)

    self.discriminator = Model(inputs=[self.d_input], outputs=[self.d_output], name='dis_model')
    self.opt_discriminator = Adam(lr=2e-4)
    self.discriminator.compile(loss='binary_crossentropy',
                               optimizer=self.opt_discriminator)
