
I have created and trained an autoencoder using Keras. After training this model I want to keep only the encoder part, so I popped off the decoder layers with pop().
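
Roughly speaking, the pop() step was something like this (the number of decoder layers below is just illustrative):

    # Illustrative only: strip the decoder layers from the trained
    # autoencoder so that only the encoder layers remain.
    num_decoder_layers = 6  # hypothetical; use however many layers your decoder has
    for _ in range(num_decoder_layers):
        autoencoder.layers.pop()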

Then I created a Sequential() model based on the remaining layers of my autoencoder:

    model_seq = Sequential(layers=autoencoder.layers)

To add the Flatten() layer, I did:

    l_out = Flatten()(model_seq.output)
    model_seq.layers.append(l_out)

In my mind this should be enough, so I called model_seq.summary() to check that everything was OK. Unfortunately, I got this error:

    model_seq.summary()
    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    input_1 (InputLayer)         (None, 256, 256, 1)       0         
    _________________________________________________________________
    conv2d_1 (Conv2D)            (None, 256, 256, 32)      320       
    _________________________________________________________________
    max_pooling2d_1 (MaxPooling2 (None, 128, 128, 32)      0         
    _________________________________________________________________
    conv2d_2 (Conv2D)            (None, 128, 128, 64)      18496     
    _________________________________________________________________
    max_pooling2d_2 (MaxPooling2 (None, 64, 64, 64)        0         
    _________________________________________________________________
    conv2d_3 (Conv2D)            (None, 64, 64, 128)       73856     
    _________________________________________________________________
    Traceback (most recent call last):

      File "<ipython-input-49-cb26bbc86f4b>", line 1, in <module>
        model_seq.summary()

      File "C:\Users\helde\Miniconda3\lib\site-packages\keras\engine\topology.py", line 2740, in summary
        print_fn=print_fn)

      File "C:\Users\helde\Miniconda3\lib\site-packages\keras\utils\layer_utils.py", line 150, in print_summary
        print_layer_summary(layers[i])

      File "C:\Users\helde\Miniconda3\lib\site-packages\keras\utils\layer_utils.py", line 110, in print_layer_summary
        fields = [name + ' (' + cls_name + ')', output_shape, layer.count_params()]

    AttributeError: 'Tensor' object has no attribute 'count_params'

The part where summary() raises the error is exactly where the Flatten layer should be.

Did I miss something?

    You are not using the right approach: you are mixing the functional and sequential APIs. Just use the functional API and get the encoder by building a model from the encoder layers. – Dr. Snoopy Jun 09 '18 at 15:12

1 Answer

It seems to me like you are mixing Sequential and Functional APIs. What about model_seq.add(Flatten())?
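
For instance, a minimal sketch reusing the names from your question (after dropping the manual append to model_seq.layers):

    from keras.layers import Flatten

    # Let the Sequential model register Flatten as a proper layer
    # instead of appending a raw output tensor to model_seq.layers.
    model_seq.add(Flatten())
    model_seq.summary()  # the flatten layer should now appear at the end

model_seq.layers is a plain Python list, so appending the tensor returned by Flatten()(model_seq.output) to it bypasses Keras's bookkeeping; summary() then finds a Tensor where it expects a layer, which is exactly the 'Tensor' object has no attribute 'count_params' error you see.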

  • Indeed, `model_seq.add(Flatten())` worked! Is there any way to test whether it is working? I generated an np array but it didn't work. – Helder Jun 09 '18 at 16:17
  • What didn't work? (as a side note: I agree with @Matias_Valdenegro - going full `Functional` would probably make things easier) – benjaminplanche Jun 09 '18 at 16:29
  • It didn't work when I tested the whole network with the Flatten as the last layer... I would like to feed it a matrix and get back a vector, because this encoder part would be a step before passing the flattened vector to the classifier (a sketch of this follows these comments)... – Helder Jun 09 '18 at 16:35
  • More information may be necessary to help you further. I'd suggest to open a new thread or update your original question, with your updated code and its trace (c.f. [mcve]). – benjaminplanche Jun 09 '18 at 16:41
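
As a rough illustration of the "go fully functional" suggestion from the comments, and of the kind of NumPy sanity check asked about above (the layer name conv2d_3 is taken from the summary in the question; everything else is assumed):

    import numpy as np
    from keras.models import Model
    from keras.layers import Flatten

    # Build the encoder with the functional API: take the autoencoder's
    # input and the output of its last encoder layer, then flatten it.
    encoder_output = autoencoder.get_layer('conv2d_3').output
    flat = Flatten()(encoder_output)
    encoder = Model(inputs=autoencoder.input, outputs=flat)

    # Sanity check: one 256x256 grayscale image in, one feature vector out.
    x = np.random.rand(1, 256, 256, 1).astype('float32')
    features = encoder.predict(x)
    print(features.shape)  # expected: (1, 524288), i.e. 64 * 64 * 128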