29

These are two methods for creating a Keras model, but the output shapes shown by summary() differ between them. Obviously, the former (functional API) prints more information, which makes it easier to check the correctness of the network.

import tensorflow as tf
from tensorflow.keras import Input, layers, Model

class subclass(Model):
    def __init__(self):
        super(subclass, self).__init__()
        self.conv = layers.Conv2D(28, 3, strides=1)

    def call(self, x):
        return self.conv(x)


def func_api():
    x = Input(shape=(24, 24, 3))
    y = layers.Conv2D(28, 3, strides=1)(x)
    return Model(inputs=[x], outputs=[y])

if __name__ == '__main__':
    func = func_api()
    func.summary()

    sub = subclass()
    sub.build(input_shape=(None, 24, 24, 3))
    sub.summary()

output:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 24, 24, 3)         0         
_________________________________________________________________
conv2d (Conv2D)              (None, 22, 22, 28)        784       
=================================================================
Total params: 784
Trainable params: 784
Non-trainable params: 0
_________________________________________________________________
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            multiple                  784       
=================================================================
Total params: 784
Trainable params: 784
Non-trainable params: 0
_________________________________________________________________

So, how can I get the output shapes in summary() when using the subclassing method?

Gary

8 Answers

32

I used this method to solve the problem; I don't know if there is an easier way.

class subclass(Model):
    def __init__(self):
        ...
    def call(self, x):
        ...

    def model(self):
        x = Input(shape=(24, 24, 3))
        return Model(inputs=[x], outputs=self.call(x))



if __name__ == '__main__':
    sub = subclass()
    sub.model().summary()
Gary

  • Can you explain why this works? Especially the `outputs=self.call(x)` part. – Gilfoyle Sep 07 '20 at 17:17
  • @Samuel By evaluating `outputs=self.call(x)`, the `subclass.call(self, x)` method is invoked. This triggers shape computation in the encapsulating instance. Furthermore, the returned instance of `Model` also computes its own shape, which is reported in `.summary()`. The primary problem with this approach is that the input shape is constant (`shape=(24, 24, 3)`), so if you need a dynamic solution, this won't work. – Rob Hall Nov 20 '20 at 12:13
  • Can you explain what goes in the `...`? Is this a general solution, or do you need model-specific code in those calls? – GuySoft Jan 26 '21 at 14:31
  • @GuySoft The `...` in `__init__` instantiates your layers, while the `...` in `call` connects the layers to build the network. It's generic for all subclassed Keras models. – DeWil Jun 01 '21 at 18:01
  • Would it not be better/easier to just override the `summary` function? I.e., `def summary(self):` `x = ...` `dummy_model = Model(...)` `dummy_model.summary()`. You could even take the shape as an input variable of the new summary function. – Lu Kas Oct 05 '22 at 15:30
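Building on Rob Hall's and Lu Kas's comments above, the helper can take the input shape as a parameter instead of hard-coding it. A minimal sketch of that variant (my own, not part of the original answer):

from tensorflow.keras import Input, layers, Model

class subclass(Model):
    def __init__(self):
        super(subclass, self).__init__()
        self.conv = layers.Conv2D(28, 3, strides=1)

    def call(self, x):
        return self.conv(x)

    def model(self, input_shape):
        # Wrap call() in a functional Model built for the given input shape
        x = Input(shape=input_shape)
        return Model(inputs=[x], outputs=self.call(x))

if __name__ == '__main__':
    sub = subclass()
    sub.model((24, 24, 3)).summary()  # shape chosen per call, not hard-coded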
9

The way I solved the problem is very similar to what Elazar mentioned: override the summary() function in the class subclass. Then you can directly call summary() while using model subclassing:

class subclass(Model):
    def __init__(self):
        ...
    def call(self, x):
        ...

    def summary(self):
        x = Input(shape=(24, 24, 3))
        model = Model(inputs=[x], outputs=self.call(x))
        return model.summary()

if __name__ == '__main__':
    sub = subclass()
    sub.summary()
jhihan
  • Is there any advantage over Elazar's solution? I like your approach since it is more succinct. – MOON Apr 15 '22 at 12:26
  • @MOON I believe they are essentially the same, though this one is neater. It's not a TF trick, just a basic OOP technique in Python. – LIU Qingyuan Aug 18 '22 at 11:41
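Along the same lines, this override can take the input shape as an argument rather than hard-coding it. A minimal sketch (my own, filling in the `...` with the Conv2D layer from the question):

from tensorflow.keras import Input, layers, Model

class subclass(Model):
    def __init__(self):
        super(subclass, self).__init__()
        self.conv = layers.Conv2D(28, 3, strides=1)

    def call(self, x):
        return self.conv(x)

    def summary(self, input_shape=(24, 24, 3)):
        # Build a throwaway functional Model around call() purely for reporting
        x = Input(shape=input_shape)
        dummy_model = Model(inputs=[x], outputs=self.call(x))
        return dummy_model.summary()

if __name__ == '__main__':
    subclass().summary(input_shape=(32, 32, 3))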
7

I guess the key point is the _init_graph_network method of the class Network, which is the parent class of Model. _init_graph_network is called if you specify the inputs and outputs arguments when calling the __init__ method.

So there are two possible methods:

  1. Manually calling the _init_graph_network method to build the graph of the model.
  2. Reinitializing with super().__init__, passing the input layer and output.

Both methods need the input layer and the output (obtained from self.call).

Now calling summary() will give the exact output shape. However, it will show an Input layer, which isn't part of a subclassed Model.

from tensorflow import keras
from tensorflow.keras import layers as klayers

class MLP(keras.Model):
    def __init__(self, input_shape=(32,), **kwargs):  # note: (32,) is a 1-tuple; (32) would be just the int 32
        super(MLP, self).__init__(**kwargs)
        # Add input layer
        self.input_layer = klayers.Input(input_shape)

        self.dense_1 = klayers.Dense(64, activation='relu')
        self.dense_2 = klayers.Dense(10)

        # Get output layer with `call` method
        self.out = self.call(self.input_layer)

        # Reinitial
        super(MLP, self).__init__(
            inputs=self.input_layer,
            outputs=self.out,
            **kwargs)

    def build(self):
        # Initialize the graph
        self._is_graph_network = True
        self._init_graph_network(
            inputs=self.input_layer,
            outputs=self.out
        )

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

if __name__ == '__main__':
    mlp = MLP((16,))
    mlp.summary()

The output will be:

Model: "mlp_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, 16)]              0         
_________________________________________________________________
dense (Dense)                (None, 64)                1088      
_________________________________________________________________
dense_1 (Dense)              (None, 10)                650       
=================================================================
Total params: 1,738
Trainable params: 1,738
Non-trainable params: 0
_________________________________________________________________
hankchen1728
4

I analyzed Adi Shumely's answer:

  • Adding an input_shape should not be needed, since you already set it as a parameter in build()
  • Adding an Input layer does nothing to the model by itself; it is only passed as a parameter to the call() method
  • Adding the so-called output is not how I see it. The only, and most important, thing it does is invoke the call() method.

So I came up with this solution, which does not need any modification of the model itself: it just completes the model, as built, before the call to summary(), by calling the model's call() method with an Input tensor. I tried it on my own model and on the three models presented in this thread, and it works so far.

From the first post of this thread:

import tensorflow as tf
from tensorflow.keras import Input, layers, Model

class subclass(Model):
    def __init__(self):
        super(subclass, self).__init__()
        self.conv = layers.Conv2D(28, 3, strides=1)

    def call(self, x):
        return self.conv(x)

if __name__ == '__main__':
    sub = subclass()
    sub.build(input_shape=(None, 24, 24, 3))

    # Adding this call to the call() method solves it all
    sub.call(Input(shape=(24, 24, 3)))

    # And the summary() outputs all the information
    sub.summary()

From the second post of the thread:

from tensorflow import keras
from tensorflow.keras import layers as klayers

class MLP(keras.Model):
    def __init__(self, **kwargs):
        super(MLP, self).__init__(**kwargs)
        self.dense_1 = klayers.Dense(64, activation='relu')
        self.dense_2 = klayers.Dense(10)

    def call(self, inputs):
        x = self.dense_1(inputs)
        return self.dense_2(x)

if __name__ == '__main__':
    mlp = MLP()
    mlp.build(input_shape=(None, 16))
    mlp.call(klayers.Input(shape=(16,)))
    mlp.summary()

And from the last post of the thread:

import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, **kwargs):
        super(MyModel, self).__init__(**kwargs)
        self.dense10 = tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)
        self.dense20 = tf.keras.layers.Dense(20, activation=tf.keras.activations.softmax)

    def call(self, inputs):
        x = self.dense10(inputs)
        y_pred = self.dense20(x)
        return y_pred

model = MyModel()
model.build(input_shape=(None, 32, 32, 1))
model.call(tf.keras.layers.Input(shape=(32, 32, 1)))
model.summary()
aigookdo
2

I had the same problem and fixed it in three steps:

  1. add an input_shape argument to __init__
  2. add an input layer
  3. add an output layer (self.out)

class MyModel(tf.keras.Model):
    def __init__(self, input_shape=(32, 32, 1), **kwargs):
        super(MyModel, self).__init__(**kwargs)
        self.input_layer = tf.keras.layers.Input(input_shape)
        self.dense10 = tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)
        self.dense20 = tf.keras.layers.Dense(20, activation=tf.keras.activations.softmax)
        self.out = self.call(self.input_layer)

    def call(self, inputs):
        x = self.dense10(inputs)
        y_pred = self.dense20(x)
        return y_pred

model = MyModel()
# x_test is assumed to be your test data, e.g. an array of shape (N, 32, 32, 1)
model(x_test[:99])
print('x_test[:99].shape:', x_test[:99].shape)
model.summary()

output:

x_test[:99].shape: (99, 32, 32, 1)
Model: "my_model_32"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_79 (Dense)             (None, 32, 32, 10)        20        
_________________________________________________________________
dense_80 (Dense)             (None, 32, 32, 20)        220       
=================================================================
Total params: 240
Trainable params: 240
Non-trainable params: 0

Adi Shumely
1

I used this method to solve the problem, tested on TensorFlow 2.1 and TensorFlow 2.4.1. It declares the InputLayer via model.inputs_layer:

import tensorflow as tf
from tensorflow.keras import layers

class Logistic(tf.keras.models.Model):
    def __init__(self, hidden_size = 5, output_size=1, dynamic=False, **kwargs):
        '''
        name: String name of the model.
        dynamic: (Subclassed models only) Set this to `True` if your model should
            only be run eagerly, and should not be used to generate a static
            computation graph. This attribute is automatically set for Functional API
            models.
        trainable: Boolean, whether the model's variables should be trainable.
        dtype: (Subclassed models only) Default dtype of the model's weights (
            default of `None` means use the type of the first input). This attribute
            has no effect on Functional API models, which do not have weights of their
            own.
        '''
        super().__init__(dynamic=dynamic, **kwargs)
        self.inputs_ = tf.keras.Input(shape=(2,), name="hello")
        self._set_input_layer(self.inputs_)
        self.hidden_size = hidden_size
        self.dense = layers.Dense(hidden_size, name = "linear")
        self.outlayer = layers.Dense(output_size, 
                        activation = 'sigmoid', name = "out_layer")
        self.build()
        

    def _set_input_layer(self, inputs):
        """add inputLayer to model and display InputLayers in model.summary()

        Args:
            inputs ([dict]): the result from `tf.keras.Input`
        """
        if isinstance(inputs, dict):
            self.inputs_layer = {n: tf.keras.layers.InputLayer(input_tensor=i, name=n) 
                                    for n, i in inputs.items()}
        elif isinstance(inputs, (list, tuple)):
            self.inputs_layer = [tf.keras.layers.InputLayer(input_tensor=i, name=i.name) 
                                    for i in inputs]
        elif tf.is_tensor(inputs):
            self.inputs_layer = tf.keras.layers.InputLayer(input_tensor=inputs, name=inputs.name)
    
    def build(self):
        super(Logistic, self).build(self.inputs_.shape if tf.is_tensor(self.inputs_) else self.inputs_)
        _ = self.call(self.inputs_)
    

    def call(self, X):
        X = self.dense(X)
        Y = self.outlayer(X)
        return Y

model = Logistic()
model.summary()

output:

Model: "logistic"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
hello:0 (InputLayer)         [(None, 2)]               0         
_________________________________________________________________
linear (Dense)               (None, 5)                 15        
_________________________________________________________________
out_layer (Dense)            (None, 1)                 6         
=================================================================
Total params: 21
Trainable params: 21
Non-trainable params: 0
_________________________________________________________________
1

I added only one line (shown below) to your code.

self.call(Input(shape=(24, 24, 3)))

My code is:

import tensorflow as tf
from tensorflow.keras import Input, layers, Model

class subclass(Model):
    def __init__(self):
        super(subclass, self).__init__()
        self.conv = layers.Conv2D(28, 3, strides=1)
    
        # add this code
        self.call(Input(shape=(24, 24, 3)))

    def call(self, x):
        return self.conv(x)


def func_api():
    x = Input(shape=(24, 24, 3))
    y = layers.Conv2D(28, 3, strides=1)(x)
    return Model(inputs=[x], outputs=[y])

if __name__ == '__main__':
    func = func_api()
    func.summary()

    sub = subclass()
    sub.build(input_shape=(None, 24, 24, 3))
    sub.summary()

result:

Model: "model"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 24, 24, 3)]       0
_________________________________________________________________
conv2d (Conv2D)              (None, 22, 22, 28)        784
=================================================================
Total params: 784
Trainable params: 784
Non-trainable params: 0
_________________________________________________________________
Model: "subclass"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 22, 22, 28)        784
=================================================================
Total params: 784
Trainable params: 784
Non-trainable params: 0
_________________________________________________________________
0

Gary's answer works. However, for even more convenience, I wanted to access the summary method of keras.Model transparently from my custom class objects.

This can be done easily by implementing the built-in __getattr__ method (more info can be found in the official Python docs) as follows:

from tensorflow.keras import Input, layers, Model

class MyModel():
    def __init__(self):
        self.model = self.get_model()

    def get_model(self):
        # here we use the usual Keras functional API
        x = Input(shape=(24, 24, 3))
        y = layers.Conv2D(28, 3, strides=1)(x)
        return Model(inputs=[x], outputs=[y])

    def __getattr__(self, name):
        """
        Forward attribute/method lookups that fail on MyModel to self.model.
        Thus, any method of keras.Model can be used transparently from a MyModel object.
        """
        return getattr(self.model, name)


if __name__ == '__main__':
    mymodel = MyModel()
    mymodel.summary()  # under the hood, this calls MyModel.model.summary()
reexyyl