
So I am trying to decipher some code from here. Below I have copy-pasted the relevant code that I don't really understand.

def layer(op):
    '''Decorator for composable network layers.'''

    def layer_decorated(self, *args, **kwargs):
        # Automatically set a name if not provided.
        name = kwargs.setdefault('name', self.get_unique_name(op.__name__))
        # Figure out the layer inputs.
        if len(self.terminals) == 0:
            raise RuntimeError('No input variables found for layer %s.' % name)
        elif len(self.terminals) == 1:
            layer_input = self.terminals[0]
        else:
            layer_input = list(self.terminals)
        # Perform the operation and get the output.
        layer_output = op(self, layer_input, *args, **kwargs)
        # Add to layer LUT.
        self.layers[name] = layer_output
        # This output is now the input for the next layer.
        self.feed(layer_output)
        # Return self for chained calls.
        return self

    return layer_decorated

class Network(object):

    def __init__(self, inputs, trainable=True):
        # The input nodes for this network
        self.inputs = inputs
        print(self.inputs)
        # The current list of terminal nodes
        self.terminals = []
        # Mapping from layer names to layers
        self.layers = dict(inputs)
        print(self.layers)
        # If true, the resulting variables are set as trainable
        self.trainable = trainable

    …

    def feed(self, *args):
        '''Set the input(s) for the next operation by replacing the terminal nodes.
        The arguments can be either layer names or the actual layers.
        '''
        assert len(args) != 0
        self.terminals = []
        for fed_layer in args:
            if isinstance(fed_layer, string_types):  # string_types comes from the six library
                try:
                    fed_layer = self.layers[fed_layer]
                except KeyError:
                    raise KeyError('Unknown layer name fed: %s' % fed_layer)
            self.terminals.append(fed_layer)
        return self

    …

    # equivalent to max_pool = layer(max_pool)
    @layer
    def max_pool(self, inp, k_h, k_w, s_h, s_w, name, padding='SAME'):
        self.validate_padding(padding)
        return tf.nn.max_pool(inp,
                              ksize=[1, k_h, k_w, 1],
                              strides=[1, s_h, s_w, 1],
                              padding=padding,
                              name=name)

I understand the above code, although I'm having a bit of trouble trying to understand the code below:

class PNet(Network):
    def setup(self):
        (self.feed('data') 
             .conv(3, 3, 10, 1, 1, padding='VALID', relu=False, name='conv1')
             .prelu(name='PReLU1')
             .max_pool(2, 2, 2, 2, name='pool1')
             .conv(3, 3, 16, 1, 1, padding='VALID', relu=False, name='conv2')
             .prelu(name='PReLU2')
             .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3')
             .prelu(name='PReLU3')
             .conv(1, 1, 2, 1, 1, relu=False, name='conv4-1')
             .softmax(3, name='prob1'))

        (self.feed('PReLU3') #pylint: disable=no-value-for-parameter
             .conv(1, 1, 4, 1, 1, relu=False, name='conv4-2'))

In particular, I'm confused about how this part of the code works:

(self.feed('data')
     .conv(3, 3, 10, 1, 1, padding='VALID', relu=False, name='conv1')
     .prelu(name='PReLU1')
     .max_pool(2, 2, 2, 2, name='pool1')
     .conv(3, 3, 16, 1, 1, padding='VALID', relu=False, name='conv2')
     .prelu(name='PReLU2')
     .conv(3, 3, 32, 1, 1, padding='VALID', relu=False, name='conv3')
     .prelu(name='PReLU3')
     .conv(1, 1, 2, 1, 1, relu=False, name='conv4-1')
     .softmax(3, name='prob1'))

It can also be written as: self.feed('data').conv(3, 3, 10, 1, 1, padding='VALID', relu=False, name='conv1').prelu(name='PReLU1')...
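To make that layout equivalence concrete, here is a toy stand-in (using plain string methods, not the real Network class): the stacked form is just the one-line chain split across lines inside parentheses.

```python
# The stacked layout and the one-liner are the same expression;
# the outer parentheses only allow the line breaks.
s1 = ('hello world'
          .upper()
          .replace('WORLD', 'there')
          .split())
s2 = 'hello world'.upper().replace('WORLD', 'there').split()
print(s1 == s2)  # True
```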

And this is what I don't understand: feed itself is a method of the Network class, so how am I able to call further methods like conv and prelu on its result?

YellowPillow
  • This has nothing to do with decorators. There is a `return self` in `feed()`, same for the other methods presumably. `self` is just another reference to the current instance, so the next method in the chain is called on `self` again, so you can call another method, etc. The decorator, too, returns `self`. – Martijn Pieters Mar 03 '17 at 09:56

2 Answers


This has nothing to do with decorators.

The `feed` method - as well as, presumably, the `conv` and `prelu` methods - returns `self`. This means you can continue to call methods on the result of calling that method.

This is known as "method chaining"; it's more common in languages like Ruby, but you can do it in Python too.
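Here is a minimal sketch of the pattern (a hypothetical `Builder` class, not the actual Network code): each method mutates the object and then returns it, so calls can be strung together.

```python
class Builder:
    def __init__(self):
        self.steps = []

    def add(self, step):
        self.steps.append(step)
        return self  # returning self is what enables chaining

# Each .add() returns the same Builder, so the next .add() works on it.
b = Builder().add('conv1').add('prelu1').add('pool1')
print(b.steps)  # ['conv1', 'prelu1', 'pool1']
```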

Daniel Roseman

It's pretty simple, really.

When you start the chain, you call feed() on the object self. If you look into the source code, feed() executes and returns self - but it's the modified self. At this point feed() is 'consumed' and you are left with something like (modified) self.conv()..., and the same thing repeats with each following method until there are no calls left in the chain.
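To illustrate that the chain keeps operating on one and the same instance, here is a hypothetical toy version (invented `Net`, `feed`, and `conv`, not the real code): unrolling the chain step by step produces the same object and the same state.

```python
class Net:
    def __init__(self):
        self.log = []

    def feed(self, name):
        self.log.append(('feed', name))
        return self

    def conv(self, name):
        self.log.append(('conv', name))
        return self

# Chained form.
chained = Net().feed('data').conv('conv1')

# Unrolled form: each intermediate result is the same object.
step = Net()
tmp = step.feed('data')   # tmp is the very same object as step
tmp = tmp.conv('conv1')

print(chained.log == step.log)  # True
print(tmp is step)              # True: the whole chain modified one instance
```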

Tomasz Plaskota