
I get the error AttributeError: 'NoneType' object has no attribute '_inbound_nodes' while trying to create a Keras model with

model = Model(inputs=input, outputs=out)

From my understanding of other questions here on Stack Overflow (e.g. Q1, Q2, Q3, Q4) about the same error, the trick is to connect input to out using only Keras layer objects, even if that means using Lambda. I am fairly sure that I did that.

My code is as follows:

from keras import backend as K
import keras
from keras.layers import Layer, Activation, Conv1D, Lambda, Concatenate, Add
from keras.layers import BatchNormalization  # older Keras versions: keras.layers.normalization

def create_resnet_model(input_shape, block_channels, repetitions, layer_class, batchnorm=False):
    input = keras.Input(shape=input_shape)

    x = K.identity(input)

    resdim = sum(block_channels[-1]) if hasattr(block_channels[-1], "__iter__") else block_channels[-1]

    def zero_pad_input(z):
        pad_shape = K.concatenate([K.shape(z)[:2], [1 + resdim - input_shape[-1]]])
        return K.concatenate([z, K.zeros(pad_shape)], axis=-1)

    def add_mask_dim(z):
        return K.concatenate([K.zeros_like(z[:, :, :1]), z], axis=-1)

    padded_input = Lambda(zero_pad_input)(input)

    def extract_features(z):
        return z[:, :, 1:]

    for block in range(repetitions):

        for args in block_channels:
            if not hasattr(args, "__iter__"):
                args = (args, )
            layer = layer_class(*args)
            y = layer(x)
            y_f = Lambda(extract_features)(y)
            if batchnorm:
                bn = BatchNormalization(axis=-1)  # all other arguments left at their defaults
                y_f = bn(y_f)
            y_f = Activation("relu")(y_f)
            y = Lambda(add_mask_dim)(y_f)
        if block == 0:
            x = Add()([y, padded_input])
        else:
            x = Add()([x, y])

    out = Conv1D(filters=1, kernel_size=1, activation="linear", padding="same")(x)

    model = keras.Model(inputs=input, outputs=out)

    return model

Where layer_class is a Keras layer class. So it seems to me that everything from input to out is transformed using Keras layers; even for the additions I use Add.

patapouf_ai

1 Answer


I found the problem.

x = K.identity(input)

is not a Keras layer! K.identity is a backend function, so it returns a raw backend tensor carrying no Keras layer metadata. When Model traces the graph from out back to input, it hits that tensor's missing layer record and raises the '_inbound_nodes' AttributeError.

Changing that line for

def identity(z):
    return z

x = Lambda(identity)(input)

solves the problem.
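More generally, any backend op applied directly to a tensor has to be wrapped in a Lambda layer to keep the graph connected. A minimal sketch of the pattern (using tf.keras here for self-containment; the question imports the standalone keras package, but the rule is the same):

```python
# Identity wrapped in a Lambda layer keeps input -> out connected,
# so Model() can trace the graph and build without error.
from tensorflow.keras import Input, Model
from tensorflow.keras.layers import Dense, Lambda

inp = Input(shape=(4,))

# x = K.identity(inp)         # raw backend op: breaks the layer graph
x = Lambda(lambda z: z)(inp)  # same identity, as a proper Keras layer

out = Dense(1)(x)
model = Model(inputs=inp, outputs=out)
```

The named identity function in the answer and the inline lambda here are equivalent; either way the op becomes a Keras layer with the bookkeeping Model expects.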

patapouf_ai