
I have a U-Net model written in TensorFlow for a segmentation problem. I want to improve my segmentation with the same amount of training data, and I was thinking of adding a level set method module to the output and then calculating the loss. Something like this: https://arxiv.org/pdf/1705.06260.pdf

But I don't know how to modify the output of the last layer in TensorFlow:

import tensorflow as tf

def amodel(pretrained_weights=None,
           input_size=(512, 512, 1),
           act="relu"):
    inputs = tf.keras.layers.Input(input_size)
    conv1 = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(inputs)
    model = tf.keras.Model(inputs=inputs, outputs=conv1)

    # model.compile(optimizer=Adam(lr=1e-4),
    #               loss='binary_crossentropy', metrics=['accuracy'])

    # lr_scheduler, combo_loss and dice_accuracy are defined elsewhere in my code
    model.compile(optimizer=tf.keras.optimizers.Adam(lr_scheduler),
                  loss=combo_loss(alpha=0, beta=0.4),
                  metrics=[dice_accuracy])
    return model

How do you apply a transformation to conv1 before passing it to tf.keras.Model?

Thank you


1 Answer


It seems like you want to use a Lambda layer:

https://keras.io/api/layers/core_layers/lambda/

After you create the new layer, you just need to pass it the conv1 layer as input, something like this:

inputs = tf.keras.layers.Input(input_size)
conv1 = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(inputs)
lambda_layer = tf.keras.layers.Lambda(normalizer)(conv1)
model = tf.keras.Model(inputs=inputs, outputs=lambda_layer)

You just need to define the function that the Lambda layer has to call, for example:

def normalizer(x):
  # x has shape (batch, height, width, channels); this example assumes
  # at least two channels and rescales the second channel by the ratio
  # of the two channels' sums.
  a = x[:, :, :, 0]
  b = x[:, :, :, 1]
  asum = tf.keras.backend.sum(a)
  bsum = tf.keras.backend.sum(b)
  ratio = tf.math.divide(asum, bsum)
  return tf.multiply(b, ratio)

where x is the output of conv1. Note that conv1 in your code has a single channel, so whatever function you pass to Lambda has to work on a (batch, height, width, 1) tensor.
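
Since you mentioned level sets, here is a minimal end-to-end sketch of the same wiring with a level-set-style transformation on your single-channel output. The heaviside function and the eps value below are illustrative assumptions on my part (a smoothed Heaviside is a common ingredient in level set methods), not the exact module from the paper you linked:

import numpy as np
import tensorflow as tf

def heaviside(x, eps=0.1):
    # Smoothed Heaviside, H_eps(phi) = 0.5 * (1 + (2/pi) * arctan(phi / eps)),
    # applied to the sigmoid output shifted so the 0.5 threshold maps to phi = 0.
    phi = x - 0.5
    return 0.5 * (1.0 + (2.0 / np.pi) * tf.math.atan(phi / eps))

inputs = tf.keras.layers.Input((512, 512, 1))
conv1 = tf.keras.layers.Conv2D(1, 1, activation='sigmoid')(inputs)
level_set = tf.keras.layers.Lambda(heaviside)(conv1)
model = tf.keras.Model(inputs=inputs, outputs=level_set)

# Quick check on dummy data: the output keeps the (batch, 512, 512, 1) shape.
dummy = np.zeros((1, 512, 512, 1), dtype=np.float32)
print(model(dummy).shape)

Because everything inside the Lambda is built from differentiable TensorFlow ops, gradients still flow back through the transformation into the U-Net, so you can compute your loss on the transformed output as you described.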