
I am currently working on a change detection project for my university course and I am stuck writing a custom loss function. I know I have to use a function closure to be able to use data from other layers of the model, but I don't know enough TensorFlow/Keras to write efficient code.

The loss function equation

This is the modified cross-entropy loss equation that I'm trying to turn into code. The loss needs the matrix W, which I have to calculate from the inputs to the model, X1 and X2. At the moment I have this:
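The equation image did not come through, but reading it back from the code below, the per-image loss appears to be (my reconstruction, not the original figure):

```latex
L(y, \hat{y}) = -\frac{1}{N} \sum_{i,j} W_{ij} \left[ \beta^{-} \, y_{ij} \log \hat{y}_{ij} + \beta^{+} \, (1 - y_{ij}) \log(1 - \hat{y}_{ij}) \right]
```

where N is the number of pixels (224 × 224), β⁻ is the fraction of changed pixels in the ground-truth mask, β⁺ = 1 − β⁻, and W is the change magnitude matrix computed from the two input images.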

def cmg_loss(X1, X2):
    def loss(y_true, y_pred):
        # Keep the closed-over tensors under local names: rebinding X1/X2
        # inside this function would make them local variables and raise
        # UnboundLocalError before their first use.
        if X1.shape[0] is None:
            x1 = tf.reshape(X1, (224, 224, 3))
            x2 = tf.reshape(X2, (224, 224, 3))
            cmm = [get_cmm(x1, x2)]
        else:
            cmm = [get_cmm(X1[i], X2[i]) for i in range(X1.shape[0])]

        N_val = y_true.shape[0]
        if N_val is None:
            # Batch size unknown at trace time: score the first sample.
            return get_cmgloss(y_true[0], y_pred[0], cmm[0])

        # get_cmgloss already returns the negated, pixel-normalised loss,
        # so the batch loss is simply the mean over the samples. (The
        # original multiplied by -1 inside the loop, which flipped the
        # sign of the running sum on every iteration.)
        total = tf.convert_to_tensor(0.0)
        for i in range(N_val):
            total = tf.math.add(total, get_cmgloss(y_true[i], y_pred[i], cmm[i]))
        return tf.math.divide(total, tf.cast(N_val, tf.float32))
    return loss
    
def get_cmgloss(y_true, y_pred, W):
    y_true = tf.cast(y_true, dtype=tf.float32)
    y_pred = tf.cast(y_pred, dtype=tf.float32)
    # Clip predictions away from 0 and 1 so the logs stay finite.
    y_pred = tf.clip_by_value(y_pred, 1e-7, 1.0 - 1e-7)
    # Class-balance weights from the ground-truth mask (the original passed
    # y_pred, but counting pixels that exactly equal 1 only makes sense on
    # the ground truth). findbetaminus returns float32, so the .astype
    # calls -- a NumPy method that tensors don't have -- are gone.
    betaminus = findbetaminus(y_true)
    betaplus = 1.0 - betaminus
    N = tf.convert_to_tensor(y_true.shape[0] * y_true.shape[1], dtype=tf.float32)

    # Scalars broadcast against the (224, 224) maps, so no tf.fill needed.
    first_term = betaminus * y_true * tf.math.log(y_pred)
    second_term = betaplus * (1.0 - y_true) * tf.math.log(1.0 - y_pred)
    loss = -tf.math.reduce_sum(W * (first_term + second_term))
    return tf.math.divide(loss, N)
    
def findbetaminus(gt):
    # Fraction of changed (== 1) pixels in the mask, as float32.
    count_1 = tf.math.count_nonzero(gt == 1, dtype=tf.float32)
    size = tf.cast(tf.size(gt), tf.float32)
    return count_1 / size

def get_cmm(x1, x2):
    # Change magnitude map: per-pixel Euclidean distance over the 3 bands.
    b1_diff_sq = tf.math.squared_difference(x1[:, :, 0], x2[:, :, 0])
    b2_diff_sq = tf.math.squared_difference(x1[:, :, 1], x2[:, :, 1])
    b3_diff_sq = tf.math.squared_difference(x1[:, :, 2], x2[:, :, 2])
    cmm = tf.math.sqrt(b1_diff_sq + b2_diff_sq + b3_diff_sq)

    # Normalise to [0, 1] and floor every pixel at the mean value;
    # scalars broadcast, so the tf.fill matrices are unnecessary.
    cmm_bar = tf.math.divide(cmm, tf.reduce_max(cmm))
    mean_cmm_bar = tf.reduce_mean(cmm_bar)
    return tf.where(cmm_bar < mean_cmm_bar, mean_cmm_bar, cmm_bar)

It would be a great help if you could guide me on how to develop a loss function that makes use of data from other layers and also calls multiple helper functions in its computation.

1 Answer


If you want to use more advanced loss functions, you will have to use tf.GradientTape and write the train step yourself instead of using the fit method. You can find many examples on the web and in the TensorFlow documentation. This requires a little more work, but it is much more powerful: your custom Model's call method can return a list of tensors, and you then compute the losses you want from them and choose which parameters are updated.
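A minimal sketch of that pattern (the toy model and the squared-error stand-in are illustrative, not your cmg_loss):

```python
import tensorflow as tf

# Toy model; in your setting this would be the change detection network.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam()

@tf.function
def train_step(x, y):
    # Because we run the forward pass ourselves, any tensor (the raw
    # inputs, intermediate layer outputs) is available to the loss.
    with tf.GradientTape() as tape:
        y_pred = model(x, training=True)
        loss = tf.reduce_mean(tf.square(y - y_pred))  # stand-in loss
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

You then call `train_step(x_batch, y_batch)` inside your own epoch/batch loop instead of calling `model.fit`.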

elbe