
I am struggling to add an additional constraint to my loss function (Keras, TensorFlow).

My original loss function is:

  self.__loss_fn = tf.reduce_mean(
            tf.square(self.__psiNNy
                      - tf.matmul(self.__psiNNx, self.__K)))

The additional constraint is meant to impose unitarity (K.T K = 1). So my new loss function looks like:

  self.__loss_fn = (tf.reduce_mean(
            tf.square(self.__psiNNy
                      - tf.matmul(self.__psiNNx, self.__K)))
        + tf.multiply(alpha, tf.matmul(tf.transpose(self.__K), self.__K) - 1))

where alpha is a penalty coefficient.

Running the code, instead of a single value for the loss, it gives an array:

Epoch -  0  Loss -  [[-0.3633499  -1.2530719  -1.29390422 ... -0.90075779 -0.81838405
  -0.94197399]
 [-1.2530719  14.31707269 14.78048348 ... -5.04269215 -5.24336678
  -0.27613182]
 [-1.29390422 14.78048348 15.89136624 ... -5.83845412 -6.28395005
  -0.08354599]
 ...
 [-0.90075779 -5.04269215 -5.83845412 ...  1.25852317  0.25653466
  -0.60421091]
 [-0.81838405 -5.24336678 -6.28395005 ...  0.25653466  5.08378911
  -4.45022781]
 [-0.94197399 -0.27613182 -0.08354599 ... -0.60421091 -4.45022781
   2.03832155]]  LR -  0.0001 Time -  1.472019910812378
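For context, the array appears because `tf.matmul(tf.transpose(K), K) - 1` is itself a matrix, so adding it to the scalar mean broadcasts the whole loss expression to matrix shape. A minimal standalone reproduction of that effect (the values here are illustrative, not taken from the original code):

```python
import tensorflow as tf

# A small stand-in for self.__K (illustrative only).
K = tf.constant([[1.0, 0.0], [0.0, 2.0]])

scalar_part = tf.reduce_mean(tf.square(K))      # scalar: shape ()
penalty = tf.matmul(tf.transpose(K), K) - 1.0   # matrix: shape (2, 2)

loss = scalar_part + 0.1 * penalty              # broadcasts to shape (2, 2)
print(loss.shape)  # (2, 2) -- the "loss" is a matrix, not a scalar
```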

I hope that you can help

Carlos

1 Answer


A unitary matrix should multiply to the identity matrix, not to the scalar one. Also, you are in a matrix space, so you need a norm to express "distance to the identity matrix".

In other words

loss(K) = 1/N^2 * || K'K - I ||_F^2
tf.reduce_mean(tf.square(tf.matmul(tf.transpose(self.__K), self.__K) - tf.eye(N)))

where N is the number of columns of K (so that K'K is N x N), and ||.||_F is the Frobenius norm, which `tf.reduce_mean(tf.square(...))` averages over all N^2 entries.
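Putting it together, the full loss with the corrected (scalar) penalty might look like this. This is a sketch, not the asker's actual code: `unitarity_loss` and the example shapes are assumptions; `alpha`, `psiNNy`, `psiNNx`, and `K` play the roles of the question's variables.

```python
import tensorflow as tf

def unitarity_loss(psiNNy, psiNNx, K, alpha):
    """Reconstruction loss plus a scalar penalty pushing K towards unitarity."""
    n = K.shape[1]  # number of columns of K
    recon = tf.reduce_mean(tf.square(psiNNy - tf.matmul(psiNNx, K)))
    # Mean-squared Frobenius distance of K'K from the identity -- a scalar.
    penalty = tf.reduce_mean(tf.square(
        tf.matmul(tf.transpose(K), K) - tf.eye(n)))
    return recon + alpha * penalty

# Example: an orthogonal K gives zero penalty (shapes are illustrative).
K = tf.constant([[0.0, 1.0], [1.0, 0.0]])
x = tf.random.normal([5, 2])
y = tf.matmul(x, K)
loss = unitarity_loss(y, x, K, alpha=0.5)
print(float(loss))  # 0.0 -- both terms vanish for this exactly orthogonal K
```

Because both terms are now scalars, the training loop will print a single loss value per epoch instead of a matrix.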

lejlot