
I have a supervised learning problem that I am solving with the Keras functional API.

As this model is predicting the state of a physical system, I know the supervised model should follow additional constraints.

I would like to add those constraints as an additional loss term that penalizes the model for predictions that violate them. Unfortunately, the number of training examples for the supervised learning problem is much larger than the number of constraint examples.

Basically, I am trying to do this:

[Image: model summary]

Minimizing both the supervised learning error and the constraint error as an auxiliary loss.
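
Roughly, I am picturing something like the sketch below: a shared trunk, a main supervised output, and an auxiliary output scored against the constraint, with the auxiliary loss down-weighted so it acts as regularization. (All layer sizes, shapes, and names here are placeholders, not my actual model.)

```python
from keras.models import Model
from keras.layers import Input, Dense

# Placeholder shapes and sizes -- my real model differs.
state_in = Input(shape=(16,), name="state")
h = Dense(64, activation="relu")(state_in)
h = Dense(64, activation="relu")(h)

# Main supervised output predicting the physical state.
prediction = Dense(4, name="prediction")(h)
# Auxiliary output whose target encodes the physical constraint
# (e.g. a conserved quantity the prediction should satisfy).
constraint = Dense(1, name="constraint")(h)

model = Model(inputs=state_in, outputs=[prediction, constraint])
model.compile(
    optimizer="adam",
    loss={"prediction": "mse", "constraint": "mse"},
    # Down-weight the constraint loss so it acts as a regularizer.
    loss_weights={"prediction": 1.0, "constraint": 0.1},
)
```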

I do not believe that alternating training batches between the two datasets will be successful, because each gradient step would only capture the error of one problem at a time, whereas I really want the physical constraint to act as regularization on the supervised learning task. (If I am incorrect in my interpretation, please let me know).
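
What I think I need instead is for every batch to contain examples from both datasets, with per-output sample weights masking out the target that a given example does not have, so that a single gradient step reflects both losses. A sketch of that idea, reusing the placeholder model above with made-up array shapes:

```python
import numpy as np

# Placeholder data: a large supervised set and a small constraint set.
X_sup, y_sup = np.random.rand(10000, 16), np.random.rand(10000, 4)
X_con, y_con = np.random.rand(200, 16), np.random.rand(200, 1)

X = np.concatenate([X_sup, X_con])

# Every example needs a target for every output, so pad the missing
# targets with zeros and mask them out of the loss with zero weights.
y_prediction = np.concatenate([y_sup, np.zeros((len(X_con), 4))])
y_constraint = np.concatenate([np.zeros((len(X_sup), 1)), y_con])

w_prediction = np.concatenate([np.ones(len(X_sup)), np.zeros(len(X_con))])
w_constraint = np.concatenate([np.zeros(len(X_sup)), np.ones(len(X_con))])

model.fit(
    X,
    {"prediction": y_prediction, "constraint": y_constraint},
    sample_weight={"prediction": w_prediction, "constraint": w_constraint},
    batch_size=128,
    epochs=10,
)
```

The catch: because the supervised set is so much larger, a randomly shuffled batch would rarely contain any constraint examples at all.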

I know this could be implemented in pure TensorFlow or Theano, but I am hesitant to leave the Keras ecosystem that makes everything else so convenient. If anybody knows how to train a model with batch sizes that vary across inputs, I'd really appreciate the help.
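
The closest workaround I can think of inside Keras is a generator that assembles each batch from a fixed number of examples from each set, oversampling the small constraint set with replacement (again using the placeholder names from the sketches above):

```python
def mixed_batches(batch_size=128, n_con=16):
    # Each batch holds n_con constraint examples (sampled with
    # replacement, i.e. oversampled) plus batch_size - n_con
    # supervised examples, so every gradient step sees both losses.
    n_sup = batch_size - n_con
    while True:
        i = np.random.randint(0, len(X_sup), n_sup)
        j = np.random.randint(0, len(X_con), n_con)
        Xb = np.concatenate([X_sup[i], X_con[j]])
        yb_pred = np.concatenate([y_sup[i], np.zeros((n_con, 4))])
        yb_con = np.concatenate([np.zeros((n_sup, 1)), y_con[j]])
        wb_pred = np.concatenate([np.ones(n_sup), np.zeros(n_con)])
        wb_con = np.concatenate([np.zeros(n_sup), np.ones(n_con)])
        yield Xb, [yb_pred, yb_con], [wb_pred, wb_con]

model.fit_generator(
    mixed_batches(),
    steps_per_epoch=len(X_sup) // 112,  # ~one pass over the supervised set
    epochs=10,
)
```

Whether oversampling the constraint set that heavily is statistically sound is part of what I am unsure about.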

thorbjorn444
