I am working on a regression problem where the predicted value must be a positive integer. One approach could be to just train a model, make predictions, and round the predicted values. However, I want to try a different approach of modifying the loss function. I tried this in Keras like so:
    import tensorflow as tf
    from tensorflow.keras import backend as K

    def my_custom_loss_fn(y_actual, y_predicted):
        # Round predictions to the nearest integer before computing the loss
        y_predicted_rounded = K.round(y_predicted)
        custom_loss_value = K.sqrt(tf.keras.losses.mean_squared_error(y_actual, y_predicted_rounded))
        return custom_loss_value
which throws an error: No gradients provided for any variable: ... This is most likely because K.round has no defined gradient, so backpropagation cannot pass through it.
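The missing gradient can be confirmed directly. As a minimal sketch (assuming TensorFlow 2.x), the Round op is registered as non-differentiable, so GradientTape returns None for it:

    import tensorflow as tf

    x = tf.Variable([1.3, 2.7])
    with tf.GradientTape() as tape:
        y = tf.round(x)  # Round is registered as non-differentiable in TF

    # No gradient is defined for Round, so this is None -- which is what
    # triggers the "No gradients provided for any variable" error in Keras.
    print(tape.gradient(x=x, target=y))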
My question is: is there another elegant way, or even a different framework (like XGBoost), where I could modify the loss function such that the loss is the root mean squared error between y_actual and the rounded y_predicted?
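One possible workaround, sketched here as an assumption rather than a definitive answer, is a straight-through estimator: use the rounded value in the forward pass, but let gradients flow as if rounding were the identity. The function name rounded_rmse is hypothetical:

    import tensorflow as tf

    def rounded_rmse(y_actual, y_predicted):
        # Straight-through rounding: the forward value equals
        # tf.round(y_predicted), but tf.stop_gradient hides the
        # non-differentiable round from backprop, so the gradient
        # flows through y_predicted unchanged.
        y_rounded = y_predicted + tf.stop_gradient(
            tf.round(y_predicted) - y_predicted)
        # RMSE between the actual values and the rounded predictions
        return tf.sqrt(tf.reduce_mean(tf.square(y_actual - y_rounded)))

Passing loss=rounded_rmse to model.compile should then train without the "No gradients provided" error, since every variable receives a (surrogate) gradient.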