I was considering using a perceptron-like neural network to solve my problem. I have a dataset that, for the sake of simplicity, looks like this:
| id | entryWoodLength | entryWoodThickness | cuttingToolPos1 | cuttingToolPos2 | exitWoodLength | exitWoodThickness |
|----|-----------------|--------------------|-----------------|-----------------|----------------|-------------------|
| 1  | 5.5             | 1.6                | 2.1             | 2.2             | 4.2            | 1.6               |
| 2  | 5.7             | 1.5                | 2.2             | 2.6             | 4.2            | 1.5               |
| 3  | 6.5             | 1.8                | 2.6             | 2.7             | 4.3            | 1.6               |
| 4  | 5.9             | 1.7                | 2.4             | 2.9             | 4.2            | 1.5               |
| 5  | 5.8             | 1.5                | 2.2             | 2.6             | 4.1            | 1.5               |
And I had the thought of trying a feedforward neural network where the input would be the wood dimensions (`entryWoodLength` and `entryWoodThickness`) and the output would be the positions of the cutting tools (`cuttingToolPos1` and `cuttingToolPos2`). We already know what the ideal dimensions of the exit wood should be (say, 4.2 for length and 1.5 for thickness). So we would want the network to optimize itself on the measured values of the exit wood (`exitWoodLength` and `exitWoodThickness`). That means taking the MSE of `exitWoodLength` and `exitWoodThickness` against the reference values of 4.2 and 1.5, something like this:
```python
mean_squared_error(exitWoodLength, 4.2) + mean_squared_error(exitWoodThickness, 1.5)
```
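For context, the network itself would just be a small feedforward model, something like this (the layer sizes here are placeholders, not something I have settled on):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Two inputs (entry wood dimensions) -> two outputs (cutting tool positions)
model = Sequential([
    Dense(16, activation="relu", input_shape=(2,)),
    Dense(16, activation="relu"),
    Dense(2),  # cuttingToolPos1, cuttingToolPos2
])
```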
However, Keras only allows custom loss functions that work with the `y_true` and `y_pred` arguments, which in our case would be `cuttingToolPos1` and `cuttingToolPos2`, not the values we want the loss to be computed on. I was thinking of using a closure and simply ignoring the `y_true` and `y_pred` arguments, something along the lines of:
```python
from tensorflow.keras.losses import mean_squared_error

def custom_loss(exitWoodLength, exitWoodThickness):
    # Close over the measured exit dimensions; the arguments Keras passes in are ignored.
    def loss(y_true, y_pred):
        return mean_squared_error(exitWoodLength, 4.2) + mean_squared_error(exitWoodThickness, 1.5)
    return loss
```
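I would then compile and train with something like the following (a sketch; `exit_length` and `exit_thickness` would be the exit-dimension columns of my dataset, and `X_train`/`y_train` the entry dimensions and tool positions):

```python
model.compile(optimizer="adam",
              loss=custom_loss(exit_length, exit_thickness))
model.fit(X_train, y_train, epochs=50)  # y_train = cutting tool positions
```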
But I am worried about the indexing, and about whether this is feasible at all.
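To make the indexing worry concrete: if I instead packed the exit measurements into `y_true` alongside the tool positions, the loss would have to slice them back out by column index, something like this (a sketch, assuming a column order of `[cuttingToolPos1, cuttingToolPos2, exitWoodLength, exitWoodThickness]`):

```python
import tensorflow as tf

def loss(y_true, y_pred):
    # Columns 2 and 3 hold the measured exit dimensions (assumed ordering).
    exit_length = y_true[:, 2]
    exit_thickness = y_true[:, 3]
    return (tf.reduce_mean(tf.square(exit_length - 4.2))
            + tf.reduce_mean(tf.square(exit_thickness - 1.5)))
```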
Has anyone experienced something similar? Am I on the right path, or is using a neural network the wrong approach here entirely?