I have a trained neural network that maps my inputs to my outputs reasonably well. Is it then possible to specify a desired output y and use a gradient descent method to find the input values that best produce that output?
In backpropagation, the partial derivative of the error function with respect to each weight is used to adjust the weights proportionally; is there a way to do something similar with the input values themselves, given a target y value?
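To make the idea concrete, here is a minimal sketch of what I have in mind, assuming a PyTorch model; the network architecture, dimensions, target values, and learning rate are just placeholders, not my actual setup:

```python
import torch
import torch.nn as nn

# Stand-in for an already-trained network (hypothetical architecture)
model = nn.Sequential(nn.Linear(3, 16), nn.Tanh(), nn.Linear(16, 2))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)          # keep the trained weights fixed

y_target = torch.tensor([0.5, -1.0])      # desired output y (placeholder values)
x = torch.zeros(3, requires_grad=True)    # the input we optimise instead of the weights
optimizer = torch.optim.SGD([x], lr=0.1)  # plain gradient descent on x

for step in range(1000):
    optimizer.zero_grad()
    loss = torch.sum((model(x) - y_target) ** 2)  # error between f(x) and the target
    loss.backward()                               # backprop gives d(loss)/dx
    optimizer.step()                              # adjust the inputs, not the weights

print(x.detach(), model(x).detach())
```

Is this a sound approach, or are there known pitfalls (e.g. multiple inputs mapping to the same output, or the optimisation getting stuck)?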