I am faced with the following problem:
I have to build an FFNN that approximates an unknown function f: R^2 -> R^2. The only data I have to check the net against are one-dimensional, i.e. a vector of values in R. However, I do know the function g: R^2 -> R that maps the output of the net into the space of my data, so I would use the neural network as a filter against bias in the data. But I am faced with two problems:
Firstly, how can I train my network in this setting, where I never observe its R^2 output directly but only the scalar values after applying g?
Secondly, I am thinking about adding an extra layer that maps R^2 -> R, letting the net train itself to find the correct maps, and then removing that extra layer afterwards. Would this procedure be correct? That is, would the output of the remaining layers be the approximation of f that I am looking for?
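To make the setup concrete, here is a minimal sketch of what I mean by training "through" the known g (PyTorch purely for illustration; the architecture, the placeholder g, and the random data are stand-ins for my real ones). The known g is applied to the network's R^2 output before the loss, so the scalar observations drive training, and after training the network alone gives the R^2 output:

```python
import torch
import torch.nn as nn

# Placeholder for the known map g: R^2 -> R. Substitute the real g here;
# it must be differentiable (or expressible with autograd ops) so that
# gradients can flow back into the network.
def g(u):                              # u: (batch, 2) -> (batch, 1)
    return u[:, 0:1] * torch.sin(u[:, 1:2])

# Feed-forward network approximating the unknown f: R^2 -> R^2
# (layer sizes are arbitrary here).
f_net = nn.Sequential(
    nn.Linear(2, 32),
    nn.Tanh(),
    nn.Linear(32, 32),
    nn.Tanh(),
    nn.Linear(32, 2),
)

# Synthetic stand-ins for the real data set:
# inputs x in R^2 and scalar observations y (here just random numbers).
x = torch.randn(256, 2)
y = torch.randn(256, 1)

opt = torch.optim.Adam(f_net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(1000):
    opt.zero_grad()
    u = f_net(x)                # network output in R^2 (quantity of interest)
    y_hat = g(u)                # map it into the data space R
    loss = loss_fn(y_hat, y)    # compare with the 1-D observations
    loss.backward()             # gradients flow through g into the network
    opt.step()

# After training, use f_net on its own: its R^2 output is the approximation
# of f, without the final g stage.
with torch.no_grad():
    prediction_in_R2 = f_net(torch.tensor([[0.3, -1.2]]))
```

Note that in this sketch the final R^2 -> R stage is the fixed, known g rather than a learned extra layer that is removed afterwards, which is exactly the difference my second question is about.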