A function approximation algorithm requires you to make a few assumptions about how your mathematical model behaves.
If you look at it from a black-box point of view, three scenarios can occur -
X -> MODEL -> Y
- You have X and the MODEL, but you don't have Y; this is simulation.
- You have the MODEL and Y, but you don't have X; this is optimization.
- You have X and Y, but you don't have the MODEL; this is mathematical modelling.
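To make the first two scenarios concrete, here is a minimal sketch in Python. The toy model y = 3x + 2 and the target value 17 are just assumptions for illustration; the optimization case uses SciPy's scalar root finder to recover the X that produces a given Y:

```python
from scipy.optimize import brentq

# Hypothetical black-box model, assumed for illustration: y = 3x + 2
def model(x):
    return 3.0 * x + 2.0

# First scenario (simulation): we have X and the MODEL, so we just compute Y.
y = model(5.0)  # -> 17.0

# Second scenario (optimization): we have the MODEL and a target Y,
# and we search for the X that produces it.
x_found = brentq(lambda x: model(x) - 17.0, -100.0, 100.0)  # -> 5.0
print(y, x_found)
```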
However, there is a catch: you can NEVER do the third one directly. Instead, you use a trick to reframe the third scenario as the second (an optimization problem). The trick is to assume that your model has some form, say y = mx + c, and then instead of finding the model you find the new inputs m and c. Thus, we can instead say -
- You have the (MODEL, X) and Y, but you don't have (m, c), the new inputs; this is optimization as well.
(m, c) -> (MODEL + X) -> Y
This means that even if you don't know the input function, you have to assume some model and then estimate its parameters, which, when tuned, make the model behave as close to the input function as possible.
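Here is a minimal sketch of that reframing, assuming a linear model y = mx + c and some noisy samples of the unknown function (the data, the true slope 3, intercept 2, and noise level are all made up for illustration). Note that the optimizer searches over the parameters (m, c), not over x:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical samples of the unknown function (in reality we would only see x and y)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=x.size)

# Assumed model form: y = m*x + c. The "new inputs" are the parameters (m, c).
def cost(params):
    m, c = params
    return np.sum((m * x + c - y) ** 2)  # squared error between model and data

result = minimize(cost, x0=[0.0, 0.0])  # optimize over (m, c), not over x
m_hat, c_hat = result.x                 # should land near (3, 2)
print(m_hat, c_hat)
```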
Basically, what you need is machine learning. You have the inputs, you have the outputs (or you can get them by running your first function on a large sample of inputs), and you have the cost function. Assume a model, and train it to approximate your input function.
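For example, the same fit can be written as an explicit training loop: a small gradient-descent sketch that repeatedly nudges (m, c) to reduce a mean-squared-error cost (the data and learning rate are again made up for illustration):

```python
import numpy as np

# Hypothetical samples of the function we want to approximate
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 100)
y = 3.0 * x + 2.0 + rng.normal(scale=0.05, size=x.size)

m, c = 0.0, 0.0  # arbitrary starting guess for the parameters
lr = 0.1         # learning rate

for _ in range(5000):
    err = m * x + c - y              # model prediction minus observed output
    grad_m = 2.0 * np.mean(err * x)  # gradient of the mean-squared-error cost w.r.t. m
    grad_c = 2.0 * np.mean(err)      # gradient w.r.t. c
    m -= lr * grad_m
    c -= lr * grad_c

print(m, c)  # should converge near (3, 2)
```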
If you are not sure what model to use, then use a universal function approximator, AKA a neural network. But beware: it needs a lot more data to train.
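As a rough sketch of what that could look like, here is a small multi-layer perceptron fitted with scikit-learn's MLPRegressor. The "first function" being sampled, the network size, and the sample count are all assumptions for illustration and would need tuning in practice:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical "first function" that we can evaluate but want to approximate
def first_function(x):
    return np.sin(3.0 * x) + 0.5 * x

# Generate training data by running the function on a large sample of inputs
X = np.linspace(-3.0, 3.0, 2000).reshape(-1, 1)
y = first_function(X).ravel()

# A small neural network used as a general-purpose function approximator
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(X, y)

# The trained network now stands in for the original function (approximately)
print(net.predict([[1.0]])[0], first_function(1.0))
```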