Let's say we have a fully connected network with 1 hidden layer. Call the input to the network $X$. Suppose now that there is a variable $Z$ on which the input depends, i.e. $X = g(Z, D)$, where $D$ is the training data available. $X$ is then fed to the network, so the output is $Y = f(X, W, b)$, where $W$ and $b$ are the network weights and biases.

In other words, the input of the network depends both on the training data and on a variable $Z$. Now, when writing the loss function in terms of $X$, the optimization clearly also depends on the value of $Z$, so the network will learn that variable too.
Does this make sense? Is this kind of model still a Neural Network in the general sense?
P.S.: $Z$ is a trainable variable of the model. The network runs (in TensorFlow) and the variable is actually being learnt; my question is more about the architecture-level/mathematical details of such a model.
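For concreteness, here is a minimal sketch of what I mean in TensorFlow. All shapes, the choice of $g$ (a simple sum), and the loss are just placeholder assumptions; the point is only that $Z$ is a `tf.Variable` alongside the network weights, so gradients flow to both:

```python
import tensorflow as tf

tf.random.set_seed(0)
D = tf.random.normal((32, 4))        # training data (fixed, not trainable)
Z = tf.Variable(tf.zeros((32, 4)))   # trainable input variable

def g(Z, D):
    # hypothetical dependence of the input on Z and D
    return Z + D

# fully connected network with one hidden layer: Y = f(X; W, b)
model = tf.keras.Sequential([
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

target = tf.random.normal((32, 1))
opt = tf.keras.optimizers.Adam(0.1)

for _ in range(100):
    with tf.GradientTape() as tape:
        X = g(Z, D)                  # input depends on both Z and D
        Y = model(X)
        loss = tf.reduce_mean((Y - target) ** 2)
    # optimize the network weights/biases AND Z jointly
    variables = model.trainable_variables + [Z]
    grads = tape.gradient(loss, variables)
    opt.apply_gradients(zip(grads, variables))
```

After training, `Z` has moved away from its initialization, i.e. it has been learnt together with the weights.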