
Let's say we have a fully connected network with one hidden layer, and let's call the input to the network X. Suppose now that there is a variable Z on which the input depends, i.e. X = g(Z, D), where D is the available training data. X is then fed to the network, so the output is Y = f(X; W, b), where W and b are the network weights and biases.

In other words, the input of the network depends both on the training data and on the trainable variable Z. When the loss function is written in terms of X, the optimization clearly also depends on the value of Z, so the network will learn that variable too.

Does this make sense? Is this kind of model still a neural network in the general sense?

P.S.: Z is a trainable variable of the model. The network runs (in TensorFlow) and the variable is actually being learnt; my question is more about the architecture-level/mathematical details of such a model.
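For concreteness, here is a minimal sketch of the setup (TensorFlow 2 style for brevity; the mapping `g`, the shapes, and the loss are placeholders for illustration, not my actual model):

```python
import tensorflow as tf

D = tf.random.normal((32, 10))       # stand-in training data
targets = tf.random.normal((32, 1))  # stand-in targets

Z = tf.Variable(0.5, name="Z")       # the extra trainable variable

def g(Z, D):
    # Hypothetical mapping from (Z, D) to the network input X;
    # a simple scaling, chosen only for illustration.
    return Z * D

hidden = tf.keras.layers.Dense(16, activation="relu")
out = tf.keras.layers.Dense(1)
optimizer = tf.keras.optimizers.Adam(0.01)

for step in range(100):
    with tf.GradientTape() as tape:
        X = g(Z, D)            # the input depends on Z and the data
        Y = out(hidden(X))     # Y = f(X; W, b)
        loss = tf.reduce_mean((Y - targets) ** 2)
    # Z receives gradients and is updated just like the weights W and biases b
    variables = [Z] + hidden.trainable_variables + out.trainable_variables
    grads = tape.gradient(loss, variables)
    optimizer.apply_gradients(zip(grads, variables))
```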

rob_med
  • I think you are not defining your equation properly: `Y = f(X, Z)`, where `X` is the input and `Z` are the network parameters; `X` is defined on the domain `D`, `f` is the network structure, and `Y` is the output. – Ishant Mrinal Aug 08 '17 at 08:15
  • That's the thing, this happens before the data get fed into the normal hidden layers. So `X = g(Z, D)` is the input that I then feed to a fully connected network, i.e. `Y = f(X; W, b)`, where `W` and `b` are the network weights and biases. – rob_med Aug 08 '17 at 08:22
  • You may want to include this information in your question; otherwise it's incomplete. – Ishant Mrinal Aug 08 '17 at 08:24
  • Does variable Z have some finite range? – Stepan Novikov Aug 08 '17 at 11:28
  • Sorry for the delay; not really, but g(Z, D) kind of maps it into an expected range. – rob_med Aug 30 '17 at 14:00

0 Answers