
Given a trained system, a network can be run backward with the output values and partial inputs to solve for the value of a missing input. Is there a name for this operation?

For example, take a trained XOR network with two input neurons (with values 1 and X) and one output neuron (with value 1). If someone wanted to find the value of the second input neuron, they could feed the information backwards and calculate that it would be close to 0. What exactly is this operation called?
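One way to realise the operation described above is to freeze the trained weights, fix the known input and the observed output, and adjust the unknown input until the network's output matches. This is a minimal sketch of that idea; the hand-picked weights standing in for a trained XOR network, the learning rate, and the iteration count are all assumptions for illustration, not from the question:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x1, x2):
    """Hand-weighted stand-in for a trained XOR network."""
    h1 = sigmoid(6 * x1 + 6 * x2 - 3)    # soft OR
    h2 = sigmoid(-6 * x1 - 6 * x2 + 9)   # soft NAND
    return sigmoid(6 * h1 + 6 * h2 - 9)  # soft AND of h1, h2 -> XOR

# Known quantities: first input is 1, observed output is 1.
x1, target = 1.0, 1.0
x2 = 0.5  # initial guess for the unknown second input
lr, eps = 0.1, 1e-6
for _ in range(2000):
    err = lambda v: (forward(x1, v) - target) ** 2
    # numerical gradient of the output error w.r.t. the unknown input
    grad = (err(x2 + eps) - err(x2 - eps)) / (2 * eps)
    x2 -= lr * grad

print(x2)  # ends up close to 0, matching the XOR example
```

Note that the weights are never touched; only the unknown input is optimised, and in general such an inversion can have multiple valid solutions.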

Beryllium

2 Answers


I think your issue is related to feature extraction. Feature selection is the process of selecting a subset of relevant features (variables, predictors) for use in model construction. This article is also related to your issue.

Ali Soltani

The Backwards Pass:

The goal of backpropagation is to update each of the weights in the network so that they bring the actual output closer to the target output, thereby minimising the error for each output neuron and for the network as a whole. I guess this is the step you wanted to know about.
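One such weight update can be sketched as follows; the single sigmoid neuron, squared-error loss, learning rate, and training example are all assumptions chosen for a minimal illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One sigmoid output neuron with two inputs and a bias,
# fitted to a single example by repeated backward passes.
w, b, lr = [0.5, -0.5], 0.0, 0.5
x, target = [1.0, 0.0], 1.0

for _ in range(500):
    # forward pass
    out = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
    # backward pass: chain rule on the squared error (out - target)^2
    delta = 2 * (out - target) * out * (1 - out)
    w = [wi - lr * delta * xi for wi, xi in zip(w, x)]
    b -= lr * delta

print(sigmoid(w[0] * x[0] + w[1] * x[1] + b))  # now close to the target 1
```

Note that this adjusts the weights toward a fixed target output, which is the opposite of what the question asks (holding the weights fixed and solving for an input).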

  • I'm not updating weights; certain values are assumed for the input neurons, allowing the missing one to be solved for. – Beryllium Sep 28 '17 at 16:55