Is the backpropagation algorithm an independent algorithm, or do we need other algorithms (such as Bayesian methods) along with it for neural network learning? And do we need any probabilistic approach to implement the backpropagation algorithm?

- It is an independent method. – janisz Jun 15 '16 at 08:32
- Could you please elaborate on what problem you are trying to solve exactly? – Dennis B. Jun 15 '16 at 08:33
- Actually, I was just implementing the backpropagation algorithm for neural network training, but I learned that I may need other algorithms along with backpropagation, such as backpropagation with Bayesian methods or k-means. My project is an expert system for predicting diabetes. Is it necessary to implement another algorithm along with backpropagation, or is backpropagation enough? – rozi Jun 16 '16 at 23:43
1 Answer
Backpropagation is just an efficient way of computing gradients in computational graphs. That's all. You do not have to use it (although computing gradients without it is extremely expensive), and what you do with the gradients is up to you - there are hundreds of ways to use them. The most common one is to run a first-order optimization technique (such as SGD, RMSProp or Adam). Thus, to address your question: backpropagation is enough if and only if your task is to compute a gradient. For learning a neural net you need at least one more piece - an actual learning algorithm (such as SGD, which is literally a single line of code). It is hard to say how "independent" it is from other methods because, as I said, gradients can be used everywhere.
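Since the question is about implementation, here is a minimal illustrative sketch (not taken from the answer; network size, data and learning rate are arbitrary choices) of that separation in plain NumPy: the backpropagation block only computes `grad_W1` and `grad_W2`, and learning happens in the two gradient-descent update lines at the end of the loop.

```python
# Minimal sketch: a tiny one-hidden-layer network on toy data, showing that
# backpropagation only *computes* gradients, while the one-line SGD update
# is the actual learning step.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4 samples, 3 features, binary targets (purely illustrative).
X = rng.normal(size=(4, 3))
y = np.array([[0.0], [1.0], [1.0], [0.0]])

# Parameters of a 3-4-1 network.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.1
for epoch in range(1000):
    # Forward pass.
    h = sigmoid(X @ W1)          # hidden activations
    out = sigmoid(h @ W2)        # network output

    # Backpropagation: the chain rule gives gradients of the squared error
    # with respect to W2 and W1. Nothing has been learned yet at this point.
    err = out - y
    d_out = err * out * (1 - out)            # dLoss/d(pre-activation of output)
    grad_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * h * (1 - h)       # dLoss/d(pre-activation of hidden)
    grad_W1 = X.T @ d_h

    # The learning algorithm (plain gradient descent / SGD) is this one line
    # per weight matrix -- the part that actually adjusts the weights.
    W2 -= learning_rate * grad_W2
    W1 -= learning_rate * grad_W1

print("final predictions:", sigmoid(sigmoid(X @ W1) @ W2).ravel())
```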

- I am getting confused here. Isn't applying the gradient the same as adjusting the weights, and isn't adjusting the weights learning? – rozi Jun 20 '16 at 11:29
- Backpropagation **is not** adjusting weights. Backpropagation is supposed only to compute gradients, not **apply** them. If you are adjusting weights, it means that you are already using some learning scheme on top, probably SGD (if you just update weights in the form new_w_i -= learning_rate * gradient_i; backpropagation as such is supposed only to compute gradient_i). – lejlot Jun 20 '16 at 11:37
- So the delta rule and backpropagation are two different entities? – rozi Jun 20 '16 at 11:47
- Yes. The delta rule is another name (although using this name is highly misleading, as it suggests some "special method", while it is just a direct application of one of the oldest optimization techniques) for the simple **gradient descent** update, often applied together with BP. Backpropagation is the term used for the linear-time method of computing the gradients w.r.t. all weights in a multilayer neural network (in general, a directed computational graph); a small sketch after this thread illustrates the distinction. – lejlot Jun 20 '16 at 12:53
- That means that gradient descent is the algorithm we can use for the whole learning process? – rozi Jun 28 '16 at 06:06
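To make the last few comments concrete, here is a small illustrative sketch (my own, not from the thread; data and learning rate are arbitrary): for a single linear unit, the delta rule update is exactly a gradient-descent step on the squared error, with the gradient being the trivial one-layer case of backpropagation.

```python
# Illustrative sketch of the point made in the comments: the "delta rule"
# update for a single linear unit is just gradient descent on the squared
# error, using the gradient that (one-layer) backpropagation would supply.
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data: the target is a fixed linear function of the inputs.
X = rng.normal(size=(50, 3))
true_w = np.array([1.5, -2.0, 0.5])
t = X @ true_w

w = np.zeros(3)
learning_rate = 0.05
for _ in range(200):
    for x_i, t_i in zip(X, t):
        y_i = w @ x_i                      # forward pass of a linear unit
        grad = (y_i - t_i) * x_i           # gradient of 0.5*(y - t)^2 w.r.t. w
        w -= learning_rate * grad          # delta rule == gradient descent step

print("learned weights:", w)               # should approach true_w
```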