Questions tagged [backpropagation]

Backpropagation is a method for computing gradients, often used in artificial neural networks to perform gradient descent. It led to a “renaissance” in the field of artificial neural network research.

In most cases, it requires a teacher that knows, or can calculate, the desired output for any input in the training set. The term is an abbreviation of "backward propagation of errors".
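The description above can be sketched in code. The following is a minimal illustrative implementation, not a reference one: a 2-4-1 sigmoid network trained on XOR with plain backpropagation and squared-error loss, where the desired outputs `D` play the role of the "teacher". The architecture, learning rate, and epoch count are arbitrary choices for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor(epochs=5000, lr=1.0, seed=0):
    """Train a 2-4-1 sigmoid network on XOR with plain backpropagation."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    D = np.array([[0], [1], [1], [0]], dtype=float)  # teacher signal
    W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)  # input -> hidden
    W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)  # hidden -> output
    losses = []
    for _ in range(epochs):
        # Forward pass.
        h = sigmoid(X @ W1 + b1)
        y = sigmoid(h @ W2 + b2)
        losses.append(float(np.mean((y - D) ** 2)))
        # Backward pass: propagate the output error toward the input.
        delta2 = (y - D) * y * (1 - y)          # output-layer error terms
        delta1 = (delta2 @ W2.T) * h * (1 - h)  # hidden-layer error terms
        # Gradient-descent updates.
        W2 -= lr * h.T @ delta2; b2 -= lr * delta2.sum(axis=0)
        W1 -= lr * X.T @ delta1; b1 -= lr * delta1.sum(axis=0)
    return y, losses

if __name__ == "__main__":
    y, losses = train_xor()
    print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The "backward propagation of errors" is the two `delta` lines: the output error is pushed back through the weights to assign an error term to each hidden neuron.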

1267 questions
-1
votes
1 answer

Strange results from a neural network in Python

I followed an article here: TowardsDataScience. I wrote out the math equations for the network, and everything made sense. However, after writing the code, the results are pretty strange, as if it is always predicting the same class... I spent a lot of time on it,…
-1
votes
1 answer

Backpropagation Cost Function Error Increases instead of Decreasing

I am new to Python and machine learning. Can someone please let me know what the problem is in my implementation of the ANN backpropagation algorithm? The error values seem to be increasing instead of decreasing. The code is as follows. As can be seen in…
-1
votes
1 answer

How to increase accuracy of network running on MNIST

I followed this code: https://github.com/HyTruongSon/Neural-Network-MNIST-CPP It is quite easy to understand. It produces 94% accuracy. I have to convert it to a network with deeper layers (ranging from 5 to 10). In order to make myself comfortable,…
-1
votes
1 answer

Computation of weights in a neural network

How do I compute the weights of a neural network by hand if I have the training samples (X) and desired outputs (D), there is one node in the output layer, and sign is the activation function in the hidden layer as well as in the output…
-1
votes
1 answer

Does the sigmoid function affect the slowdown for weights not connected to the output layer when using the cross-entropy function?

I've been reading about error functions for neural nets on my own. http://neuralnetworksanddeeplearning.com/chap3.html explains that using the cross-entropy function avoids slowdown (i.e. the network learns faster if the predicted output is far from the…
B2VSi
  • 161
  • 1
  • 6
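The effect this question asks about can be seen numerically. The sketch below, with illustrative values, compares the output-layer gradient under squared error against cross-entropy for a single saturated sigmoid neuron; the claim that cross-entropy avoids the slowdown applies to the output layer, where the sigmoid-derivative factor cancels.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A single sigmoid output neuron that is badly wrong AND saturated:
# target t = 1, but the pre-activation z is very negative, so y ≈ 0.
z, t = -5.0, 1.0
y = sigmoid(z)

# dL/dz for squared error: (y - t) * sigma'(z). The sigma'(z) = y(1-y)
# factor is tiny when the unit saturates, so learning slows down.
grad_mse = (y - t) * y * (1 - y)

# dL/dz for cross-entropy: (y - t). The sigma'(z) factor cancels, so
# the gradient stays large exactly when the prediction is far off.
grad_xent = y - t

print(abs(grad_mse), abs(grad_xent))
```

Note that for weights deeper in the network (the case the title asks about), the backpropagated error still gets multiplied by the sigmoid derivatives of the intermediate layers, so cross-entropy only removes the saturation factor at the output layer.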
-1
votes
2 answers

What does it mean when training and validation accuracy are 1.000 but results are still poor?

I am using Keras to perform landmark detection - specifically, locating parts of the body in a picture of a human. I have gathered around 2,000 training samples and am using RMSprop with an MSE loss function. After training my CNN, I am left with loss:…
-1
votes
1 answer

Backpropagation - how the neuron error influences the net parameters (weights, biases)

While I am trying to understand the deep learning flow, I cannot figure out one detail: once I have an error for every neuron (in the backpropagation flow), what should I do next with all those errors? The calibration of the model is about adjusting the…
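The step the question is missing can be shown in a few lines. Once backpropagation has produced an error term (delta) for every neuron, each weight's gradient is the delta of the neuron the weight feeds into times the activation of the neuron it comes from. The numbers below are illustrative, not from any real network.

```python
import numpy as np

lr = 0.1
a_prev = np.array([0.2, 0.9, 0.5])  # activations of the previous layer
delta = np.array([0.3, -0.1])       # error terms of the current layer
W = np.zeros((3, 2))                # weights: previous layer -> current layer
b = np.zeros(2)

# Gradient-descent update: dL/dW[i, j] = a_prev[i] * delta[j], dL/db = delta.
W -= lr * np.outer(a_prev, delta)
b -= lr * delta
print(W)
```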
-1
votes
1 answer

Why can we not optimize a neural network model with two cost functions in series?

I'm trying to implement a neural network in which I want to optimize two cost functions. Could you please let me know your thoughts about an approach that does the following: for i in it ... min lose_1 // modified the weight matrix W…
-1
votes
1 answer

Decrementing the learning rate in the error backpropagation algorithm

This is a more or less general question. In my implementation of the backpropagation algorithm, I start from some "big" learning rate and then decrease it after I see the error start to grow instead of narrowing down. I am able to do this rate…
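The schedule described in this question can be sketched as follows. This is one possible reading of the approach, with illustrative decay factor and error values, not the asker's actual code.

```python
def adjust_lr(lr, prev_error, curr_error, decay=0.5, min_lr=1e-6):
    """Shrink the learning rate whenever the training error grows."""
    if curr_error > prev_error:       # error grew -> the step was too big
        lr = max(lr * decay, min_lr)  # halve the rate, but keep a floor
    return lr

lr = 1.0                              # start from a "big" learning rate
errors = [0.9, 0.7, 0.8, 0.6, 0.65]   # hypothetical per-epoch errors
for prev, curr in zip(errors, errors[1:]):
    lr = adjust_lr(lr, prev, curr)
print(lr)  # halved twice (epochs where the error grew): 0.25
```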
-1
votes
1 answer

How to reinforce the forward propagation signal in a deep learning network?

I asked a question earlier at Matconvnet output of deep network's matrix is uniform valued instead of varying values? As I debugged the deep network for density estimation, I realized the signal towards the output dies out/fades. How can I…
h612
  • 544
  • 2
  • 11
-1
votes
1 answer

Can you use just backpropagation to teach a Neural Network to play turn based games?

What I mean is games like chess, draughts, tic-tac-toe, 2048, or Super Mario; in general, games that require multiple moves to complete. I'm pretty sure one could use genetic algorithms, but I'd like to know if there's a way to train it with…
YoDevil
  • 41
  • 8
-1
votes
1 answer

Is back-propagation outdated?

Back in the day at university (around 2011-2012) I was introduced to backpropagation as the state of the art in training feed-forward artificial neural networks. In the examples for Tensorflow I have seen modern spins on gradient descent (e.g.…
Make42
  • 12,236
  • 24
  • 79
  • 155
-1
votes
1 answer

Explaining the backpropagation algorithm in Bishop's code

I've recently completed Professor Ng's Machine Learning course on Coursera, but I have some problems understanding the backpropagation algorithm, so I tried to read Bishop's code for backpropagation using the sigmoid function. I searched and found clean…
-1
votes
1 answer

Is the backpropagation algorithm an independent algorithm?

Is the backpropagation algorithm an independent algorithm, or do we need other algorithms, such as Bayesian methods, along with it for neural network learning? And do we need any probabilistic approach for implementing the backpropagation algorithm?
-1
votes
1 answer

When should I use linear neural networks and when non-linear?

I am using feed-forward, gradient-descent backpropagation neural networks. Currently I have only worked with non-linear networks where tanh is the activation function. I was wondering: what kind of tasks would you give to a neural network with…
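One fact that usually answers this question can be demonstrated directly: stacking purely linear layers (no tanh or other nonlinearity) collapses into a single linear map, so a "linear neural network" of any depth can only solve tasks a single linear layer can. The weights below are random and purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 3))  # first linear layer
W2 = rng.normal(size=(3, 2))  # second linear layer
x = rng.normal(size=4)        # an arbitrary input vector

two_layers = (x @ W1) @ W2    # network with two linear layers
one_layer = x @ (W1 @ W2)     # equivalent single linear layer
print(np.allclose(two_layers, one_layer))  # True, by associativity
```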