Questions tagged [backpropagation]

Backpropagation is a method of computing gradients, often used in artificial neural networks to perform gradient descent. It led to a “renaissance” in the field of artificial neural network research.

In most cases, it requires a teacher that knows, or can calculate, the desired output for any input in the training set. The term is an abbreviation for "backward propagation of errors".
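
As a minimal sketch of the idea, here is a single sigmoid neuron trained by backpropagation on one (input, target) pair in numpy; the values and learning rate are illustrative, not taken from any question below.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.5, -0.2])   # input
    t = 1.0                     # desired output supplied by the "teacher"
    w = np.zeros(2)
    b = 0.0
    lr = 0.5                    # learning rate

    for _ in range(100):
        y = sigmoid(w @ x + b)          # forward pass
        delta = (y - t) * y * (1 - y)   # dE/dz for squared error E = (y - t)^2 / 2
        w -= lr * delta * x             # backward pass: gradient descent on the weights
        b -= lr * delta                 # ...and on the bias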

1267 questions
0
votes
2 answers

Summation giving random wrong numbers

I'm trying to implement a simple backpropagation algorithm for an exam (I'm a beginner programmer). I've got a set of arrays, and I generate random weights to start the algorithm. I implemented the activation function following the math formula:…
Salvadi
  • 15
  • 4
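
The formula itself is cut off above, but a minimal sketch of the usual setup (a weighted sum fed through a sigmoid activation, with random starting weights) looks like this in numpy; the sizes are assumed:

    import numpy as np

    def sigmoid(z):
        # Logistic activation: squashes any real number into (0, 1).
        return 1.0 / (1.0 + np.exp(-z))

    def neuron_output(inputs, weights, bias):
        # Weighted sum of the inputs plus the bias, then the activation.
        return sigmoid(np.dot(weights, inputs) + bias)

    rng = np.random.default_rng(0)
    x = np.array([0.5, -1.2, 0.3])
    w = rng.uniform(-1.0, 1.0, size=3)   # random starting weights
    print(neuron_output(x, w, bias=0.1))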
0
votes
0 answers

Test method for multilayer perceptron

This is a multi-layer perceptron using the backpropagation algorithm. I found this code on codetidy.com and I want to test it. "mlp.java" /***** This ANN assumes a fully connected network *****/ import java.util.*; import java.io.*; public class MLP…
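
The linked Java source is not reproduced here, but a common way to test a small MLP is a smoke test against the XOR truth table; this Python sketch assumes a hypothetical predict(x) method returning the network's scalar output:

    # Hypothetical interface: mlp.predict(x) returns the network's output.
    def test_mlp(mlp, inputs, targets, tol=0.4):
        correct = 0
        for x, t in zip(inputs, targets):
            y = mlp.predict(x)
            if abs(y - t) < tol:   # output within tol of the target counts as correct
                correct += 1
        return correct / len(inputs)

    X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # XOR truth table
    T = [0, 1, 1, 0]
    # accuracy = test_mlp(trained_mlp, X, T)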
0
votes
1 answer

From XOR Neural Network to image recognition

I have a rudimentary XOR-trained neural network working correctly with the following structure: 2 inputs, 2 hidden nodes, and 1 output. I would like to extend this to grayscale image recognition with NxN inputs, M hidden nodes, and O outputs. My…
silent
  • 2,836
  • 10
  • 47
  • 73
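
Structurally, the extension is a matter of layer shapes: the 2-2-1 XOR network becomes (N*N)-M-O, with the image flattened into one input per pixel. A minimal numpy sketch, with assumed sizes:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    N, M, O = 28, 64, 10                    # assumed: 28x28 images, 10 classes
    rng = np.random.default_rng(0)
    W1 = rng.normal(0.0, 0.01, (M, N * N))  # hidden weights: M x (N*N)
    W2 = rng.normal(0.0, 0.01, (O, M))      # output weights: O x M

    def forward(image):
        x = image.reshape(-1) / 255.0       # flatten NxN grayscale pixels to [0, 1]
        h = sigmoid(W1 @ x)                 # hidden activations
        return sigmoid(W2 @ h)              # one output per class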
0
votes
1 answer

Theano recurrent neural network: error is NaN

I am trying to duplicate the recent work on unitary evolution neural networks. Adapting from the code published by the author, I have written the following code: import matplotlib.pyplot as plt import numpy as np import theano import theano.tensor as…
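
The Theano code is cut off, but NaN losses in recurrent networks very often trace back to exploding gradients; a common remedy is clipping by global norm, sketched here in plain numpy (function names are illustrative):

    import numpy as np

    def clip_gradients(grads, max_norm=1.0):
        # Rescale all gradients together if their global norm exceeds max_norm;
        # exploding gradients are a frequent source of NaN losses in RNNs.
        total = np.sqrt(sum(np.sum(g ** 2) for g in grads))
        if np.isnan(total):
            raise FloatingPointError("NaN gradient: lower the learning rate "
                                     "or check the loss computation")
        scale = min(1.0, max_norm / (total + 1e-12))
        return [g * scale for g in grads]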
0
votes
0 answers

How can I tell my neural network is converging to a local minimum?

I've built a relatively simple artificial neural network in an attempt to model the value function in a Q-learning problem, but to verify that my implementation of the network was correct, I am trying to solve the XOR problem. My network architecture uses…
Andnp
  • 674
  • 5
  • 16
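
One practical test: if the training loss has stopped moving but is still far from zero, the XOR network has most likely settled into a poor local minimum, and a re-initialization is in order. A sketch of the plateau check (the window and tolerance are arbitrary choices):

    import numpy as np

    def has_plateaued(losses, window=100, tol=1e-6):
        # Compare the mean loss over the last window against the window before it.
        if len(losses) < 2 * window:
            return False
        recent = np.mean(losses[-window:])
        earlier = np.mean(losses[-2 * window:-window])
        return abs(earlier - recent) < tol

    # Typical use: if has_plateaued(losses) and losses[-1] is still large,
    # re-initialize the weights with a new random seed and train again.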
0
votes
1 answer

L2 matrix rowwise normalization gradient

I'm trying to implement an L2 norm layer for a convolutional neural network, and I'm stuck on the backward pass: def forward(self, inputs): x, = inputs self._norm = np.expand_dims(np.linalg.norm(x, ord=2, axis=1), axis=1) z = np.divide(x,…
loknar
  • 539
  • 5
  • 12
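
For z = x / ||x||_2 taken row-wise, the backward pass follows from the quotient rule: dL/dx = (g - z * sum(g * z, axis=1)) / ||x||, where g is the incoming gradient. A self-contained numpy sketch matching the forward pass quoted above:

    import numpy as np

    def l2_normalize_forward(x):
        # z = x / ||x||_2, computed per row.
        norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)
        return x / norm, norm

    def l2_normalize_backward(grad_z, z, norm):
        # dz_i/dx_j = delta_ij / n - x_i * x_j / n^3, which contracts to:
        dot = np.sum(grad_z * z, axis=1, keepdims=True)
        return (grad_z - z * dot) / norm

    x = np.random.randn(3, 4)
    z, n = l2_normalize_forward(x)
    grad_x = l2_normalize_backward(np.ones_like(z), z, n)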
0
votes
0 answers

Recognizing images in neural networks

I'm trying to write a program that recognizes images using a neural network with one hidden layer. The user is supposed to draw a number and the NN must recognize it, and I'm having some trouble. I get a 2D array where 1 is a filled-in…
Greenmachine
  • 292
  • 1
  • 15
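
The usual bridge between the drawing canvas and the network is to flatten the 2D grid into one input per cell and encode the digit as a one-hot target, sketched here in numpy:

    import numpy as np

    def grid_to_input(grid):
        # grid: 2D array of 0s and 1s from the canvas (1 = filled-in cell).
        # The network expects a flat vector, one input unit per cell.
        return np.asarray(grid, dtype=float).reshape(-1)

    def digit_to_target(digit, n_classes=10):
        # One output unit per digit; the target is a one-hot vector.
        t = np.zeros(n_classes)
        t[digit] = 1.0
        return t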
0
votes
1 answer

Multiclass classification and the sigmoid function

Say I have a training set Y: 1,0,1,0 / 0,1,1,0 / 0,0,1,1 / 0,0,1,0, and the sigmoid function is defined as σ(x) = 1 / (1 + e^(-x)). As the sigmoid function outputs a value between 0 and 1, does this mean that the training data and values we are trying to predict should also fall…
blue-sky
  • 51,962
  • 152
  • 427
  • 752
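
Since the sigmoid's range is (0, 1), the targets should indeed be 0/1 (or be scaled into that range). The choice between per-output sigmoids and a softmax also matters here: sigmoids treat each class independently (which fits rows of Y containing several 1s), while softmax forces mutually exclusive classes. A small numpy illustration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def softmax(z):
        e = np.exp(z - np.max(z))   # subtract the max for numerical stability
        return e / e.sum()

    z = np.array([2.0, -1.0, 0.5, 0.1])   # raw scores for 4 classes
    print(sigmoid(z))   # independent probabilities; need not sum to 1
    print(softmax(z))   # mutually exclusive probabilities; sums to 1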
0
votes
1 answer

Backpropagation outputs tend towards same value

I'm attempting to create a multilayer feedforward backpropagation neural network to recognize handwritten digits and I'm running into a problem where the activations in my output layer all tend towards the same value. I'm using the Optical…
Robert Lacher
  • 656
  • 1
  • 6
  • 16
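
Outputs collapsing to one value are classically caused by saturated sigmoid units, which in turn come from unscaled inputs or overly large initial weights. A sketch of the two standard precautions (the scaling scheme shown is one common choice among several):

    import numpy as np

    rng = np.random.default_rng(0)

    def init_layer(n_in, n_out):
        # Small random weights keep sigmoid units out of their flat, saturated
        # regions, where gradients vanish and all outputs drift together.
        return rng.uniform(-1.0, 1.0, (n_out, n_in)) / np.sqrt(n_in)

    def scale_inputs(X):
        # Map raw pixel features into [0, 1]; large raw inputs saturate sigmoids.
        return (X - X.min()) / (X.max() - X.min() + 1e-12)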
0
votes
0 answers

Simple back-propagation with ReLU (rectified units) fails

I have simple code for traditional back-propagation (with the traditional sigmoid activation function) which is working fine. Then I changed the sigmoid to the rectifier, and it fails to converge even for the simple XOR test. I added "leakage" to…
Yan King Yin
  • 1,189
  • 1
  • 10
  • 25
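
A leaky rectifier and a scale-aware initialization are the usual ingredients when swapping sigmoid for ReLU; a minimal numpy sketch (alpha and the He-style initialization are conventional choices, not taken from the question):

    import numpy as np

    def leaky_relu(z, alpha=0.01):
        # Forward: pass positives through, leak a small slope for negatives.
        return np.where(z > 0, z, alpha * z)

    def leaky_relu_grad(z, alpha=0.01):
        # Backward: gradient is 1 for positive inputs, alpha otherwise.
        return np.where(z > 0, 1.0, alpha)

    def he_init(n_in, n_out, rng=np.random.default_rng(0)):
        # Rectifier nets are scale-sensitive; He-style initialization and a
        # smaller learning rate than the sigmoid version often fix divergence.
        return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_out, n_in))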
0
votes
1 answer

torch backward through gModule

I have a graph as follows, where the input x has two paths to reach y. They are combined with a gModule that uses cMulTable. Now if I do gModule:backward(x,y), I get a table of two values. Do they correspond to the error derivative derived from the…
Jack Cheng
  • 121
  • 1
  • 11
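
Independent of Torch's API, a node that reaches y through two paths yields one partial derivative per path, and the total derivative is their sum; a scalar Python illustration:

    # y is reached from x through two paths, combined by a product
    # (analogous to cMulTable).
    x = 3.0
    a = 2.0 * x            # path 1
    b = x ** 2             # path 2
    y = a * b              # y = 2x^3

    grad_a = b             # dy/da
    grad_b = a             # dy/db
    # Chain rule, summed over both paths:
    dydx = grad_a * 2.0 + grad_b * 2.0 * x
    print(dydx)            # 6 * x**2 = 54.0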
0
votes
1 answer

Backpropagation makes network worse

I am experimenting with neural networks. I have a network with 8 input neurons, 5 hidden, and 2 output. When I let the network learn with backpropagation, it sometimes produces worse results between single iterations of training. What can be the…
user3396293
  • 9
  • 1
  • 2
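
A loss that occasionally rises between iterations often just means the learning rate overshoots; a simple instrumented update makes this visible (the names here are illustrative):

    def sgd_step_with_check(w, grad, lr, loss_fn):
        # Compare the loss before and after the update; a step that makes the
        # loss worse suggests the learning rate overshoots the minimum.
        before = loss_fn(w)
        w_new = w - lr * grad
        after = loss_fn(w_new)
        if after > before:
            print(f"loss rose from {before:.4f} to {after:.4f}; consider lowering lr={lr}")
        return w_new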
0
votes
1 answer

How/When to update bias in RPROP neural network?

I am implementing this neural network for a classification problem. I initially tried backpropagation, but it takes longer to converge, so I thought of using RPROP. In my test setup, RPROP works fine for an AND gate simulation but never converges for…
puru020
  • 808
  • 7
  • 20
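
In RPROP the bias is simply treated as one more weight (with a constant input of 1) and updated by the same sign-based rule, at the same time as the other weights. A numpy sketch of the iRPROP- variant with the conventional constants:

    import numpy as np

    def rprop_update(w, grad, prev_grad, step,
                     eta_plus=1.2, eta_minus=0.5,
                     step_min=1e-6, step_max=50.0):
        sign_change = grad * prev_grad
        # Grow the step while the gradient sign is stable, shrink it on a flip.
        step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
        step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
        grad = np.where(sign_change < 0, 0.0, grad)   # skip the update after a flip
        w = w - np.sign(grad) * step
        return w, grad, step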
0
votes
0 answers

Backpropagation learns for one dataset but fails at multiple datasets

I'm having an issue in my neural network where the error on the inputs gets enormously small (in the negative thousands). The network can learn one training example (i.e. 1+3=4) and will output four with inputs 1 and 3, but can't learn the general pattern from…
BinkyNichols
  • 586
  • 4
  • 14
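
Training must cycle over the whole dataset (ideally shuffled each epoch) rather than repeating one example until it converges; otherwise the network memorizes that single pair. A sketch, where train_step is a hypothetical single-example backpropagation update:

    import numpy as np

    def train_epoch(net, dataset, rng=np.random.default_rng(0)):
        # Visit every example once per epoch, in random order.
        for i in rng.permutation(len(dataset)):
            x, t = dataset[i]
            net.train_step(x, t)   # hypothetical per-example update

    # dataset = [((1, 3), 4), ((2, 2), 4), ((0, 5), 5), ...]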
0
votes
1 answer

Testing how learning rate affects backpropagation in an artificial neural network

I have created an artificial neural network in Java that learns with a backpropagation algorithm, and I have produced the following graph, which shows how changing the learning rate affects the time it takes for the network to train. It seems to show…
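
A systematic way to produce such a graph is to sweep the learning rate and record epochs-to-convergence for each value; train_fn here is a hypothetical routine that trains to a fixed error threshold and returns the epoch count:

    def sweep_learning_rates(train_fn, rates=(0.01, 0.05, 0.1, 0.5, 1.0)):
        results = {lr: train_fn(lr) for lr in rates}
        for lr, epochs in results.items():
            print(f"lr={lr:<5} epochs to converge: {epochs}")
        return results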