
I am developing a Python app for neural network image processing.

I give the network a dataset of what the images should look like after processing. Right now I have 50 images of the universe in my training set. As the input I give a blank black image, and as the target I give each of my training set images.

I trained for 100 epochs with 5 hidden neurons; however, when I activate my network with different inputs, I get the same result every time. It looks like the output is just the training set images layered on top of each other.

Here is the code and a screenshot of the last activation: https://gist.github.com/anonymous/6e0e125bddcbb594c1a79c3a28d5d8af

[Screenshot: result of activation]

PS: If it's still not clear what the problem is (I got a warning for not being clear): obviously I don't want to get the same result for every input. The question is how to get the network working properly.

Max Larionov

2 Answers


If you give black images as input, that means you give images containing only zeros, i.e. an array of 0s.

So whatever the number of layers or their types, the output will always be 0 during training (any value multiplied by 0 is 0). The only thing that can make your output non-zero is a bias in each layer. The bias is a small constant added to the input of each neuron. So in your case, I think the output is always the same because your NN converged to use only the biases in your layers. A white image, or better, the original (raw) image, would be a much better input.
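
To make the zero-input point concrete, here is a minimal NumPy sketch (not the asker's PyBrain code) of a single fully connected layer: with an all-zero input the weights contribute nothing, so the output is exactly the bias vector, no matter what the weights are.

```
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 16                               # tiny stand-in for the 921600-pixel images
W = rng.normal(size=(n_pixels, n_pixels))   # weights of one dense layer
b = rng.normal(size=n_pixels)               # bias of that layer

x = np.zeros(n_pixels)                      # the "blank black image" input

# Dense-layer forward pass: every weight is multiplied by 0, only the bias survives.
y = W @ x + b
print(np.allclose(y, b))                    # True: the output is exactly the bias
```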

Btw, what is your NN architecture? How many weights does it contain?

FiReTiTi
  • OK, I am trying to figure out what a bias is in the context of a NN and how to use it. I've tried non-black images for now. No effect. – Max Larionov Jul 25 '16 at 21:49
  • I've edited my answer. Btw, I don't know why your question was downvoted; don't give it any attention. – FiReTiTi Jul 25 '16 at 22:59
  • I currently have no idea what weights are, researching it now. In the meantime I've added a non-black image as input, added the bias=True property to the network, and I'm training it now with 10 hidden layers and 100 epochs. It takes time. – Max Larionov Jul 25 '16 at 23:17
  • When I ask about your NN architecture, I mean: the number and type of layers, the activation function, and the number of weights. Training a NN means estimating the best value for each weight. – FiReTiTi Jul 25 '16 at 23:51
  • `net = buildNetwork(921600, 10, 921600, bias=True)` is what I use to build the network. 921600 is the number of pixels, 10 is the number of hidden neurons. – Max Larionov Jul 26 '16 at 11:03
  • Which means you have roughly 921600×10 + 10×921600 weights, plus biases, in your NN. And it looks like you have a single hidden layer, not 5. You have more weights in your NN than pixels in your images, so you overfit. – FiReTiTi Jul 26 '16 at 15:39
  • Oops, yes, my bad, it was 5 hidden neurons; I've edited the question. So should the number of weights be equal to the number of pixels in my case? How do I achieve that with pybrain? – Max Larionov Jul 26 '16 at 21:55
  • I don't know about pybrain, but in all cases a NN needs at least 5 times (ideally 10 times) more training data than weights, otherwise you face overfitting. – FiReTiTi Jul 26 '16 at 22:28
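
To put numbers on the weight count discussed in the comments above, here is a quick back-of-the-envelope calculation, assuming `buildNetwork(921600, 10, 921600, bias=True)` creates one fully connected hidden layer with bias units on the hidden and output layers (which is what PyBrain's shortcut is commonly described as building):

```
# Back-of-the-envelope parameter count for a 921600-10-921600 fully connected
# network with bias units on the hidden and output layers.
n_in, n_hidden, n_out = 921600, 10, 921600

weights = n_in * n_hidden + n_hidden * n_out   # 18,432,000 connection weights
biases = n_hidden + n_out                      # 921,610 bias terms
n_params = weights + biases

print(n_params)   # 19,353,610 parameters, to be estimated from only 50 images

# By the 5-10x rule of thumb from the comment above, a network this size would
# call for on the order of a hundred million training samples, not 50.
```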

"As the input I give a blank black image, and as the target I give each of my training set images."

Giving a blank black image as the input and the processed image as the target is plain wrong. You are asking the Neural Network (NN) to generate something as close as possible to your training images from nothing. NNs are smart, but not magic.

So despite all your training, the NN simply ignores the input data (or gives it very little importance, since the input during training was an array of nothing but zeros) and produces an overlay image. For confirmation, look at the converged weights and biases of each layer.

The solution is simple: instead of using blank images as the input, use the raw images as the inputs and their processed images as the outputs.
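
A minimal sketch of that setup with PyBrain, assuming the images are flattened into arrays; the random arrays below are placeholders standing in for the real raw and processed images, and the sizes are scaled down from the 921600-pixel originals:

```
import numpy as np

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

N_PIXELS = 4096   # e.g. 64x64 images flattened; the real images have 921600 pixels
N_IMAGES = 50

# Placeholder data: replace raw_images with the flattened unprocessed images
# and processed_images with the corresponding desired outputs.
raw_images = np.random.rand(N_IMAGES, N_PIXELS)
processed_images = np.random.rand(N_IMAGES, N_PIXELS)

# One (input, target) pair per image: raw in, processed out.
ds = SupervisedDataSet(N_PIXELS, N_PIXELS)
for raw, processed in zip(raw_images, processed_images):
    ds.addSample(raw, processed)

net = buildNetwork(N_PIXELS, 10, N_PIXELS, bias=True)
trainer = BackpropTrainer(net, ds)
trainer.trainEpochs(100)

# Activating on a raw image should now give an approximation of its processed version.
output = net.activate(raw_images[0])
```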

Kunal Tyagi
  • So if I use one and the same raw image as the input, and as outputs several versions of that raw image processed with the desired filters, would that make more sense? – Max Larionov Jul 25 '16 at 21:59
  • No. A NN is a linear system, responding to different inputs. The same image implies the same input. There is no way you can have a linear system respond to the same input in different manners. It is like a function: ```f(x)``` cannot have two or more different values at the same x. For each of the 50 images, use the raw image as the input and the corresponding processed image as the result of the NN while training it. – Kunal Tyagi Jul 26 '16 at 17:20
  • That makes sense. I understood it as soon as I started thinking of it as a function and an approximation. Thanks for your answer. – Max Larionov Jul 26 '16 at 18:55
  • "A NN is a linear system": that's not true when you use non-linear activation functions like sigmoid or tanh. – FiReTiTi Jul 26 '16 at 22:26
  • Sorry, my mistake. I meant an injective function, not a linear function. The example with ```f(x)``` certainly doesn't make that mistake. Correction: a NN is an injective (one-to-one) system, responding to different inputs. There is no way you can have it respond to the same input in different manners. – Kunal Tyagi Jul 27 '16 at 06:08
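
As a small aside that ties this back to the "layered images" in the question, here is a NumPy sketch (not from the original thread) showing that when a model is trained on one fixed input paired with many different targets, gradient descent can do no better than converge to the average of those targets:

```
import numpy as np

rng = np.random.default_rng(1)

targets = rng.random((50, 8))   # 50 different "processed images" (8 pixels each)
x = np.ones(8)                  # one and the same input for every target

W = np.zeros((8, 8))            # a single linear layer, no bias, kept tiny
lr = 0.01

for _ in range(500):
    y = W @ x
    # Average the squared-error gradient over all (input, target) pairs;
    # the input is identical every time, only the target changes.
    grad = sum(np.outer(y - t, x) for t in targets) / len(targets)
    W -= lr * grad

# The best a single function value can do for many targets is their mean,
# i.e. all the target images "layered" (averaged) on top of each other.
print(np.allclose(W @ x, targets.mean(axis=0)))  # True
```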