
I decided to write a genetic algorithm to train neural networks. The networks will develop through inheritance, where one of the many variable genes should be the transfer function.

So I need to go deeper into the mathematics, and it is really time-consuming.

I have, for example, three variants of the transfer function gene:

1) log-sigmoid function

2) tan-sigmoid function

3) Gaussian function

One feature of the transfer function gene should be that it can modify the function's parameters to get differently shaped functions.

And now, the problem that I am not yet capable of solving:

I have an error at the output of the neural network; how do I transfer it back onto the weights through different functions with different parameters? From my research, I think it has something to do with derivatives and gradient descent.

I am a high-level math noob. Can someone explain to me, with a simple example, how to propagate the error back onto the weights through a parametrized (for example) sigmoid function?

EDIT: I am still doing research, and now I am not sure whether I have misunderstood backpropagation. I found this doc http://www.google.cz/url?sa=t&rct=j&q=backpropagation+algorithm+sigmoid+examples&source=web&cd=10&ved=0CHwQFjAJ&url=http%3A%2F%2Fwww4.rgu.ac.uk%2Ffiles%2Fchapter3%2520-%2520bp.pdf&ei=ZF9CT-7PIsak4gTRypiiCA&usg=AFQjCNGWZjabH5ALbDLgSOBak-BTRGmS3g and it has an example of computing weights where they do NOT involve the transfer function in the weight adjustment.

So is it not necessary to involve the transfer function in the weight adjustment?

John
  • I want to use Java. I have a clear idea about implementing everything else except backpropagation through parametrized transfer functions. – John Feb 20 '12 at 14:29
    There's no shortcut as @Novak says. Take a few hours out and watch the videos here: http://www.ml-class.org/course/video/preview_list – YXD Feb 20 '12 at 16:13

1 Answer


Backpropagation does indeed have to do with derivatives and gradient descent.
I don't think there is any shortcut to truly understanding the math, but this may help: I wrote it for someone else with basically the same question, and it should at least explain at a high level what's going on, and why.
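To sketch the standard derivation in symbols (my notation, not taken from the linked answer): for an output unit with net input `net = Σ_i w_i · x_i`, a parametrized log-sigmoid `y = f(net) = 1 / (1 + e^(-β·net))`, and squared error `E = ½(y − t)²`, the chain rule gives

    ∂E/∂w_i = (y − t) · f'(net) · x_i,    where f'(net) = β · y · (1 − y)

and the gradient-descent update is `w_i ← w_i − η · ∂E/∂w_i`. So the transfer function enters the weight adjustment exactly once: through its derivative `f'`. A different gene (tan-sigmoid, Gaussian) or a different shape parameter β just changes which `f'` you plug in.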

How does a back-propagation training algorithm work?
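Since you mentioned Java: here is a minimal sketch of the idea (my own code, not from the linked answer), assuming a single output neuron, squared error, and a slope parameter `beta` standing in for your gene-controlled shape parameter. All names are my own.

```java
// Minimal sketch: backpropagation for one output neuron with a
// parametrized log-sigmoid and plain gradient descent.
public class ParamSigmoidBackprop {

    // log-sigmoid with slope parameter beta: f(x) = 1 / (1 + e^(-beta*x))
    static double sigmoid(double x, double beta) {
        return 1.0 / (1.0 + Math.exp(-beta * x));
    }

    // derivative w.r.t. the net input: f'(x) = beta * f(x) * (1 - f(x))
    static double sigmoidDeriv(double x, double beta) {
        double f = sigmoid(x, beta);
        return beta * f * (1.0 - f);
    }

    // trains the weights toward the target and returns the final output
    static double train() {
        double[] input = {1.0, 0.5};
        double[] w = {0.2, -0.4};
        double beta = 2.0;   // shape parameter from the transfer function gene
        double target = 1.0;
        double eta = 0.5;    // learning rate

        for (int epoch = 0; epoch < 1000; epoch++) {
            double net = 0.0;
            for (int i = 0; i < w.length; i++) net += w[i] * input[i];
            double out = sigmoid(net, beta);
            // chain rule: dE/dw_i = (out - target) * f'(net) * input_i
            double delta = (out - target) * sigmoidDeriv(net, beta);
            for (int i = 0; i < w.length; i++) w[i] -= eta * delta * input[i];
        }
        double net = 0.0;
        for (int i = 0; i < w.length; i++) net += w[i] * input[i];
        return sigmoid(net, beta);
    }

    public static void main(String[] args) {
        System.out.println("final output: " + train()); // approaches the target
    }
}
```

Note that the transfer function only appears in the update through `sigmoidDeriv`; switching the gene to tan-sigmoid or Gaussian just means swapping in that function's own derivative.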

Novak