I decided to build a genetic algorithm to train neural networks. The networks will evolve through inheritance, where one of the many variable genes will be the transfer function.
So I need to go deeper into the mathematics, and it is really time-consuming.
For example, I have three variants of the transfer function gene:
1) log-sigmoid function
2) tan-sigmoid function
3) Gaussian function
One feature of the transfer function gene should be that it can modify the function's parameters to produce different shapes of the function (see the sketch below).
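To make the gene idea concrete, here is a minimal sketch of the three variants as I picture them. The parameter names (a for slope, c for the Gaussian center, s for its width) are just my own illustration, not taken from any reference:

```python
import numpy as np

# Parameterized transfer functions: each gene would carry the function
# type plus shape parameters (names are my own choice for illustration).

def log_sigmoid(x, a=1.0):
    """Log-sigmoid with slope parameter a."""
    return 1.0 / (1.0 + np.exp(-a * x))

def tan_sigmoid(x, a=1.0):
    """Tan-sigmoid (tanh) with slope parameter a."""
    return np.tanh(a * x)

def gaussian(x, c=0.0, s=1.0):
    """Gaussian with center c and width s."""
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))
```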
And now, the problem that I am not capable of solving yet:
I have the error at the output of the neural network; how do I propagate it back onto the weights through different functions with different parameters? From my research, I believe it has something to do with derivatives and gradient descent.
I am a noob at higher-level math. Can someone explain to me, on a simple example, how to propagate the error back onto the weights through a parameterized (for example) sigmoid function?
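To show where I am stuck, here is my best guess at a single gradient-descent step for one neuron with a parameterized log-sigmoid and squared error. The whole thing is a sketch, and I am not sure the derivative part is right:

```python
import numpy as np

def sigmoid(x, a=1.0):
    return 1.0 / (1.0 + np.exp(-a * x))

def sigmoid_deriv(x, a=1.0):
    # Derivative of the parameterized log-sigmoid with respect to x:
    # d/dx [1 / (1 + e^(-a*x))] = a * f(x) * (1 - f(x))
    y = sigmoid(x, a)
    return a * y * (1.0 - y)

# One neuron, two inputs, one gradient-descent step on squared error.
inputs = np.array([0.5, -0.2])
weights = np.array([0.1, 0.4])
target = 1.0
a = 2.0    # slope parameter carried by the gene
lr = 0.1   # learning rate

net = np.dot(weights, inputs)   # weighted sum of inputs
out = sigmoid(net, a)           # neuron output
error = out - target            # dE/dout for E = 0.5 * (out - target)^2

# Chain rule: dE/dw_i = dE/dout * dout/dnet * dnet/dw_i
grad = error * sigmoid_deriv(net, a) * inputs
weights -= lr * grad
```

Is this roughly the right shape, and does the same pattern work if I swap in the tanh or Gaussian variant with its own derivative?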
EDIT: I am still doing research, and now I am not sure whether I have misunderstood backpropagation. I found this document http://www.google.cz/url?sa=t&rct=j&q=backpropagation+algorithm+sigmoid+examples&source=web&cd=10&ved=0CHwQFjAJ&url=http%3A%2F%2Fwww4.rgu.ac.uk%2Ffiles%2Fchapter3%2520-%2520bp.pdf&ei=ZF9CT-7PIsak4gTRypiiCA&usg=AFQjCNGWZjabH5ALbDLgSOBak-BTRGmS3g and they have an example of computing weights where they do NOT involve the transfer function in the weight adjustment.
So is it not necessary to involve the transfer function in the weight adjustment?
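For comparison, here is how I currently read the two versions of the update rule for a single weight; the first is what most derivations I have seen give, the second is what that document seems to do (I may be misreading it):

```python
# Version 1: delta rule as I understand it from most derivations,
# with the transfer function's derivative f_prime(net) included.
def update_with_deriv(w, x, out, target, net, lr, f_prime):
    delta = (out - target) * f_prime(net)
    return w - lr * delta * x

# Version 2: what the linked document appears to do, with the
# derivative term dropped entirely.
def update_without_deriv(w, x, out, target, lr):
    delta = out - target
    return w - lr * delta * x
```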