
I am trying to implement a general SOM (self-organizing map) with batch training, and I have a doubt regarding the formula for the batch update.

I have read about it in the following links:

http://cs-www.cs.yale.edu/c2/images/uploads/HR15.pdf

https://notendur.hi.is//~benedikt/Courses/Mia_report2.pdf

I noticed that the weight updates are assigned rather than added at the end of an epoch. Wouldn't that overwrite the whole network's previous values? And since the update formula does not include the previous weights of the nodes, how does it even work?
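
To be concrete, the update rule given in both references is, as I understand it,

$$w_j \leftarrow \frac{\sum_i h_{c(x_i),\,j}\; x_i}{\sum_i h_{c(x_i),\,j}}$$

where $c(x_i)$ is the BMU (best matching unit) of input $x_i$ and $h$ is the neighborhood function. The previous weights $w_j$ do not appear on the right-hand side at all.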

When I was implementing it, a lot of the nodes in the network became NaN, because the neighborhood value became zero for many nodes as the neighborhood radius decayed towards the end of training, and the update formula then resulted in a division by zero.

Can someone explain the batch algorithm correctly? I DID google it, and I saw a lot of material on "improving batch" or "speeding up batch" training, but nothing about plain batch Kohonen directly, and among the ones that did explain it, the formula was the same, and that doesn't work.

Adithya Sama

1 Answer


The update rule of the batch SOM that you have seen is the correct one. The basic idea behind this algorithm is to train your SOM using the whole training dataset, so at the end of each epoch the weights of the neurons are replaced by the (neighborhood-weighted) mean of the inputs closest to them. The previous weights are not lost: they enter the update through the BMU (Best Matching Unit) assignment, because it is the old weights that determine which neuron each input is matched to.

As you said, some neuron weights produce NaN due to division by zero. To overcome this problem you can use a neighborhood function that is strictly greater than zero everywhere (for example a Gaussian function), and additionally guard the denominator against numerical underflow.
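
For illustration, here is a minimal NumPy sketch of one batch epoch along those lines. The names (`batch_som_epoch`, `W`, `grid`, `sigma`) and the decay schedule at the bottom are my own illustrative choices, not taken from the linked papers:

```python
import numpy as np

def batch_som_epoch(X, W, grid, sigma):
    """One batch epoch: W is *replaced* by neighborhood-weighted means."""
    # 1) BMU assignment -- this is where the previous weights enter:
    #    each input is matched to its closest neuron under the old W.
    dists = np.linalg.norm(X[:, None, :] - W[None, :, :], axis=2)
    bmu = np.argmin(dists, axis=1)                  # (n_samples,)

    # 2) Gaussian neighborhood h(j, bmu_i): strictly positive, so the
    #    denominator below is never exactly zero by construction.
    grid_d2 = np.sum((grid[:, None, :] - grid[None, :, :]) ** 2, axis=2)
    h = np.exp(-grid_d2 / (2.0 * sigma ** 2))       # (n_neurons, n_neurons)

    # 3) Batch update: each neuron's new weight is the h-weighted mean
    #    of all inputs. It is assigned, not added; the old W already
    #    contributed through the BMU assignment in step 1.
    H = h[:, bmu]                                   # (n_neurons, n_samples)
    numer = H @ X                                   # (n_neurons, dim)
    denom = H.sum(axis=1, keepdims=True)
    return numer / np.maximum(denom, 1e-12)         # guard against underflow

# Example: a 5x5 grid on 3-D data, with sigma decaying across epochs.
rng = np.random.default_rng(0)
X = rng.random((200, 3))
gx, gy = np.meshgrid(np.arange(5), np.arange(5))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1).astype(float)
W = rng.random((25, 3))
for epoch in range(30):
    sigma = 2.5 * (0.1 / 2.5) ** (epoch / 29)       # exponential decay
    W = batch_som_epoch(X, W, grid, sigma)
```

Note that even with a Gaussian neighborhood, `exp` can underflow to exactly zero in floating point once `sigma` gets very small, which is why the sketch also clamps the denominator.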