As part of a school project, I had to read a paper by Steven Lawrence about using SOMs and CNNs to recognize faces. For those of you who are curious, here's the paper: http://clgiles.ist.psu.edu/papers/UMD-CS-TR-3608.face.hybrid.neural.nets.pdf
On page 12 of the paper, Lawrence describes how he uses the SOM to reduce the dimensionality of the face data. However, I don't understand how this works. In this example, Lawrence uses a 5x5x5 SOM with 25-dimensional input vectors. If my understanding is correct, when the training process is done you are left with a 25-dimensional weight vector attached to each neuron in the map. So how did this reduce the dimensionality of the data? Where exactly is the reduced-dimension data in a self-organizing map? I've searched in a lot of places, but for some reason I could not find the answer to this question. It has been bugging me for a while now, so I would greatly appreciate it if someone could answer it.
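To make my confusion concrete, here is a minimal numpy sketch of how I currently picture the trained SOM. The array names, random data, and distance calculation are my own assumptions for illustration, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5x5x5 grid of neurons, each holding a 25-dimensional weight vector
# (the paper's inputs are 5x5 pixel windows, flattened to 25 values).
weights = rng.random((5, 5, 5, 25))

# One 25-dimensional input vector (a flattened 5x5 pixel window).
x = rng.random(25)

# Find the best-matching unit: the neuron whose weight vector is closest to x.
distances = np.linalg.norm(weights - x, axis=-1)   # shape (5, 5, 5)
bmu_index = np.unravel_index(np.argmin(distances), distances.shape)

print(bmu_index)   # e.g. (2, 4, 1) -- three coordinates in the map
```

Is that 3-coordinate index of the winning neuron supposed to be the "reduced" representation of the 25-dimensional input, or is the reduced data something else entirely? That is the part I cannot figure out.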