I do not know if this is the right forum, but I have been following this tutorial:
http://www.openimaj.org/tutorial/eigenfaces.html
For some reason it is not clear enough for me, and there are some things I do not understand. At one point it says:
The first step in implementing an Eigenfaces recogniser is to use the training images to learn the PCA basis which we'll use to project the images into features we can use for recognition. The EigenImages class needs a list of images from which to learn the basis (i.e. all the training images from each person), and also needs to know how many dimensions we want our features to be (i.e. how many of the eigenvectors corresponding to the biggest eigenvalues to keep):
Then it gives this code:
List<FImage> basisImages = DatasetAdaptors.asList(training);
int nEigenvectors = 100;
EigenImages eigen = new EigenImages(nEigenvectors);
eigen.train(basisImages);
So I do not get it. What exactly is the train() method training? From what I understand, it is just applying PCA, right? In my mind, training is always associated with something like a perceptron, another neural network, or an algorithm with learnable parameters.
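To make my question concrete, this is my mental model of what train() is doing: nothing is being optimised, it is just estimating a mean image and an eigenbasis from the training data. Here is a rough sketch of that idea (using Jama for the eigendecomposition purely as an illustration; I know this is not the actual OpenIMAJ code):

import Jama.EigenvalueDecomposition;
import Jama.Matrix;

public class PcaSketch {
    // "Training" here = estimating a mean and an eigenbasis from the
    // images; there are no labels and no iterative optimisation.
    public static Matrix learnBasis(double[][] images, int k) {
        int n = images.length;     // number of training images
        int d = images[0].length;  // pixels per (flattened) image

        // 1. Compute the mean image.
        double[] mean = new double[d];
        for (double[] img : images)
            for (int j = 0; j < d; j++)
                mean[j] += img[j] / n;

        // 2. Mean-centre the data into an n x d matrix.
        Matrix X = new Matrix(n, d);
        for (int i = 0; i < n; i++)
            for (int j = 0; j < d; j++)
                X.set(i, j, images[i][j] - mean[j]);

        // 3. Eigendecompose the d x d covariance matrix. (For real image
        //    sizes d is huge, so practical implementations use an SVD or
        //    the n x n "snapshot" trick instead -- this is just the idea.)
        Matrix cov = X.transpose().times(X).times(1.0 / (n - 1));
        EigenvalueDecomposition eig = cov.eig();

        // 4. Keep the k eigenvectors with the largest eigenvalues; these
        //    are the basis vectors ("eigenfaces") that extractFeature()
        //    would project new images onto. Jama orders the eigenvalues
        //    of a symmetric matrix ascending, so take the last k columns.
        return eig.getV().getMatrix(0, d - 1, d - k, d - 1);
    }
}

Is that roughly what is happening, just "training" in the sense of fitting a statistical model rather than a neural network?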
Also, I need some help understanding Exercise 13.1.1:
13.1.1. Exercise 1: Reconstructing faces
An interesting property of the features extracted by the Eigenfaces algorithm (specifically from the PCA process) is that it's possible to reconstruct an estimate of the original image from the feature. Try doing this by building a PCA basis as described above, and then extract the feature of a randomly selected face from the test-set. Use the EigenImages#reconstruct() to convert the feature back into an image and display it. You will need to normalise the image (FImage#normalise()) to ensure it displays correctly as the reconstruction might give pixel values bigger than 1 or smaller than 0.
In the examples there is some code that already extracts the features:
Map<String, DoubleFV[]> features = new HashMap<String, DoubleFV[]>();
for (final String person : training.getGroups()) {
    final DoubleFV[] fvs = new DoubleFV[nTraining];

    for (int i = 0; i < nTraining; i++) {
        final FImage face = training.get(person).get(i);
        fvs[i] = eigen.extractFeature(face);
    }
    features.put(person, fvs);
}
So if I just call this:
eigen.reconstruct(fvs[i]).normalise()
it returns an image I can display, which looks like a normal face, but its dimensions are really small (is that normal?). Should that do it?
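For reference, here is my full attempt at the exercise. Since the exercise says the face should come from the test set (the fvs above are training features), I am assuming a testing dataset obtained the same way as training earlier in the tutorial, and I just take the first face of the first group instead of a truly random one:

import org.openimaj.feature.DoubleFV;
import org.openimaj.image.DisplayUtilities;
import org.openimaj.image.FImage;

// 'eigen' is the trained EigenImages instance from above; 'testing' is
// assumed to be the test-set GroupedDataset from the tutorial's split.
final String person = testing.getGroups().iterator().next();
final FImage testFace = testing.get(person).get(0);

// Project the face into the PCA feature space...
final DoubleFV feature = eigen.extractFeature(testFace);

// ...then reconstruct an estimate of the original image from the feature
// and normalise it so the pixel values fall into [0, 1] for display.
final FImage reconstruction = eigen.reconstruct(feature).normalise();
DisplayUtilities.display(reconstruction);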
Thanks.