I'm trying to implement an RBM, and I'm testing it on the play-tennis dataset.
I've tried an autoencoder before, and the results were good. To be honest, I'm confused about what an RBM actually does; I think of it like an autoencoder: encode each input instance for feature extraction, then test or validate the model (network) by encoding and decoding some instances.
The problem I'm facing is that the results from some of the RBM's functions seem weird.
For example, in Gibbs sampling the sampled data comes out very close to the actual data, so h(x) computed from the sampled data and from the actual data are nearly identical.
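For reference, here is a minimal sketch of one Gibbs step in a binary-unit RBM (the toy sizes and the parameter names `W`, `b_v`, `b_h` are my assumptions, not from my actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy parameters: 4 visible units (play-tennis features), 3 hidden units.
W = rng.normal(0.0, 0.1, size=(4, 3))  # visible-to-hidden weights
b_v = np.zeros(4)                      # visible biases
b_h = np.zeros(3)                      # hidden biases

def gibbs_step(v):
    """One full Gibbs step: v -> h -> v'."""
    p_h = sigmoid(v @ W + b_h)                       # p(h=1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)  # sample hidden states
    p_v = sigmoid(h @ W.T + b_v)                     # p(v=1 | h): the "decode"
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return h, p_v, v_new

v0 = np.array([1.0, 0.0, 1.0, 0.0])
h, p_v, v1 = gibbs_step(v0)
print(p_v)  # reconstruction probabilities, each strictly between 0 and 1
```

Note that with small random weights like these, `p_v` naturally sits near 0.5 for every unit at the start of training, which is similar to the symptom described below.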
But when I decode all the hidden-layer units back to visible values, the result is bad: every feature (unit) comes out almost the same, around 0.4 to 0.5.
And the loss function f(x) = (1/m) * Σ log(p(x)) stays at about 0.07142857142857142 and never changes (it moves only by about 1e-17 or 2e-17).
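One thing worth noting: the exact log p(x) of an RBM is intractable (it requires the partition function), so implementations usually monitor a proxy instead, commonly the mean reconstruction cross-entropy. A sketch of such a monitor (the function and parameter names are my own, hypothetical ones):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_recon_cross_entropy(X, W, b_v, b_h):
    """Average cross-entropy between data X and its one-step reconstruction.

    This is only a training monitor, not the true log-likelihood.
    """
    p_h = sigmoid(X @ W + b_h)       # mean-field hidden activations
    p_v = sigmoid(p_h @ W.T + b_v)   # reconstruction probabilities
    eps = 1e-12                      # guard against log(0)
    ce = -(X * np.log(p_v + eps) + (1 - X) * np.log(1 - p_v + eps))
    return ce.sum(axis=1).mean()

# Toy check: 14 instances (like play-tennis), 4 features, random params.
rng = np.random.default_rng(1)
X = (rng.random((14, 4)) > 0.5).astype(float)
W = rng.normal(0.0, 0.1, size=(4, 2))
loss = mean_recon_cross_entropy(X, W, np.zeros(4), np.zeros(2))
print(loss)  # positive, and should decrease as W is trained
```

Unlike log p(x), this quantity is cheap to compute and should visibly decrease during contrastive-divergence training if the weight updates are correct.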
I use continuous values for each feature, normalized (min-max) so the inputs are in the range 0 to 1.
Does anyone have any suggestions?