
In an RBM, all relationships between nodes are expressed as probabilities. How, then, is data propagated through an RBM? Just by first-order sampling? Doesn't that introduce too much fluctuation?

Or does it work like a feed-forward MLP (i.e. h_j = \sum_i v_i * w_ij)? There seems to be no such concept in an RBM, since all the paper says is that it is a stochastic model.

forsythia
  • You should give more information. Give us a link to your paper and the related math expressions. – Atilla Ozgur Nov 11 '14 at 09:06
  • Usually, if you are trying to generate data, Gibbs sampling is run for a long time. If you are talking about hidden-layer neurons, then it depends on what you are training. In a DBN, the samples from one step of Gibbs sampling are used. If you are using an RBM to pre-train a deep feed-forward neural network, then the sigmoid of the weighted sum of inputs plus bias is used, i.e. h_j = sigmoid( \sum_i v_i * W_ij ) – Rudra Murthy Jan 08 '15 at 18:23
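The distinction in the comment above can be sketched in code: the same weighted sum drives both cases, but the RBM samples a binary state from the resulting probability, while pre-training a feed-forward net keeps the real-valued sigmoid output. This is a minimal NumPy sketch with hypothetical toy dimensions and random weights, not code from the paper being asked about:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical toy sizes: 6 visible units, 4 hidden units.
v = rng.integers(0, 2, size=6).astype(float)   # binary visible vector
W = rng.normal(scale=0.1, size=(6, 4))         # weights W_ij
b = np.zeros(4)                                # hidden biases

# Deterministic activation, as used when pre-training a feed-forward net:
# h_j = sigmoid( sum_i v_i * W_ij + b_j )
h_prob = sigmoid(v @ W + b)

# Stochastic propagation inside the RBM itself (one Gibbs half-step):
# each hidden unit turns on with probability h_prob[j].
h_sample = (rng.random(4) < h_prob).astype(float)

print(h_prob)    # real-valued probabilities in (0, 1)
print(h_sample)  # binary sample of the hidden layer
```

Repeating the sampling step many times (alternating between hidden and visible layers) is what "running Gibbs sampling for a long time" refers to; averaging over those samples smooths out the fluctuation the question worries about.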

0 Answers