An RBM is trained with unsupervised learning, so it is difficult to assess whether one is better than another.
Nevertheless, RBMs are usually used as a pre-training step for deeper networks such as DBNs. So my suggestion would be to train as many RBMs as you want to compare (unsupervised learning) and then feed each one's features to a feedforward layer for learning (supervised learning). From there you can assess how good each RBM is by measuring how well the resulting network predicts the class of your data.
As an example, let's take two RBMs, A and B:
you give A's features to a feedforward layer (trained with backpropagation) and get 80% accuracy on the test data;
you give B's features to a feedforward layer (trained with backpropagation) and get 90% accuracy on the test data.
As such, B is a better RBM than A: it provided better features, leading to better training and higher out-of-sample results. Note: since the accuracy of a network varies from run to run, make sure you perform the supervised training several times and average the results, so that your comparison is robust.
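For concreteness, here is a minimal sketch of that protocol using scikit-learn's `BernoulliRBM`, with a logistic regression standing in for the backprop-trained feedforward layer. The digits dataset, the hyperparameters, and the choice of hidden-layer sizes for "A" and "B" are illustrative assumptions, not part of the recipe itself:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

X, y = load_digits(return_X_y=True)
X = X / 16.0  # BernoulliRBM expects inputs in [0, 1]

def mean_test_accuracy(n_components, n_runs=5):
    # Train RBM + supervised layer several times and average,
    # since single-run accuracies vary.
    scores = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed)
        model = Pipeline([
            ("rbm", BernoulliRBM(n_components=n_components,
                                 learning_rate=0.06, n_iter=20,
                                 random_state=seed)),
            ("clf", LogisticRegression(max_iter=1000)),
        ])
        model.fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))
    return float(np.mean(scores))

acc_A = mean_test_accuracy(n_components=32)   # "RBM A"
acc_B = mean_test_accuracy(n_components=128)  # "RBM B"
print(f"A: {acc_A:.3f}, B: {acc_B:.3f}")
```

Whichever configuration yields the higher averaged test accuracy is, by this criterion, the better RBM.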
EDIT:
Regarding unsupervised evaluation, the task is not as simple. As presented by Tijmen Tieleman in "Training Restricted Boltzmann Machines using Approximations to the Likelihood Gradient":
One of the evaluations is how well the learned RBM models the test
data, i.e. log likelihood. This is intractable for regular size RBMs,
because the time complexity of that computation is exponential in the
size of the smallest layer (visible or hidden).
Yet, if you have small enough RBMs, this is a feasible approach: with a small visible or hidden layer, you can enumerate all of its configurations and compute the partition function exactly. Otherwise, you can just wait...
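Here is a minimal NumPy sketch of that exact computation, assuming a binary RBM with weight matrix `W` of shape (n_visible, n_hidden), visible bias `b`, and hidden bias `c`. The hidden units are summed out analytically and the visible layer is enumerated (by symmetry, enumerate the hidden layer instead if it is the smaller one); this is exponential in the layer size, so it only works for tiny models:

```python
import numpy as np
from itertools import product
from scipy.special import logsumexp

def free_energy(v, W, b, c):
    # F(v) = -b.v - sum_j softplus(c_j + (v W)_j); hidden units summed out
    return -(v @ b) - np.sum(np.logaddexp(0.0, c + v @ W), axis=-1)

def exact_mean_log_likelihood(V_test, W, b, c):
    # log Z via brute-force enumeration of all 2^n_visible configurations
    n_visible = len(b)
    all_v = np.array(list(product([0.0, 1.0], repeat=n_visible)))
    log_Z = logsumexp(-free_energy(all_v, W, b, c))
    return float(np.mean(-free_energy(V_test, W, b, c)) - log_Z)

# Toy usage with random parameters (10 visible, 6 hidden units):
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(10, 6))
b, c = np.zeros(10), np.zeros(6)
V_test = rng.integers(0, 2, size=(100, 10)).astype(float)
print(exact_mean_log_likelihood(V_test, W, b, c))
```

The higher the averaged test log-likelihood, the better the RBM models the data, and no supervised step is needed.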