
Edited question

How many iterations should you make for the simulation to be an accurate 'Monte Carlo simulation' for bit error rate (BER) calculations?

What is the minimum value? If I want to repeat the simulation with an exponentially growing number of iterations, five times, should I start from 1e2, i.e. iterations = [1e2 1e3 1e4 1e5 1e6], or from 1e3, i.e. [1e3 1e4 1e5 1e6 1e7]? Or something else? What is the common practice?

Additional info: I used [8e3 1e4 3e4 5e4 8e4 1e5] before, but according to my professor that is not enough, because the result is not satisfactory.

Simulations take a very long time on my computer, so I cannot keep changing the iteration counts based on the result. If there is a common practice for this, please let me know.

Thanks @BillBokeey for helping me edit the question.

HappyBee
  • Well, this is an awkward question. The `Monte Carlo simulation` is a stochastic process, so any number between one and infinity would fit the definition. A better question would include the word `convergence` (and be posted on http://math.stackexchange.com/) – BillBokeey May 09 '16 at 07:37
  • My professor told me to increase the number of iterations because this is a 'Monte Carlo' simulation... so I assumed Monte Carlo simulations have a minimum number of iterations. @BillBokeey – HappyBee May 09 '16 at 07:38
  • 1
    A Monte Carlo algorithm **converges** (meaning the result gets closer to the solution) as the number of iterations tends to infinity. Increasing the number of iterations thus gets you closer from the solution. If you know the expected result of the simulation (e.g. you are trying to approximate the value of pi with the Monte Carlo method), you can define a tolerance and stopo your algorithm when you get closer from pi than the tolerance you defined – BillBokeey May 09 '16 at 07:41
  • 1
    As per your edit, I think what your professor wants you to do is to run the simulation for **exponentially increasing** numbers of iterations and plot the result versus the number of iterations in order to show that this algorithm will indeed converge – BillBokeey May 09 '16 at 07:44
  • @BillBokeey yes, in that case, would it be enough to do 1e2, then 1e3, then 1e4? Or should I start from 1e3? What is the common practice? – HappyBee May 09 '16 at 07:48
  • Start wherever you want, plot the results, and do it again if you don't like what you get. In order to get a nice plot, I'd say you need at least 9 points (1e0 to 1e9 would already be nice). But again, this is just a matter of opinion; you should really try it yourself (as always) – BillBokeey May 09 '16 at 07:54
  • Can you explain how BER is calculated? Several people seem to be advocating trial & error with plotting but it would be better to tackle this analytically, which requires knowledge of the structure of the estimator. – pjs Dec 07 '20 at 17:46
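For illustration, the convergence plot suggested in the comments could be sketched like this in Python. Since the actual BER computation isn't shown in the question, a Monte Carlo estimate of pi stands in for the simulation, and `estimate_pi` is just an illustrative helper name:

```python
import math
import random

random.seed(0)

def estimate_pi(n):
    """Crude Monte Carlo estimate of pi from n uniform points in the unit square."""
    hits = sum(1 for _ in range(n)
               if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * hits / n

# Exponentially growing iteration counts, as suggested above: 1e2 .. 1e6
iteration_counts = [10 ** k for k in range(2, 7)]
errors = [abs(estimate_pi(n) - math.pi) for n in iteration_counts]
# Plotting errors against iteration_counts on a log-log scale
# (e.g. with matplotlib) should show the ~1/sqrt(N) trend.
```

The same loop structure applies to a BER simulation: replace `estimate_pi` with one run of the simulator at each iteration count and plot the resulting BER estimates.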

2 Answers

0

What your professor proposes strikes me as a qualitative, but not quantitative, way to estimate the convergence of your simulation.

Frankly, I don't know how BER is computed, but I deal a lot with integral calculations by MC.

In such a case you sample x_i over some interval and compute f_MC = (1/N) Σ_i f_i, where Σ denotes summation. We know that f_MC converges to the true value with variance sigma^2/N (or standard deviation sigma/sqrt(N)). So, within the same simulation, we compute an estimate of sigma, assume that for large enough N it is a good approximation of the true sigma, and plot the resulting simulation error. In practical terms, alongside f_MC we accumulate the second-moment average f2_MC = (1/N) Σ_i f_i^2, and at the end get s = sqrt(f2_MC - (f_MC)^2)/sqrt(N) as the estimated error of the MC simulation (it will be a bit biased, though).

Thus you could plot on the same graph the value of BER and the statistical error of the simulation. You could do even better: ask the user to input the required statistical error (say, in %, meaning the user enters s/f*100), and continue the simulation in batches until you reach the required precision.

Then you could judge whether 10^9 points are enough or not...
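A minimal Python sketch of this run-in-batches-until-tolerance idea, with hit-or-miss estimation of pi/4 standing in for the unknown BER integrand (the function name and parameters are illustrative, not any standard API):

```python
import math
import random

def mc_until_tolerance(sample, rel_tol, batch=10_000, max_batches=1_000):
    """Run MC in batches, accumulating first- and second-moment sums,
    until the estimated relative error s / f_MC drops below rel_tol."""
    n = 0
    s1 = 0.0  # sum of f_i
    s2 = 0.0  # sum of f_i^2
    mean = err = 0.0
    for _ in range(max_batches):
        for _ in range(batch):
            f = sample()
            s1 += f
            s2 += f * f
        n += batch
        mean = s1 / n
        # s = sqrt(f2_MC - (f_MC)^2) / sqrt(N): the (slightly biased) error estimate
        err = math.sqrt(max(s2 / n - mean * mean, 0.0) / n)
        if mean > 0 and err / mean < rel_tol:
            break
    return mean, err, n

random.seed(1)
# Bernoulli trials with mean pi/4 ~ 0.785 serve as a stand-in integrand
est, err, n = mc_until_tolerance(
    lambda: 1.0 if random.random() ** 2 + random.random() ** 2 <= 1.0 else 0.0,
    rel_tol=0.005)
```

The same loop applies to a BER simulation if `sample()` returns the per-trial error indicator (1 for a bit error, 0 otherwise).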

Severin Pappadeux
0

Assuming that we denote our simulated BER as Pb_hat and require Pb_hat in [(1 - alpha)Pb, (1 + alpha)Pb], where Pb is the true BER and alpha is the relative deviation tolerance (e.g., 0.1), then from [van Trees 2013, p. 83] we know that the number of Monte Carlo trials required to obtain Pb_hat with confidence probability pc is K = (c/alpha)^2 x (1 - Pb)/Pb, with c given in Table I.

Table I: confidence interval probabilities from the Gaussian distribution

pc   0.900   0.950   0.954   0.990   0.997
c    1.645   1.960   2.000   2.576   3.000

Example: Suppose we want to simulate a BER of 10^-4 with a deviation tolerance alpha = 0.01 and a confidence probability of 0.950. From Table I, c = 1.960, and applying the formula gives K = (1.96/0.01)^2 x (1 - 10^-4)/10^-4 = 384,121,584 Monte Carlo trials. This is a surprisingly large value, though.
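A small Python helper (a hypothetical function, not from any library) that packages this formula together with Table I:

```python
import math

# Table I: confidence probability pc -> Gaussian coefficient c
C_TABLE = {0.900: 1.645, 0.950: 1.960, 0.954: 2.000, 0.990: 2.576, 0.997: 3.000}

def mc_trials(Pb, alpha, pc):
    """Monte Carlo trials K needed so the simulated BER lands in
    [(1 - alpha) * Pb, (1 + alpha) * Pb] with confidence probability pc."""
    c = C_TABLE[pc]
    return math.ceil((c / alpha) ** 2 * (1.0 - Pb) / Pb)

K = mc_trials(Pb=1e-4, alpha=0.01, pc=0.950)  # ~3.84e8, as in the example above
```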

As a rule of thumb, K should be on the order of 10/BER [Jeruchim 1984].

[van Trees 2013] H. L. van Trees, K. L. Bell, and Z. Tian, Detection, estimation, and filtering theory, 2nd ed., Hoboken, NJ: Wiley, 2013.

[Jeruchim 1984] M. Jeruchim, "Techniques for Estimating the Bit Error Rate in the Simulation of Digital Communication Systems," in IEEE Journal on Selected Areas in Communications, vol. 2, no. 1, pp. 153-170, January 1984, doi: 10.1109/JSAC.1984.1146031.

  • A normality approximation would not apply with your specified tolerance limits, because 10^-4 - 1.96*0.01 dips well into negative (infeasible) territory. – pjs Dec 07 '20 at 20:32