I just finished the first module of the Algorithms specialization course on Coursera.
There was an exam question that I could not quite understand. I have already passed the exam, so there's no point in retaking it.
Out of curiosity, I want to learn the principles behind this question.
The question was posed as follows:
Suppose that a randomized algorithm succeeds (e.g., correctly computes the minimum cut of a graph) with probability p (with 0 < p < 1). Let ϵ be a small positive number (less than 1).
How many independent times do you need to run the algorithm to ensure that, with probability at least 1−ϵ, at least one trial succeeds?
The options given were:
- log(1−p)/logϵ
- log(p)/logϵ
- logϵ/log(p)
- logϵ/log(1−p)
I made two attempts, and both were wrong:
- log(1−p)/logϵ
- logϵ/log(1−p)
It's not so much that I want to know the right answer; I want to understand the principles behind this question and what it is asking, so that I can answer similar questions in the future.
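To make the setup concrete for myself, here is a quick Python simulation sketch of the scenario. The per-run success probability p = 0.1 and the trial counts are just placeholder values; the code only estimates how likely it is that at least one of n independent runs succeeds.

```python
import random

def run_once(p):
    # Toy stand-in for the randomized algorithm (e.g., a min-cut attempt):
    # a single run succeeds with probability p.
    return random.random() < p

def any_success(p, n):
    # Run the algorithm n independent times; True if at least one run succeeds.
    return any(run_once(p) for _ in range(n))

def estimate(p, n, repeats=20_000):
    # Monte Carlo estimate of P(at least one of n runs succeeds).
    return sum(any_success(p, n) for _ in range(repeats)) / repeats

if __name__ == "__main__":
    p = 0.1  # assumed per-run success probability (placeholder)
    for n in (1, 5, 10, 20, 40):
        print(f"n = {n:3d}: P(at least one success) ~ {estimate(p, n):.3f}")
```

Watching how fast that probability approaches 1 as n grows seems to be exactly the relationship the four options above are trying to capture, and that is the part I want to understand properly.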
I posted this on the course forum, but nobody answered for a month, so I am trying here.
No need to post the answer directly. If you can get me to an aha moment, I will mark your answer as accepted.
Thanks.