My question isn't about some specific code but is rather theoretical, so my apologies in advance if this isn't the right place to post it.
Given a certain problem - for the sake of discussion, let it be a permutation problem - with an input of size N, and P the constant time needed to compute 1 permutation, we need about T ≈ P·N! = O(N!) time to produce all results. If the algorithm takes much longer than this expected time, it may be reasonable to suspect it does not terminate.
For example, for P = 0.5 secs and N = 8, we get T = 0.5 · 8! = 20,160 secs. Any running time noticeably above T is 'suspicious'.
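To make the setup concrete, here is a minimal Python sketch of that threshold check (the function names and the slack factor of 2 are my own hypothetical choices, not part of the problem statement):

```python
import math

def expected_time(per_item_cost: float, n: int) -> float:
    """Estimated total time T ~= P * N! for enumerating all N! permutations."""
    return per_item_cost * math.factorial(n)

def is_suspicious(elapsed: float, per_item_cost: float, n: int,
                  slack: float = 2.0) -> bool:
    """Flag a run whose elapsed time exceeds the estimate by a slack factor."""
    return elapsed > slack * expected_time(per_item_cost, n)

# Example from above: P = 0.5 s, N = 8  =>  T = 20160 s.
print(expected_time(0.5, 8))         # 20160.0
print(is_suspicious(50000, 0.5, 8))  # True: well past the estimate
```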
My question is: how can one introduce a probability function which asymptotically approaches 1 as the running time grows to infinity?
This 'evaluation' would depend on the elapsed running time t, the constant time P, the size N of our input, and the time complexity Q of the problem at hand, so it might have the form f(t; P, N, Q), where 0 ≤ f ≤ 1, f is increasing in t, and f gives the probability of having 'fallen' into a non-terminating state.
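One candidate shape I can imagine (purely an assumption on my part, not a principled choice) is an exponential saturation once t exceeds the estimate T, i.e. f(t) = 1 − exp(−max(0, t − T)/T). Here Q is implicit in the factorial; for another problem one would swap in its actual complexity. A sketch:

```python
import math

def nontermination_probability(elapsed: float, per_item_cost: float, n: int) -> float:
    """Heuristic f(t; P, N): 0 while t <= T, rising toward 1 as t -> infinity.

    T = P * N! is the expected time; the rate of approach to 1 is an
    arbitrary modelling choice, not derived from first principles."""
    t_expected = per_item_cost * math.factorial(n)
    overrun = max(0.0, elapsed - t_expected)
    return 1.0 - math.exp(-overrun / t_expected)

# With P = 0.5 s and N = 8 (T = 20160 s):
print(nontermination_probability(20160, 0.5, 8))    # 0.0    (right at the estimate)
print(nontermination_probability(40320, 0.5, 8))    # ~0.63  (one extra T elapsed)
print(nontermination_probability(201600, 0.5, 8))   # ~0.9999 (nine extra T elapsed)
```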
If this approach isn't adequate, how else can we conclude that, after a certain amount of time, our program is probably running endlessly?