
My question isn't about some specific code but rather theoretical, so my apologies in advance if this isn't the right place to post it.

Given a certain problem - for the sake of discussion, let it be a permutation problem - with an input of size N, and a constant time P needed to compute one permutation, we need about T = P * N! = O(N!) time to produce all results. If our algorithm takes much longer than this expected time, it may be safe to assume it doesn't terminate.

For example, for P = 0.5 secs and N = 8, T = 0.5 * 8! = 20,160 secs. Any running time above T is 'suspicious'.
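The estimate above can be computed directly. A minimal sketch (the function name is mine, not from the question):

```python
import math

def expected_time(p_secs, n):
    """Estimated total time to enumerate all permutations of n items,
    assuming a constant cost of p_secs per permutation."""
    return p_secs * math.factorial(n)

print(expected_time(0.5, 8))  # 0.5 * 40320 = 20160.0 seconds
```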

My question is: how can one introduce a probability function that asymptotically approaches 1 as the running time increases without bound?

Our 'evaluation' should depend on the constant time P, the size N of our input, and the time complexity Q of the problem at hand, so it may have the form f(P, N, Q) = ..., where 0 ≤ f ≤ 1, f is increasing, and f indicates the probability of having 'fallen' into a non-terminating state.

If this approach isn't enough, how can we make sure that after a certain amount of time, our program is probably running endlessly?

TimzyPatzy
  • There are no non-terminating algorithms, you mean implementations perhaps. If you are talking about NP class of problems then those are dependent on constraints and if you can assign probabilities to the constraints with certain confidence then you could I suppose – sramalingam24 Jun 29 '19 at 14:09
  •
    Well, if you have some `most probable` time estimate and denote it Tm, you could assign it to half-life. Then probability of things going wrong would be `P(t|Tm)=1-2^(-t/Tm)` and it would go to 1 as time goes up – Severin Pappadeux Jun 30 '19 at 16:21
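The half-life model suggested in the comment above can be sketched as follows. This is only an illustration of that suggestion, not an established method; the function name and the sample numbers (Tm = 20160 secs, from the N = 8 example) are mine:

```python
def p_nonterminating(t, t_expected):
    """Half-life estimate from the comment: P(t|Tm) = 1 - 2^(-t/Tm).
    At t = Tm the estimate is 0.5, and it tends to 1 as t grows."""
    return 1.0 - 2.0 ** (-t / t_expected)

# At exactly the expected time, the odds are even.
print(p_nonterminating(20160, 20160))      # 0.5
# Five expected-time intervals later, suspicion is high.
print(p_nonterminating(5 * 20160, 20160))  # 1 - 2^-5 = 0.96875
```

Note this is a heuristic, not a proof: a halted-or-not decision cannot be made with certainty in general, so any such f only quantifies suspicion relative to the assumed time estimate Tm.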

0 Answers