
My textbook for theory of computation has an example for explaining polynomial time algorithms:

PATH = {[G,s,t]|G is a directed graph that has a directed path from s to t}.

A polynomial time algorithm M for PATH operates as follows. M = “On input [G,s,t], where G is a directed graph with nodes s and t:

  1. Place a mark on node s.
  2. Repeat the following until no additional nodes are marked:
  3. Scan all the edges of G. If an edge (a,b) is found going from a marked node a to an unmarked node b, mark node b.
  4. If t is marked, accept. Otherwise, reject.”
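
For concreteness, here is how I read that procedure as a short Python sketch. The edge-list representation and the function name are my own choices, not the textbook's:

```python
# A minimal sketch of the marking procedure, assuming G is given as a
# list of directed edges (a, b). Not the textbook's code, just my reading of it.
def path_marked(edges, s, t):
    marked = {s}                      # stage 1: mark s
    changed = True
    while changed:                    # stage 2: repeat until no new marks
        changed = False
        for a, b in edges:            # stage 3: scan all edges of G
            if a in marked and b not in marked:
                marked.add(b)
                changed = True
    return t in marked                # stage 4: accept iff t is marked

# Example: path_marked([(1, 2), (2, 3)], 1, 3) returns True.
```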

Then they go on to explain how the algorithm runs in polynomial time:

Obviously, stages 1 and 4 are executed only once. Stage 3 runs at most m times because each time except the last it marks an additional node in G. Thus, the total number of stages used is at most 1 + 1 + m, giving a polynomial in the size of G.

*m is the number of nodes in the graph.

My question: wouldn't stage 3 run at most m − 1 times instead of m times, since s is already marked in stage 1, leaving only m − 1 nodes left to mark?

Thanks!

Nathan Miranda

1 Answer


It runs up to m − 1 times in which it marks an additional node other than s, and then 1 more time in which it finds no additional node to mark, for a total of at most (m − 1) + 1 = m iterations.
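
As a sanity check, you can instrument the loop and count scans on a worst-case graph. On a chain s → v1 → ... → t whose edges are listed in reverse order, each full scan marks exactly one new node, so you get m − 1 marking scans plus 1 final scan, i.e. m in total. The encoding below is my own, not from the textbook:

```python
# Count how many times the edge-scanning stage runs, using a chain whose
# edges are listed in reverse order so each full scan marks only one node.
def count_scans(edges, s, t):
    marked = {s}
    scans = 0
    changed = True
    while changed:
        scans += 1
        changed = False
        for a, b in edges:
            if a in marked and b not in marked:
                marked.add(b)
                changed = True
    return scans, (t in marked)

m = 5                                              # nodes 1..m
chain = [(i, i + 1) for i in range(m - 1, 0, -1)]  # edges (4,5), (3,4), ..., (1,2)
print(count_scans(chain, 1, m))                    # -> (5, True): m-1 marking scans + 1 final scan
```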

David Eisenstat