My textbook for theory of computation has an example for explaining polynomial time algorithms:
PATH = {[G,s,t]|G is a directed graph that has a directed path from s to t}.
A polynomial time algorithm M for PATH operates as follows. M = “On input [G,s,t], where G is a directed graph with nodes s and t:
1. Place a mark on node s.
2. Repeat the following until no additional nodes are marked:
3. Scan all the edges of G. If an edge (a, b) is found going from a marked node a to an unmarked node b, mark node b.
4. If t is marked, accept. Otherwise, reject.”
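For concreteness, here is a minimal sketch of M in Python. The edge-list representation and the function name are my own assumptions, not from the book:

```python
# A sketch of M, assuming the graph is given as a list of directed
# edges (a, b). Representation and names are illustrative.
def path(edges, s, t):
    marked = {s}                      # stage 1: mark node s
    changed = True
    while changed:                    # stage 2: repeat until no new marks
        changed = False
        for a, b in edges:            # stage 3: scan all the edges of G
            if a in marked and b not in marked:
                marked.add(b)
                changed = True
    return t in marked                # stage 4: accept iff t is marked
```

For example, `path([("s", "a"), ("a", "t")], "s", "t")` returns `True`, while `path([("t", "s")], "s", "t")` returns `False`.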
Then they go on to explain how the algorithm runs in polynomial time:
Obviously, stages 1 and 4 are executed only once. Stage 3 runs at most m times because each time except the last it marks an additional node in G. Thus, the total number of stages used is at most 1 + 1 + m, giving a polynomial in the size of G.
*m is the number of nodes in the graph
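To experiment with this bound, here is the marking procedure instrumented to count how many times stage 3 (a full scan of the edges) executes. The edge-list representation is my own assumption, not the book's:

```python
# Hypothetical sketch: the marking procedure with a counter for how many
# times stage 3 (one full edge scan) runs before the loop terminates.
def count_scans(edges, s, t):
    marked = {s}                       # stage 1
    scans = 0
    changed = True
    while changed:                     # stage 2
        scans += 1                     # one execution of stage 3
        changed = False
        for a, b in edges:
            if a in marked and b not in marked:
                marked.add(b)
                changed = True
    return t in marked, scans          # stage 4 outcome plus the count

# A directed path 0 -> 1 -> 2 -> 3 with the edges listed in reverse
# order, so that each scan can mark at most one new node:
print(count_scans([(2, 3), (1, 2), (0, 1)], 0, 3))  # prints (True, 4)
```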
My question is: wouldn't stage 3 run at most m − 1 times instead of m times, since node s is already marked in stage 1?
Thanks!