The algorithm doesn't start by searching for any node; it is already passed a node and starts from that node. E.g., pseudocode for $F$ would look like this:
```
F(n):
    if n has right child
        n = right child of n
        while n has left child
            n = left child of n
        return n
    else
        prev = n
        cur = parent of n
        while prev is right child of cur and cur is not root
            prev = cur
            cur = parent of prev
        if cur is root and prev is right child of cur
            error "Reached end of traversal"
        else
            return cur
```
The above code effectively continues an in-order traversal of the tree from the given node, stopping as soon as the next node in order is reached.
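For concreteness, here is a minimal runnable sketch of the same procedure in Python. The `Node` class with `key`, `parent`, `left`, and `right` fields is an assumption made only for this example, and the end of the traversal is detected by running out of parents, which is equivalent to the pseudocode's root check.

```python
class Node:
    """Minimal binary-tree node with a parent pointer (assumed structure)."""
    def __init__(self, key, parent=None):
        self.key = key
        self.parent = parent
        self.left = None
        self.right = None

def successor(n):
    """Return the in-order successor of n, mirroring the pseudocode for F."""
    if n.right is not None:
        # The successor is the leftmost node of the right subtree.
        n = n.right
        while n.left is not None:
            n = n.left
        return n
    # Otherwise climb upward until we leave a left subtree.
    prev, cur = n, n.parent
    while cur is not None and prev is cur.right:
        prev, cur = cur, cur.parent
    if cur is None:
        raise LookupError("Reached end of traversal")
    return cur
```

Calling `successor` repeatedly, starting from the leftmost node, yields the keys in ascending order, which is exactly the in-order traversal described above.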
Amortized runtime:
Pick an arbitrary tree and a number $m$ of iterations. Let $r_0$ be the lowest common ancestor of all nodes visited by $F$. Now define $r_{n+1}$ as the lowest common ancestor of all nodes in the right subtree of $r_n$ that will be returned by $F$. This recursion bottoms out at $r_u$, which is the $m$-th node in the in-order traversal. Any $r_n$ will be returned by $F$ in some iteration, so all nodes in the left subtree of $r_n$ will be returned by $F$ as well.
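Written as a formula, the recursion above reads (with $R$ denoting the set of nodes returned by $F$ and $\operatorname{right}(r_n)$ the right subtree of $r_n$; both names are introduced only for this restatement):

$$r_0 = \operatorname{LCA}(\text{nodes visited by } F), \qquad r_{n+1} = \operatorname{LCA}\bigl(R \cap \operatorname{right}(r_n)\bigr)$$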
All nodes that will be visited by $F$ are either also returned by $F$ or lie on the path from $r_0$ to $r_u$. Since $r_0$ is an ancestor of $r_1$, $r_1$ is an ancestor of $r_2$, and so on, the path from $r_0$ to $r_u$ can be at most as long as the height of the subtree rooted at $r_0$. The height of the tree is bounded by $\log_\phi(m + 2)$, so in total at most

$$m + \log_\phi(m + 2)$$

nodes will be visited during $m$ iterations of $F$. All nodes visited by $F$ form a subtree, so there are at most $2 \cdot (m + \log_\phi(m + 2))$ edge traversals performed by the algorithm (each edge of this subtree is walked at most once downward and once upward), leading to an amortized runtime complexity of

$$\frac{2 \cdot (m + \log_\phi(m + 2))}{m} = 2 + \frac{2 \cdot \log_\phi(m + 2)}{m} = O(1)$$
(The above bounds are in reality considerably tighter, but they are entirely sufficient for the calculation presented here.)
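As an illustrative sanity check (not part of the original argument), the sketch below builds a height-balanced BST with $m + 1$ keys, performs $m$ successor calls starting from the minimum, counts every edge traversal, and compares the total against the $2 \cdot (m + \log_\phi(m + 2))$ bound. All names here (`build_balanced`, `successor_counting`) are ad-hoc for this example.

```python
import math

class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent, self.left, self.right = key, parent, None, None

def build_balanced(keys, parent=None):
    """Build a height-balanced BST from a sorted key list (illustrative only)."""
    if not keys:
        return None
    mid = len(keys) // 2
    node = Node(keys[mid], parent)
    node.left = build_balanced(keys[:mid], node)
    node.right = build_balanced(keys[mid + 1:], node)
    return node

def successor_counting(n, edges):
    """In-order successor of n; edges[0] accumulates traversed edges."""
    if n.right is not None:
        n = n.right
        edges[0] += 1
        while n.left is not None:
            n = n.left
            edges[0] += 1
        return n
    prev, cur = n, n.parent
    while cur is not None:
        edges[0] += 1                 # one edge walked upward
        if prev is cur.left:
            return cur                # first left-parent is the successor
        prev, cur = cur, cur.parent
    return None                       # n was the last node in order

m = 1000
root = build_balanced(list(range(m + 1)))
node, edges = root, [0]
while node.left is not None:          # start at the in-order minimum
    node = node.left
for _ in range(m):                    # m iterations of the successor function
    node = successor_counting(node, edges)

phi = (1 + math.sqrt(5)) / 2
bound = 2 * (m + math.log(m + 2, phi))
print(f"edges traversed: {edges[0]}, bound: {bound:.1f}, per call: {edges[0] / m:.3f}")
```

For $m = 1000$ the amortized bound evaluates to roughly $2 + 2 \cdot \log_\phi(1002) / 1000 \approx 2.03$ edges per call, and the measured average stays below that.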