In the article Advanced Octrees 2: node representations, it is stated:
The AABB of the node can be stored explicitly as before, or it can be computed from the node’s tree depth stored implicitly inside the locational code. To derive the tree depth at a node from its locational code a flag bit is required to indicate the end of the locational code. Without such a flag it wouldn’t be possible to distinguish e.g. between 001 and 000 001. By using a 1 bit to mark the end of the sequence 1 001 can be easily distinguished from 1 000 001. Using such a flag is equivalent to setting the locational code of the Octree root to 1.
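To make sure I am reading this correctly, here is a minimal sketch of how I assume these locational codes are built (my own interpretation, not code from the article; `childCode` is a hypothetical helper, and I assume the root's code is simply 1 with 3 bits appended per level):

```cpp
#include <cstdint>

// My reading of the quoted scheme: the root's locational code is 1 (the flag
// bit), and descending into a child appends that child's 3-bit index (0..7)
// at the low end of the code.
std::uint32_t childCode(std::uint32_t parentCode, std::uint32_t childIndex) // childIndex in [0, 7]
{
    return (parentCode << 3) | childIndex;
}

// Codes built this way (shown in binary):
//   root                     -> 1
//   child 001 of the root    -> 1 001
//   child 001 of that child  -> 1 001 001
```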
The locational code is a 32-bit word. That is, ... 001 and ... 000 001 might be equal, as the author says, if and only if all of the bits following the first example equal those of the second example.
How does marking the end of the sequence help us find the depth of a node within the tree?
The author uses the examples ... 1 001 and ... 1 000 001. Does the first node have depth 1 and the second depth 2? If so, how do I know? The locational code is a 32-bit word, so the bits following "the end flag" could just as well be ... 001 001, which is also a valid node.
So what I really don't understand is how both the locational code and the depth of the node within the tree can be stored in a single 32-bit word.
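To make this concrete, here is roughly the node layout I have in mind (a hypothetical sketch of my own, not code from the article):

```cpp
#include <cstdint>

// Hypothetical node layout I have in mind (not from the article). The article
// implies the depth needs no field of its own because it can be recovered
// from the locational code; that recovery step is what I don't see.
struct OctreeNode
{
    std::uint32_t locationalCode; // flag bit followed by 3 bits per level (my assumption)
    // ... other per-node data (children, stored objects, etc.)
};
```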