Yes, it's just a way to build a Huffman code with a limit on the codeword length.
A Huffman code encodes each letter of the alphabet as a binary string, chosen so that any encoded message can be decoded unambiguously. For example, if your alphabet is {A, B, C} and A is more common than B and C, the following encoding works well:
A - 0
B - 10
C - 11
An encoded string such as 0010110 can be uniquely decoded, because the length of each codeword can be determined as it is read (here --- every codeword that begins with 0 has length 1, and every codeword that begins with 1 has length 2). So the string decodes as 0|0|10|11|0 = AABCA.
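The decoding step above can be sketched in a few lines of Python (the function and dictionary names are my own, for illustration):

```python
def decode(bits, codebook):
    """Decode a bit string with a prefix code.

    codebook maps each codeword to its symbol, e.g.
    {"0": "A", "10": "B", "11": "C"}. Because no codeword is a prefix
    of another, we can emit a symbol as soon as the accumulated bits
    match a codeword.
    """
    out, current = [], ""
    for bit in bits:
        current += bit
        if current in codebook:
            out.append(codebook[current])
            current = ""
    if current:
        raise ValueError("input ended in the middle of a codeword")
    return "".join(out)

# The example from above: 0|0|10|11|0 = AABCA
print(decode("0010110", {"0": "A", "10": "B", "11": "C"}))  # AABCA
```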
Now the "problem" in constructing Huffman codes is how to select the encoding bit strings so that the resulting encoding is on average as short as possible. In your problem there is the additional constraint that no codeword may be longer than L bits. The general idea is to use shorter strings for the more common symbols.
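For reference, the classic unrestricted construction repeatedly merges the two least frequent subtrees using a min-heap. A minimal sketch (function and variable names are my own):

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Classic (unrestricted) Huffman coding -- a sketch.

    freqs: dict mapping symbol -> frequency.
    Returns a dict mapping symbol -> codeword (bit string).
    """
    tie = count()  # tie-breaker so the heap never compares tree nodes
    heap = [(f, next(tie), sym) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)   # two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    code = {}
    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node: recurse
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                                # leaf: record the codeword
            code[node] = prefix or "0"       # single-symbol edge case
    walk(heap[0][2], "")
    return code
```

With frequencies like {A: 5, B: 2, C: 2} this yields codeword lengths 1, 2, 2, matching the example encoding above (the exact bit patterns may differ, but any such assignment is equally good).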
The details of the package-merge algorithm aren't important; the key is that you use an algorithm to select a "set of coins of minimum numismatic value whose denominations total n - 1". If you have coins with denominations 2^-1, 2^-2, ..., and you want to build a total value of 100 cents out of them, you can think of the process as starting with a single coin worth 100 cents and splitting it into two 50-cent coins (denomination 2^-1), then continuing to split coins in half as long as you like, e.g. 50 cents + 25 cents + 12.5 cents + 12.5 cents. This corresponds to the construction of a binary tree: whenever you split a coin, you create an internal node and add two leaves one level deeper.
Now the idea of minimizing the numismatic value is that the "coins" linked to higher-frequency symbols are more expensive to use, so you want to split those coins less, which corresponds to giving those symbols shorter codes.
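To make the coin analogy concrete, here is a sketch of package-merge that computes length-limited codeword lengths (function name and tie-breaking are my own; with tied frequencies you may get a different but equally optimal set of lengths):

```python
from collections import Counter

def package_merge(freqs, L):
    """Codeword lengths for a Huffman code limited to L bits (sketch).

    freqs: list of symbol frequencies. Returns a list of codeword lengths.
    Coins of denomination 2^-l live at level l. At each level we "package"
    (pair up) the cheapest items and merge the packages with the coins of
    the next level up.
    """
    n = len(freqs)
    assert (1 << L) >= n, "L is too small to give every symbol a codeword"
    # One coin per symbol at every level. The Counter records which symbols
    # an item contains: a symbol's final codeword length is how many of the
    # chosen items it appears in.
    coins = sorted(((f, Counter([i])) for i, f in enumerate(freqs)),
                   key=lambda item: item[0])
    merged = coins
    for _ in range(L - 1):                       # from level L up to level 1
        packages = [(merged[j][0] + merged[j + 1][0],
                     merged[j][1] + merged[j + 1][1])
                    for j in range(0, len(merged) - 1, 2)]
        merged = sorted(coins + packages, key=lambda item: item[0])
    # Take the 2(n-1) cheapest items at level 1 (total value n - 1).
    lengths = Counter()
    for _, symbols in merged[: 2 * (n - 1)]:
        lengths += symbols
    return [lengths[i] for i in range(n)]
```

For frequencies [1, 1, 2, 4] with L = 3 this gives lengths [3, 3, 2, 1], the same as unrestricted Huffman; with frequencies [1, 2, 4, 8, 16] and L = 3 the limit binds, and the lengths become [3, 3, 3, 3, 1] instead of the unrestricted [4, 4, 3, 2, 1].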
The details are left as an exercise for the reader.