You can let the variable in the big-O expression for the asymptotic complexity be the input number itself, or you can let it be the number of bits needed to represent that number. These two conventions lead to dramatically different asymptotic classifications, so it is important to be clear about which one you're using when you report a result.
In general, people tend to use the number-of-bits convention when they're talking about numbers so large that you need bignums, and the meaning-of-the-number convention when the inputs fit in a machine word. But that's only a rule of thumb: treat it as a first guess and verify for yourself that it makes sense in your particular situation.
The choice tends to go hand-in-hand with the cost model you're using for arithmetic operations. When you're counting bits, it's typical to assume that arithmetic on n-bit values takes O(n) time (or more, in the case of multiplication), whereas when you're working with the meaning of the input number, you typically assume that arithmetic works in constant time.
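To make the difference concrete, here's a toy sketch (the function name `sum_to` is just for illustration, not anything from your code): the same loop gets two very different labels depending on which convention you pick.

```python
def sum_to(m):
    """Sum 1..m with a straightforward loop: exactly m additions."""
    total = 0
    for i in range(1, m + 1):
        total += i
    return total

# Value convention (unit-cost arithmetic): m additions, so O(m) time.
# Bit convention: with n = number of bits in m, the loop runs about 2^n
# times, and each addition on numbers up to ~m costs O(n) bit operations,
# so the total is roughly O(n * 2^n) -- exponential in the input size.
```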
In your case you would get something like O(2^n), where n is the number of bits in the input, or O(sqrt(m)), where m is the value of the input itself. (The exact details depend on how your multiset primitives perform.)
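I can't see your multiset code, but trial-division primality testing is the textbook example of the same phenomenon, so here's that as a stand-in: one and the same loop is O(sqrt(m)) in the value of the input and about O(2^(n/2)) in its bit length n.

```python
def is_prime(m):
    """Trial division: try divisors 2, 3, ..., up to sqrt(m)."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:        # at most about sqrt(m) iterations
        if m % d == 0:
            return False
        d += 1
    return True

# Value convention: O(sqrt(m)) divisions -- looks polynomial.
# Bit convention: m is about 2^n for an n-bit input, so sqrt(m) is about
# 2^(n/2) -- exponential in the size of the input.
```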
See also pseudo-polynomial time.