I've got an exercise where I can't figure out what I'm missing. It's very similar to an existing question, but not the same, and the solution probably isn't the same either. The existing question: Finding the most frequent number from a set of ranges.
The problem:
You are given n ranges [ai, bi] within [0, n^d], where d is a constant positive integer. ai and bi are also integers, with 0 <= ai <= bi <= n^d for each i (i = 1, ..., n).
Write an algorithm that finds the integer z appearing in the greatest number of ranges [ai, bi]. The complexity must be linear (as a function of n and d).
It seems to me like a classic counting/radix-sort setup: use k = n as the radix, and sorting the endpoints costs O(d*n). The question is what to do next; this is where I'm stuck. The related question assumes each range can only be of a certain size, while my question has no such assumption: theoretically a range can be as large as [0, n^d], and anything in between.
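For reference, the radix-sort step I have in mind can be sketched like this (a minimal sketch, assuming values in [0, n^d] and an LSD counting-sort pass per base-n digit; function and variable names are my own):

```python
def radix_sort_base_n(values, n, d):
    """LSD radix sort with base n: d passes, each pass O(n + len(values)),
    so O(d*n) overall when len(values) is O(n) and values lie in [0, n^d]."""
    for digit in range(d):
        div = n ** digit
        buckets = [[] for _ in range(n)]
        for v in values:
            # bucket by the current base-n digit
            buckets[(v // div) % n].append(v)
        # concatenating the buckets in order is a stable counting-sort pass
        values = [v for bucket in buckets for v in bucket]
    return values

# e.g. sorting the left endpoints of the example with n = 8, d = 2 (8^2 = 64)
print(radix_sort_base_n([1, 5, 5, 6, 8, 9, 11, 12], 8, 2))
```

This sorts all 2n endpoints in O(d*n); the open part of the question is what to do with the sorted endpoints afterwards.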
Random example:
Input: [1, 3], [5, 10], [5, 11], [6, 17], [8, 9], [9, 21], [11, 15], [12, 12]
Output: 9
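To sanity-check the example, a brute force (quadratic, so not the algorithm I'm after) confirms the expected output:

```python
from collections import Counter

ranges = [(1, 3), (5, 10), (5, 11), (6, 17), (8, 9), (9, 21), (11, 15), (12, 12)]

counts = Counter()
for a, b in ranges:
    for z in range(a, b + 1):
        counts[z] += 1  # z is covered by [a, b]

best = max(counts, key=counts.get)
print(best, counts[best])  # 9 appears in 5 ranges, more than any other integer
```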