I want to merge a binary heap implemented as an array of size m into another heap of the same kind with n elements. Using repeated inserts, this takes O(m * log(n)). It seems to be commonly agreed that the alternative of concatenating the two arrays and then rebuilding the heap, which takes O(m + n), is more efficient.
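For concreteness, here is a minimal sketch of the two strategies, assuming min-heaps stored as Python lists in the layout used by the `heapq` module (the function names are just for illustration):

```python
import heapq

def merge_by_insertion(heap, other):
    """O(m * log(n)) approach: push each of the m elements of `other` into `heap`."""
    for x in other:
        heapq.heappush(heap, x)
    return heap

def merge_by_rebuild(heap, other):
    """O(m + n) approach: concatenate the arrays and heapify from scratch."""
    heap.extend(other)
    heapq.heapify(heap)
    return heap
```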
Now it seems clear to me that for some pairs (m, n) with m < n, the O(m * log(n)) method would be more efficient. According to Wolfram Alpha (solving m * log2(n) = m + n for m), this is the case for m < (n * log(2)) / log(n / 2). Is this interpretation correct? And is such a check worth the implementation effort and runtime hit in practice?
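To illustrate the kind of check I have in mind, here is a sketch of an adaptive merge that switches strategies at that threshold; it naively ignores the constant factors hidden by the big-O terms, and `merge_adaptive` is a name I made up:

```python
import heapq
import math

def merge_adaptive(heap, other):
    """Merge `other` (size m) into `heap` (size n), choosing the strategy by the
    crossover m < n * ln(2) / ln(n / 2) derived above."""
    m, n = len(other), len(heap)
    if n > 2 and m < n * math.log(2) / math.log(n / 2):
        # Few new elements relative to n: repeated inserts, O(m * log(n)).
        for x in other:
            heapq.heappush(heap, x)
    else:
        # Otherwise concatenate and rebuild, O(m + n).
        heap.extend(other)
        heapq.heapify(heap)
    return heap
```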