Because of type promotion.
When you divide a complex number by a real one, such as `(inf + 0j) / 2`, the (real) divisor gets promoted to `2 + 0j`.
And by the complex division formula `(a + b*j) / (c + d*j) = ((a*c + b*d) + (b*c - a*d)*j) / (c**2 + d**2)`, the imaginary part is equal to `(0 * 2 - inf * 0) / 4`. Note the `inf * 0` here, which is an indeterminate form, and it evaluates to `NaN`. This makes the imaginary part `NaN`.
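You can reproduce this in plain Python, without numpy (a minimal illustration; `float('inf')` builds the infinite real part):

```python
>>> z = float('inf') + 0j   # complex number with an infinite real part
>>> z / 2                   # the divisor 2 is promoted to 2 + 0j
(inf+nanj)
```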
And back to the topic. When numpy calculates the mean of a complex array, it really doesn't try to do anything clever. First it reduces the array with the "addition" operation, obtaining the sum. After that, the sum is divided by the count. This sum contains an `inf` in the real part, which causes the trouble described above when the divisor (the count) gets promoted from an integral type to complex floating point.
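A minimal sketch of those two steps (the array contents here are made up for illustration):

```python
import numpy as np

a = np.array([np.inf + 0j, 1 + 0j])

total = np.add.reduce(a)  # step 1: sum the elements -> (inf+0j)
mean = total / a.size     # step 2: divide by the count -> (inf+nanj)

print(total, mean)   # (inf+0j) (inf+nanj)
print(np.mean(a))    # (inf+nanj), the same result
```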
Edit: a word about the solution
The IEEE floating-point "infinity" is really a very primitive construct that represents indeterminate forms like `1 / 0`. These forms are not constant numbers, but possible limits. The special `inf` or `NaN` "floating-point numbers" are placeholders that notify you about the presence of an indeterminate form. They say nothing about the existence or type of the underlying limit, which you must determine from the mathematical context.
Even for real numbers, the underlying limit can depend on how you approach it. A superficial `1 / 0` form can go to positive or negative infinity. On the complex plane, things are even more complex (well). For example, you may run into branch cuts and (different kinds of) singularities. There's no universal solution that fits all.
Tl;dr: Fix the underlying problem in the face of ambiguous/incomplete/corrupted data, or prove that the final computational result can withstand such corruption (which can happen).
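For example, if the infinities in your data are just corrupted samples that should be excluded (an assumption; whether dropping them is legitimate depends entirely on your context), you can filter them out before averaging:

```python
import numpy as np

a = np.array([np.inf + 0j, 1 + 2j, 3 + 4j])

# np.isfinite is True only where both the real and imaginary parts are finite.
finite = a[np.isfinite(a)]
print(np.mean(finite))   # (2+3j)
```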