I previously messed this question up: I made it sound as though I was asking about my particular implementation, but my question is actually about the general topic. I am fairly confident that my implementation is OK, so I am rewriting the question:
WASAPI gives me information about the audio format that the audio engine accepts in shared mode, so I know the expected bit depth of the samples I write to the buffer. What I don't know is the expected representation of the signal amplitude in those samples. For example, if the audio engine expects 32-bit samples, does that mean I should represent a sine wave's amplitude as:
- `long` in range `[min, max]`
- `unsigned long` in range `[0, max]`
- `float` in range `[min, max]`
- or even something like `float` in range `[-1, 1]`?

(where `max = std::numeric_limits<type>::max()` and `min = ...::min()` in C++)
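To make two of the candidates above concrete, this is roughly how I would fill a buffer under each interpretation (the sample rate, frequency, and function names here are my own, purely for illustration; they are not part of any WASAPI API):

```cpp
#include <cstdint>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Illustrative parameters (not obtained from WASAPI).
constexpr double kPi         = 3.14159265358979323846;
constexpr double kSampleRate = 48000.0;
constexpr double kFrequency  = 440.0;

// Candidate: float samples in [-1, 1].
std::vector<float> sine_float_unit(std::size_t n) {
    std::vector<float> out(n);
    for (std::size_t i = 0; i < n; ++i)
        out[i] = static_cast<float>(
            std::sin(2.0 * kPi * kFrequency * i / kSampleRate));
    return out;
}

// Candidate: signed 32-bit integer samples spanning [min, max].
std::vector<std::int32_t> sine_int32_full(std::size_t n) {
    std::vector<std::int32_t> out(n);
    const double scale = std::numeric_limits<std::int32_t>::max();
    for (std::size_t i = 0; i < n; ++i)
        out[i] = static_cast<std::int32_t>(
            scale * std::sin(2.0 * kPi * kFrequency * i / kSampleRate));
    return out;
}
```

The two buffers describe the same waveform; the question is which encoding the engine actually expects to find in the render buffer.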
So far I've been experimenting with different values by trial and error. It seems that a sound is produced only when my samples contain the numbers max/2 or min/2 (as a `long`), alternating with other numbers. Even numbers close to these (within a few integers either way) produce the same result. When these two numbers (or numbers close to them) are not present in the samples, the result is silence no matter what I do.
It may be irrelevant, but I noticed that the bit representations of these numbers (max/2 and min/2, as `long`s) are essentially identical to the IEEE `float` bit representations of 2.0 and -2.0. It still makes no sense to me why it works like that.
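For what it's worth, the observation can be checked directly: as a 32-bit integer, max/2 is 0x3FFFFFFF, whose bits read as an IEEE-754 `float` give roughly 1.99999988 (just under 2.0), and min/2 is 0xC0000000, which is exactly -2.0f. A minimal check (I'm using `std::int32_t` rather than `long` so the width is explicit):

```cpp
#include <cstdint>
#include <cstring>
#include <limits>

// Reinterpret the bit pattern of a 32-bit integer as an IEEE-754 float.
float bits_as_float(std::int32_t bits) {
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```

Calling `bits_as_float(std::numeric_limits<std::int32_t>::max() / 2)` yields a value just below 2.0, and `bits_as_float(std::numeric_limits<std::int32_t>::min() / 2)` yields exactly -2.0f.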