So this is a rather specific question about the OpenAL implementation on iOS. It has two parts.
1.) When humans hear a sound, we can typically tell whether it is coming from behind or in front of us, because sound from behind is more muffled and reaches the inner ear differently.
Is this accounted for in the OpenAL implementation? I can't really tell from playing around with it.
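In case it helps anyone reproduce what I tried, here is a minimal C sketch of the kind of A/B test I've been doing: play the same mono clip 1 m directly in front of the listener and then 1 m directly behind, and listen (on headphones) for any change in timbre. This only shows how to position the source with the standard OpenAL calls; it is not a statement about what the iOS implementation actually does, and the buffer-loading step is omitted.

```c
#include <OpenAL/al.h>
#include <OpenAL/alc.h>
#include <unistd.h>

int main(void) {
    ALCdevice  *device  = alcOpenDevice(NULL);          /* default output device */
    ALCcontext *context = alcCreateContext(device, NULL);
    alcMakeContextCurrent(context);

    /* Listener at the origin, facing down -Z ("at" vector), +Y up --
     * the conventional OpenAL listener orientation. */
    ALfloat orientation[] = { 0.0f, 0.0f, -1.0f,   0.0f, 1.0f, 0.0f };
    alListener3f(AL_POSITION, 0.0f, 0.0f, 0.0f);
    alListenerfv(AL_ORIENTATION, orientation);

    /* A short mono test clip is assumed to have been loaded into 'buffer'
     * elsewhere via alGenBuffers/alBufferData (omitted here). OpenAL only
     * spatializes mono buffers, so a stereo clip would bypass 3D panning. */
    ALuint buffer = 0;  /* placeholder -- replace with a real buffer id */

    ALuint source;
    alGenSources(1, &source);
    alSourcei(source, AL_BUFFER, (ALint)buffer);

    /* 1 m directly in front of the listener. */
    alSource3f(source, AL_POSITION, 0.0f, 0.0f, -1.0f);
    alSourcePlay(source);
    sleep(2);

    /* Same distance, directly behind. If the implementation applied any
     * HRTF-style filtering, this should sound noticeably more muffled. */
    alSource3f(source, AL_POSITION, 0.0f, 0.0f, 1.0f);
    alSourcePlay(source);
    sleep(2);

    alDeleteSources(1, &source);
    alcMakeContextCurrent(NULL);
    alcDestroyContext(context);
    alcCloseDevice(device);
    return 0;
}
```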
2.) When humans hear a sound, there is a slight delay between when it reaches the left ear and when it reaches the right ear, depending on where the source is.
Is this accounted for in the OpenAL implementation?
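The corresponding test I tried for this part is sketched below. It reuses the device, context, listener, and source set up in the previous snippet; the function name `left_right_test` is just illustrative, not part of the OpenAL API. The idea is to place the source hard left and then hard right at equal distance: an interaural time difference, as opposed to pure level panning, would show up as a sub-millisecond timing offset between the ears, easiest to notice on headphones with a click or other sharp transient as the test buffer.

```c
/* Continuation of the sketch above: assumes the same current context,
 * listener at the origin facing -Z, and a playable mono source. */
static void left_right_test(ALuint source) {
    /* 1 m hard left. If the implementation models an interaural time
     * difference, the left ear should receive the transient slightly
     * earlier than the right. */
    alSource3f(source, AL_POSITION, -1.0f, 0.0f, 0.0f);
    alSourcePlay(source);
    sleep(2);

    /* 1 m hard right, same distance, for comparison. */
    alSource3f(source, AL_POSITION, 1.0f, 0.0f, 0.0f);
    alSourcePlay(source);
    sleep(2);
}
```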