I'm working on a homebrew game engine, and I'm currently on the audio engine implementation. This is mostly for self-educational reasons. I want to create an interface wrapper for generic audio processing so I can switch between OpenAL, XAudio2, or other backends as appropriate or needed. I also want this code to be reusable, so I'm trying to make the interface as complete as possible and have each backend implement as much of the functionality as it can. For the time being, I'm focusing on an XAudio2 implementation and may move on to an OpenAL implementation at a later date.
I've read a good deal over the past few months on 3D processing (listener/emitter), environmental effects (reverberation), exclusion, occlusion, obstruction, and direct sound. I want to be able to use any of these effects with audio playback. While I've researched the topics as best I can, I can't find any examples of how occlusion (direct and reflection signal muffling), obstruction (direct signal muffling), or exclusion (reflection signal muffling) are actually implemented. The MSDN documentation makes only passing references to occlusion and says nothing about implementation. The best I've found is a generic "use a low-pass filter", which doesn't help me much.
So my question is this: using XAudio2, how would one implement reflection signal muffling (exclusion), direct signal muffling (obstruction), or both simultaneously (occlusion)? What would the audio graph look like, and how would these relate to the reverberation environmental effects?
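The closest I can get to something concrete is XAudio2's built-in per-voice state-variable filter. As a reference point, here is a minimal sketch of muffling everything a voice outputs with it; the 0.25f cutoff is an arbitrary "heavily muffled" guess on my part, not a derived value:

    // Minimal sketch: muffling a single voice with XAudio2's built-in
    // low-pass filter. Assumes an initialized IXAudio2* and a source format;
    // the 0.25f cutoff is an arbitrary guess, not a derived value.
    #include <xaudio2.h>

    HRESULT CreateMuffledVoice(IXAudio2* pXAudio2, const WAVEFORMATEX* pFormat,
                               IXAudio2SourceVoice** ppVoice)
    {
        // The voice must opt in to filtering when it is created.
        HRESULT hr = pXAudio2->CreateSourceVoice(ppVoice, pFormat,
                                                 XAUDIO2_VOICE_USEFILTER);
        if (FAILED(hr)) return hr;

        XAUDIO2_FILTER_PARAMETERS muffled = {};
        muffled.Type      = LowPassFilter;
        muffled.Frequency = 0.25f; // normalized radian cutoff, 0..XAUDIO2_MAX_FILTER_FREQUENCY
        muffled.OneOverQ  = 1.0f;
        return (*ppVoice)->SetFilterParameters(&muffled);
    }

But this filters the whole voice, so at best it models full occlusion; it gives me no way to treat the direct and reflected signals differently, which is what the definitions above seem to require.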
Edit 2013-03-26:
On further thinking about the graph, I realized that I may not be looking at the graph from the correct perspective.
Should the graph appear to be: Source → Effects (Submix) → Mastering
-or-
Should the graph appear generically as follows:
             ↗ Direct      → Effects ↘
    Source →                           → Mastering
             ↘ Reflections → Effects ↗
The second graph would split the signal so that exclusion and obstruction could be calculated separately; part of my confusion has been how the two paths would be processed independently. My guess at the XAudio2 plumbing for it is sketched below.
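If that second graph is right, I imagine the plumbing would look something like this; the voice layout is my own invention, the reverb APO is the stock one from XAudio2CreateReverb, and each send is created with XAUDIO2_SEND_USEFILTER so the two paths can be filtered independently:

    // Sketch of the second graph in XAudio2 terms; this is my guess at the
    // plumbing, not a known-good recipe. The source sends to two submixes,
    // and per-send low-pass filters stand in for obstruction (direct path)
    // and exclusion (reflections path); occlusion would set both.
    #include <xaudio2.h>
    #include <xaudio2fx.h>

    HRESULT BuildSplitGraph(IXAudio2* pXAudio2, const WAVEFORMATEX* pFormat,
                            IXAudio2SourceVoice** ppSource,
                            IXAudio2SubmixVoice** ppDirect,
                            IXAudio2SubmixVoice** ppReflections)
    {
        // Dry submix for the direct path (stereo, 48 kHz picked arbitrarily).
        HRESULT hr = pXAudio2->CreateSubmixVoice(ppDirect, 2, 48000);
        if (FAILED(hr)) return hr;

        // Wet submix for the reflections path, carrying the stock reverb APO.
        IUnknown* pReverb = NULL;
        hr = XAudio2CreateReverb(&pReverb);
        if (FAILED(hr)) return hr;
        XAUDIO2_EFFECT_DESCRIPTOR fx    = { pReverb, TRUE, 2 };
        XAUDIO2_EFFECT_CHAIN      chain = { 1, &fx };
        hr = pXAudio2->CreateSubmixVoice(ppReflections, 2, 48000, 0, 0,
                                         NULL, &chain);
        pReverb->Release(); // the submix holds its own reference
        if (FAILED(hr)) return hr;

        // Route the source to both submixes, with a filter enabled per send.
        XAUDIO2_SEND_DESCRIPTOR sends[2] = {
            { XAUDIO2_SEND_USEFILTER, *ppDirect      },
            { XAUDIO2_SEND_USEFILTER, *ppReflections },
        };
        XAUDIO2_VOICE_SENDS sendList = { 2, sends };
        return pXAudio2->CreateSourceVoice(ppSource, pFormat, 0,
                                           XAUDIO2_DEFAULT_FREQ_RATIO,
                                           NULL, &sendList);
    }

    // Obstruction would then filter only the direct send, exclusion only the
    // reflections send, and occlusion both at once.
    void SetSendMuffling(IXAudio2SourceVoice* pSource, IXAudio2Voice* pPath,
                         float cutoff) // 0..1 normalized radian frequency
    {
        XAUDIO2_FILTER_PARAMETERS lpf = { LowPassFilter, cutoff, 1.0f };
        pSource->SetOutputFilterParameters(pPath, &lpf);
    }

That would at least make "direct versus reflected" a property of the graph itself rather than something buried in a single effect chain.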
I would think, then, that the reverb settings from the 3D audio DSP settings structure would be applied to the reflections path; that Doppler would be applied either to just the direct path or to both the direct and reflections paths; and that the environmental reverb effects would affect the reflections path only. Is this getting close to the correct audio graph model?
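For what it's worth, here is how I currently read the X3DAudioCalculate output mapping onto that graph. This is again my interpretation, not a verified recipe; it reuses the voices from the sketch above, assumes a mono emitter with stereo submixes, and borrows the 2.0f * sinf(...) cutoff conversion from the MSDN X3DAudio sample:

    // Sketch: my reading of how X3DAUDIO_DSP_SETTINGS from X3DAudioCalculate
    // would map onto the split graph above. Assumes a mono emitter and the
    // stereo direct/reflections submixes from the previous sketch.
    #include <xaudio2.h>
    #include <x3daudio.h>
    #include <math.h>

    void Apply3D(IXAudio2SourceVoice* pSource,
                 IXAudio2SubmixVoice* pDirect,
                 IXAudio2SubmixVoice* pReflections,
                 const X3DAUDIO_DSP_SETTINGS& dsp)
    {
        // Positional panning drives the direct path.
        // (dsp.DstChannelCount must match the direct submix's channel count.)
        pSource->SetOutputMatrix(pDirect, dsp.SrcChannelCount,
                                 dsp.DstChannelCount, dsp.pMatrixCoefficients);

        // Doppler shifts the source itself, so both paths inherit it.
        pSource->SetFrequencyRatio(dsp.DopplerFactor);

        // Per-path low-pass coefficients: direct vs. reverb, the same split
        // that obstruction and exclusion would also drive.
        XAUDIO2_FILTER_PARAMETERS direct = { LowPassFilter,
            2.0f * sinf(X3DAUDIO_PI / 6.0f * dsp.LPFDirectCoefficient), 1.0f };
        pSource->SetOutputFilterParameters(pDirect, &direct);

        XAUDIO2_FILTER_PARAMETERS reverb = { LowPassFilter,
            2.0f * sinf(X3DAUDIO_PI / 6.0f * dsp.LPFReverbCoefficient), 1.0f };
        pSource->SetOutputFilterParameters(pReflections, &reverb);

        // The reverb send level feeds only the reflections path.
        float wet[2] = { dsp.ReverbLevel, dsp.ReverbLevel };
        pSource->SetOutputMatrix(pReflections, 1, 2, wet);
    }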