
I'm working on a homebrew game engine, mostly for self-education, and I'm currently implementing the audio engine. I want to create an interface wrapper for generic audio processing so I can switch between OpenAL, XAudio2, or other platforms as needed. I also want this code to be reusable, so I'm trying to make it as complete as possible and have the various systems implement as much functionality as possible. For the time being I'm focusing on an XAudio2 implementation, and may move on to an OpenAL implementation at a later date.

I've read a good deal over the past few months on 3D processing (listener/emitter), environmental effects (reverberation), exclusion, occlusion, obstruction, and direct sound. I want to be able to use any of these effects with audio playback. While I've researched the topics as best I can, I can't find any examples of how occlusion (direct and reflection signal muffling), obstruction (direct signal muffling), or exclusion (reflection signal muffling) are actually implemented. The MSDN documentation makes only passing references to occlusion, but says nothing directly about implementation. The best I've found is a generic "use a low-pass filter", which doesn't help me much.

So my question is this: using XAudio2, how would one implement audio reflection signal muffling (exclusion) and audio direct signal muffling (obstruction) or both simultaneously (occlusion)? What would the audio graph look like, and how would these relate to reverberation environmental effects?

Edit 2013-03-26:
On further thinking about the graph, I realized that I may not be looking at the graph from the correct perspective.
Should the graph appear to be: Source → Effects (Submix) → Mastering
-or-
Should the graph appear generically as follows:

       ↗→   Direct   → Effects  ↘
Source                            →Mastering
       ↘→ Reflections → Effects ↗

The second graph would split the graph such that exclusion and obstruction could be calculated separately; part of my confusion has been how they would be processed independently.
I would think, then, that the reverb settings from the 3D audio DSP structure would be applied to the reflections path; that Doppler would be applied to either just the direct path or to both the direct and reflections paths; and that the reverb environmental effects would affect the reflections path only. Is this getting close to the correct audio graph model?

Erik Frantz

1 Answer


You want your graph to look something along the lines of:

Input Data ---> Lowpass Filter ---> Output

You adjust the low-pass filter as the source becomes more obstructed. You can also use the filter gain to simulate absorption. The filter settings are best exposed in a way that they can be adjusted by the sound designer.

This article covers sound propagation in more detail: http://engineroom.ubi.com/sound-propagation-and-diffraction-simulation/

When this is then passed along the graph for environmental effects such as reverb, you just want those effects further down the graph:

Input ---> Low pass filter ---> Output ---> Reverb ----> Master Out

This way the reverberated sound will match the occluded sound (otherwise it will sound odd having the reverb mismatched to the direct signal).

Using a low pass filter sounds vague and incomplete, but there is not actually much more to the effect than filtering the high frequencies and adjusting the gain. For more advanced environmental modelling you want to research something like "Precomputed Wave Simulation for Real-Time Sound Propagation of Dynamic Sources in Complex Scenes" (I'm unable to link directly as I don't have enough rep yet!) but it may well be beyond the scope of what you are trying to achieve.

Mike Jones
  • Thanks for the answer. After reading, it doesn't quite seem to fit the implementation aspect of my question. I've edited my question with some thoughts to the audio graph that I may have been confused about. Essentially, just dumping a low pass filter would appear to affect both the direct and the reflections, which I want to know how to separately apply. – Erik Frantz Mar 26 '13 at 17:57
  • The lowpass filter is approximating the sum of both the direct and indirect signals. If you want to separate them I would do that at the input stage. The indirect signal passes through the lowpass filter, etc as before. The direct signal splits off at input and would feed into a gain (so that you can vary the direct/indirect balance) and then rejoin the main signal path at the Output stage (just before the reverb). You want both to feed into the reverb as the reverb is modelling where you are listening from (not where the sound has come from). – Mike Jones Mar 27 '13 at 15:49