If you use AudioTrack or SoundPool, you won't be able to synchronize playback precisely, and the drift will most likely be audible. Another problem is that your files can have different sound levels, so the softer ones will get masked by the louder ones (think of a pop track playing over a piece of soft classical music). You will probably need to normalize the levels of the input sound files.
For precise merging and normalizing at the same time, you have to mix the sound contents of the files yourself, and then play the resulting output. You will need to read the .wav files yourself as binary streams, since Android lacks Java's AudioInputStream class, which would make this easier. See this example of how to do it with basic IO classes:
https://thiscouldbebetter.wordpress.com/2011/08/14/reading-and-writing-a-wav-file-in-java/
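Here is a minimal sketch of such a reader, assuming a standard 16-bit PCM .wav with the canonical 44-byte header. The class and method names (WavReader, readWavSamples) are mine, not from the linked article, and real files may contain extra chunks that a robust reader has to skip by parsing chunk IDs and sizes, as the article shows:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class WavReader {

    // Reads the PCM payload of a 16-bit WAV file into a short[].
    // Assumes the canonical 44-byte header with the data chunk right after it.
    public static short[] readWavSamples(String path) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(path))) {
            long skipped = in.skip(44);            // skip RIFF/fmt header
            if (skipped != 44) throw new IOException("File too short: " + path);

            ByteArrayOutputStream raw = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) != -1) {
                raw.write(buffer, 0, read);
            }

            byte[] bytes = raw.toByteArray();
            short[] samples = new short[bytes.length / 2];
            // WAV stores samples little-endian
            ByteBuffer.wrap(bytes).order(ByteOrder.LITTLE_ENDIAN)
                      .asShortBuffer().get(samples);
            return samples;
        }
    }
}
```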
Now you have your sound data as an array of short integers. This answer describes how to convert the sample data from integers to floating point and how to merge two audio streams while normalizing their sound level:
https://stackoverflow.com/a/32023546/4477684
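A rough sketch of that idea, assuming 16-bit samples: convert each sample to a float in [-1, 1], sum the two streams, and scale the result back down only if it clips. The linked answer may differ in details; the Mixer/mix names are mine:

```java
public class Mixer {

    // Mixes two 16-bit sample arrays into one, working in floating point.
    public static short[] mix(short[] a, short[] b) {
        int length = Math.max(a.length, b.length);
        float[] mixed = new float[length];

        // Convert to floats in [-1, 1] and sum; the shorter input is zero-padded.
        float peak = 0f;
        for (int i = 0; i < length; i++) {
            float sa = i < a.length ? a[i] / 32768f : 0f;
            float sb = i < b.length ? b[i] / 32768f : 0f;
            mixed[i] = sa + sb;
            peak = Math.max(peak, Math.abs(mixed[i]));
        }

        // Normalize only when the sum exceeds full scale,
        // so quiet mixes are not boosted unnecessarily.
        float scale = peak > 1f ? 1f / peak : 1f;

        short[] out = new short[length];
        for (int i = 0; i < length; i++) {
            out[i] = (short) Math.round(mixed[i] * scale * 32767f);
        }
        return out;
    }
}
```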
In order to merge more than two sound inputs, start by merging two, then merge the result with the third input, and so on, as shown in the sketch below.
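A short sketch of that fold, reusing the hypothetical WavReader and Mixer helpers from above:

```java
import java.io.IOException;

public class MixAll {

    // Folds any number of .wav inputs into a single track by mixing pairwise.
    public static short[] mixAll(String... paths) throws IOException {
        short[] result = WavReader.readWavSamples(paths[0]);
        for (int i = 1; i < paths.length; i++) {
            result = Mixer.mix(result, WavReader.readWavSamples(paths[i]));
        }
        return result;
    }
}
```

The resulting short[] can then be written to an AudioTrack (for example one created in static mode) to play the mixed output.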