
I've written this small signal-generating method. My goal is to generate a beep with a slight time delay between the two channels (left and right), or a slight difference in gain between the channels. Currently I create the delay by filling the buffer with zeros for one channel and sample values for the other, and further down swapping that behavior between the channels (if you have any tips or ideas on how to do this better, they would be appreciated). The next stage is doing something similar with the gain. I have seen that Java provides built-in gain control via FloatControl:

    FloatControl gainControl =
            (FloatControl) sdl.getControl(FloatControl.Type.MASTER_GAIN);

But I am not sure how to control the gain for each channel separately. Is there a built-in way to do this? Would I need two separate streams, one for each channel? If so, how do I play them simultaneously? I am rather new to sound programming, so if there are better ways to do this please let me know. Any help is very much appreciated.

This is my code so far:

    public static void generateTone(int delayR, int delayL, double gainRightDB, double gainLeftDB)
        throws LineUnavailableException, IOException {

    // sample rate in Hz: the number of samples per second per channel
    int sampleRate = 100000;

    // how much to add to each side:
    double gainLeft = 100;   // Math.pow(10.0, gainLeftDB / 20.0);
    double gainRight = 100;  // Math.pow(10.0, gainRightDB / 20.0);

    // click duration in seconds
    double duration = 0.08;
    double durationInSamples = Math.ceil(duration * sampleRate);

    // single delay window duration = 225 us
    double baseDelay = 0.000225;
    double samplesPerDelay = Math.ceil(baseDelay * sampleRate);

    AudioFormat af;

    byte[] buf = new byte[sampleRate * 4];                  // one second of audio in total
    af = new AudioFormat(sampleRate, 16, 2, true, true);    // sampleRate Hz, 16-bit, 2 channels, signed, big-endian


    SourceDataLine sdl = AudioSystem.getSourceDataLine(af);

    sdl.open(af);

    sdl.start();

    // only one should be delayed at a time
    int delayRight = delayR;
    int delayLeft = delayL;

    int freq = 1000;

    /*
     * NOTE:
     * The buffer holds data in groups of 4: every 4 bytes represent a single frame. The first 2 bytes
     * are for the left channel, the other two are for the right. We take 2 bytes each because the
     * sample size is 16 bits.
     */
    for(int i = 0; i < sampleRate * 4; i++){
        double time = ((double)i/((double)sampleRate));

        // Left side:
        if (i >= delayLeft * samplesPerDelay * 4                // when the left side plays 
                && i % 4 < 2                                    // access first two bytes in sample
                && i <= (delayLeft * 4 * samplesPerDelay)
                + (4 * durationInSamples))                      // make sure to stop after your delay window

            buf[i] = (byte) ((1+gainLeft) * Math.sin(2*Math.PI*(freq)*time));                   // sound in left ear
        //Right side:
        else if (i >= delayRight * samplesPerDelay * 4          // time for right side
                && i % 4 >= 2                                   // use second 2 bytes
                && i <= (delayRight * 4 * samplesPerDelay)
                + (4 * durationInSamples))                      // stop after your delay window


            buf[i] = (byte) ((1+gainRight) * Math.sin(2*Math.PI*(freq)*time));                  // sound in right ear

    }

    for (byte b : buf)
        System.out.print(b + " ");
    System.out.println();

    sdl.write(buf,0,buf.length);
    sdl.drain();
    sdl.stop();


    sdl.close();
}
TheFooBarWay
  • *"..not sure how to control the gain for each channel separately."* [`FloatControl.Type.BALANCE`](http://docs.oracle.com/javase/8/docs/api/javax/sound/sampled/FloatControl.Type.html#BALANCE) .. – Andrew Thompson Apr 27 '16 at 02:59
  • For better help sooner, post a [MCVE] or [Short, Self Contained, Correct Example](http://www.sscce.org/). – Andrew Thompson Apr 27 '16 at 03:00
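
A minimal sketch of the BALANCE control suggested in the first comment above; whether a particular mixer actually exposes this control varies, so the support check matters (the class name here is just for illustration):

    import javax.sound.sampled.*;

    public class BalanceCheck {
        public static void main(String[] args) throws LineUnavailableException {
            AudioFormat af = new AudioFormat(44100, 16, 2, true, true);
            SourceDataLine sdl = AudioSystem.getSourceDataLine(af);
            sdl.open(af);
            // Not every mixer exposes BALANCE (or PAN); check before casting.
            if (sdl.isControlSupported(FloatControl.Type.BALANCE)) {
                FloatControl balance = (FloatControl) sdl.getControl(FloatControl.Type.BALANCE);
                balance.setValue(-1.0f);   // -1.0 = full left, 0.0 = center, +1.0 = full right
            } else {
                System.out.println("BALANCE not supported; scale the samples per channel instead.");
            }
            sdl.start();
            // ... write audio, then drain/stop/close as in the question's code
            sdl.close();
        }
    }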

1 Answer


How far apart did you want to have your beeps? I wrote a program that made sine beeps sound up to a couple hundred frames (at 44100 fps) apart, and posted it with source code here which you are welcome to inspect/copy/rewrite.

At such low levels of separation, the sound remains perceptually fused, but can start to move toward one ear or the other. I wrote this because I wanted to compare volume panning with delay-based panning. In order to be able to flexibly test multiple files, the code is slightly more modular than what you have started with. I'm not going to claim what I wrote is any better, though.

One class takes a mono PCM array (floats in the range -1 to 1) and converts it to a stereo array with the desired frame delay between the channels. That same class can also split the mono data into a stereo file where the only difference between the channels is volume, and has a third option that lets you use a combination of delay and volume differences when you turn the mono data into stereo.

Mono file: F1, F2, F3, ...
Stereo file: F1L, F1R, F2L, F2R, F3L, F3R, ...

but if you add delay, say 2 frames to the right:

Stereo file: F1L, 0, F2L, 0, F3L, F1R, F4L, F2R, ...

Where F is a normalized float (between -1 and 1) representing an audio wave.
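
Roughly, the conversion looks like the sketch below (names and parameters are just for illustration, not the exact code from the linked program); it delays the right channel by a whole number of frames and applies a per-channel volume factor:

    // Sketch: interleave a mono signal into a stereo array with a whole-frame delay
    // on the right channel and independent volume factors (0..1) per channel.
    public static float[] monoToStereo(float[] mono, int delayRightFrames,
            float volLeft, float volRight) {
        int frames = mono.length + delayRightFrames;    // leave room for the delayed tail
        float[] stereo = new float[frames * 2];         // interleaved: L, R, L, R, ...
        for (int i = 0; i < mono.length; i++) {
            stereo[i * 2] = mono[i] * volLeft;                             // left, undelayed
            stereo[(i + delayRightFrames) * 2 + 1] = mono[i] * volRight;   // right, delayed
        }
        return stereo;
    }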

Making the first mono array of a beep is just a matter of using a sine function pretty much as you do. You might 'round off the edges' by ramping the volume over the course of some frames to minimize the clicks that come from the discontinuities of suddenly starting or stopping.
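
For example, a mono beep with a short linear ramp at each end (the frequency, length, and ramp size below are arbitrary) could be built something like this:

    // Sketch: a normalized mono sine beep with linear fade-in/fade-out to avoid clicks.
    public static float[] makeBeep(double freqHz, int frames, int rampFrames, double sampleRate) {
        float[] mono = new float[frames];
        for (int i = 0; i < frames; i++) {
            double env = 1.0;
            if (i < rampFrames) env = i / (double) rampFrames;                          // fade in
            if (i >= frames - rampFrames) env = (frames - 1 - i) / (double) rampFrames; // fade out
            mono[i] = (float) (env * Math.sin(2 * Math.PI * freqHz * i / sampleRate));
        }
        return mono;
    }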

Another class was written whose sole purpose is to output stereo float arrays via a SourceDataLine. Volume is handled by multiplying the audio output by a factor that ranges from 0 to 1. The normalized values are multiplied by 32767 to convert them to signed shorts, and the shorts are then broken into bytes for the format that I use (16-bit, 44100 fps, stereo, little-endian).
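
That conversion step, condensed into a sketch (assuming the 16-bit, 44100 fps, stereo, little-endian format described above; uses javax.sound.sampled):

    // Sketch: convert an interleaved, normalized stereo float array to 16-bit
    // little-endian PCM and play it on a SourceDataLine. volume ranges 0..1.
    public static void playStereo(float[] stereo, float volume) throws LineUnavailableException {
        AudioFormat af = new AudioFormat(44100, 16, 2, true, false);   // little-endian
        SourceDataLine sdl = AudioSystem.getSourceDataLine(af);
        sdl.open(af);
        sdl.start();
        byte[] buf = new byte[stereo.length * 2];                      // 2 bytes per sample value
        for (int i = 0; i < stereo.length; i++) {
            float v = Math.max(-1f, Math.min(1f, stereo[i] * volume)); // clamp before scaling
            short s = (short) (v * 32767);
            buf[i * 2] = (byte) (s & 0xFF);            // low byte first (little-endian)
            buf[i * 2 + 1] = (byte) ((s >> 8) & 0xFF); // high byte
        }
        sdl.write(buf, 0, buf.length);
        sdl.drain();
        sdl.stop();
        sdl.close();
    }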

Having an array-playing audio class is kind of neat. The arrays for it are a lot like Clips, but you have direct access to the data. With this class, you can build and reuse many sound arrays. I think I have some code included that loads a wav file into this DIY clip, as well.

There is more discussion of this code on this thread at Java-Gaming.org.

I eventually used some of what I learned here to make a simplified real-time 3D sound system. The "best" way to set up something like this, though, depends on your goals. For my 3D, for example, I wrote a delay tool that allows separate reads from stereo left and right, and the audio mixer & playback is more involved than the simple array-to-SourceDataLine player.

Phil Freihofner
  • Thank you very much for the input, your program looks like it can be very useful. I will take a look into it. Originally the distance between the sounds was supposed to be very small, around 40-100 uSec. However at this point I'm willing to just go as low as the average sound card will let me. As a small follow up question, after searching some more I discovered that FloatControl also supports PAN and BALANCE actions. Are these any different from what you have done here? – TheFooBarWay Apr 27 '16 at 12:40
  • I think the limits of a sound card can be thought of as the highest frame rate it will support. A standard sound card can support 44100 fps, which means 1 frame is about 22.7 microseconds. It might be possible to go smaller using linear interpolation, e.g., frame 1 R gets 90% frame 1 L and 10% frame 2 L. I've not found the PAN and BALANCE controls to be consistently reliable or as responsive as working with the individual frames. When they work, they are usually tied to operating on a full buffer of data, not the individual frame level, so not much granularity. – Phil Freihofner Apr 27 '16 at 19:27
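
A small sketch of the interpolation idea from the last comment (the method name is hypothetical); it approximates a sub-frame delay by blending each frame with the previous one:

    // Sketch: delay a mono signal by a fractional number of frames using linear
    // interpolation, e.g. a 0.1-frame delay blends 90% of the current frame with
    // 10% of the previous one.
    public static float[] fractionalDelay(float[] mono, double delayFrames) {
        int whole = (int) Math.floor(delayFrames);
        double frac = delayFrames - whole;
        float[] out = new float[mono.length];
        for (int i = 0; i < out.length; i++) {
            int j = i - whole;                                            // integer part of the delay
            float cur = (j >= 0 && j < mono.length) ? mono[j] : 0f;
            float prev = (j - 1 >= 0 && j - 1 < mono.length) ? mono[j - 1] : 0f;
            out[i] = (float) ((1.0 - frac) * cur + frac * prev);          // blend toward the earlier frame
        }
        return out;
    }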