
I have a working soft synth that outputs and plays samples correctly, but with huge latency, about one second. My code is based on an article found here: http://www.drdobbs.com/jvm/creating-music-components-in-java/229700113?pgno=2

Did I overlook something? The generation of the samples isn't the problem, that happens quickly and is simple.

I have tried changing the buffer size to several different values without any success. I am currently testing on an OSX machine, could this be the problem?

FYI: done never becomes true. When it's time for silence I simply feed samples of 0 to the buffer.

import javax.sound.sampled.*;

public class Player extends Thread {
    public static final int SAMPLE_RATE = 44100;
    public static final int BUFFER_SIZE = 2200;
    public static final int SAMPLES_PER_BUFFER = BUFFER_SIZE / 2;
    private static final int SAMPLE_SIZE = 16; // Don't change
    private static final int CHANNELS = 1;
    private static final boolean SIGNED = true;
    private static final boolean BIG_ENDIAN = true;
    private AudioFormat format;
    private DataLine.Info info;
    private SourceDataLine audioLine;
    private boolean done;
    private byte[] sampleData = new byte[BUFFER_SIZE];
    private Oscillator osc;

    public Player(Oscillator osc) {
        format = new AudioFormat(SAMPLE_RATE, SAMPLE_SIZE, CHANNELS, SIGNED, BIG_ENDIAN);
        info = new DataLine.Info(SourceDataLine.class, format);
        this.osc = osc;
    }

    public void run() {
        done = false;
        int bytesRead = 0;

        try {
            audioLine = (SourceDataLine) AudioSystem.getLine(info);
            audioLine.open(format);
            audioLine.start();

            while ((bytesRead != -1) && !done) {
                bytesRead = osc.getSamples(sampleData);

                if (bytesRead > 0) {
                    audioLine.write(sampleData, 0, bytesRead);
                }
            }
        } catch (LineUnavailableException e) {
            e.printStackTrace();
        } finally {
            if (audioLine != null) {
                audioLine.drain();
                audioLine.close();
            }
        }
    }
}
Parth Mehrotra

1 Answer


You don't appear to be setting the audio buffer size used by the audio system - instead you are using SAMPLES_PER_BUFFER to control the number of samples that are generated at once.

These are not the same thing - the operating system will request renders of whatever buffer size it uses (and on Mac OS X, samples are pulled through a callback).
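To put numbers on it: at 16-bit mono, 44.1 kHz, each frame is 2 bytes, so a one-second backlog corresponds to roughly 88,200 buffered bytes. A small helper (hypothetical names, just to make the arithmetic explicit) shows why a large default line buffer translates directly into latency:

```java
public class LatencyMath {
    // bytes per second = sampleRate * channels * (bitsPerSample / 8)
    static int bytesPerSecond(int sampleRate, int channels, int bitsPerSample) {
        return sampleRate * channels * (bitsPerSample / 8);
    }

    // latency (in ms) implied by a given number of buffered bytes
    static double latencyMillis(int bufferedBytes, int sampleRate,
                                int channels, int bitsPerSample) {
        return 1000.0 * bufferedBytes
                / bytesPerSecond(sampleRate, channels, bitsPerSample);
    }

    public static void main(String[] args) {
        // 16-bit mono at 44.1 kHz, as in the question
        System.out.println(bytesPerSecond(44100, 1, 16));       // 88200
        System.out.println(latencyMillis(88200, 44100, 1, 16)); // 1000.0
        System.out.println(latencyMillis(4410, 44100, 1, 16));  // 50.0
    }
}
```

So if the line's internal buffer defaults to something near 88 KB, you get about a second of latency no matter how small your generation chunks are.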

This code:

        while ((bytesRead != -1) && !done) {
            bytesRead = osc.getSamples(sampleData);

            if (bytesRead > 0) {
                audioLine.write(sampleData, 0, bytesRead);

will simply keep filling the line's internal buffer until write() blocks. A render request then arrives, draining the buffer, and write() unblocks again. This cycle happens at the rate dictated by the line's internal buffer size, so an oversized default buffer means a correspondingly large latency.
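One hedged fix is to request a smaller internal buffer when opening the line, via the two-argument SourceDataLine.open(AudioFormat, int). A minimal sketch (class and helper names are mine, and the 50 ms target is just an illustrative value to tune):

```java
import javax.sound.sampled.*;

public class SmallBufferPlayer {
    static final int SAMPLE_RATE = 44100;
    static final int FRAME_BYTES = 2; // 16-bit mono = 2 bytes per frame

    // Internal buffer size in bytes for a target latency in milliseconds
    static int bufferBytesFor(int millis) {
        return SAMPLE_RATE * FRAME_BYTES * millis / 1000;
    }

    public static void main(String[] args) throws LineUnavailableException {
        AudioFormat format = new AudioFormat(SAMPLE_RATE, 16, 1, true, true);
        DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
        SourceDataLine line = (SourceDataLine) AudioSystem.getLine(info);
        // Request ~50 ms of internal buffering instead of the (possibly huge) default
        line.open(format, bufferBytesFor(50));
        line.start();
        // getBufferSize() reports what the implementation actually granted;
        // it may round the request, so check it rather than assuming
        System.out.println("granted buffer: " + line.getBufferSize() + " bytes");
        line.drain();
        line.close();
    }
}
```

Note that the granted buffer is only a request: the implementation may round it, so check line.getBufferSize() after opening. Keeping it to a few tens of milliseconds of audio is a reasonable starting point before tuning for dropouts.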

marko