
My end goal is to simultaneously play a wav file and record from a mic, and display the two signals on top of each other in a plot. This does not require low latency, but it does require the two plots to be overlaid so that they correctly represent what is happening in real time. If I connect the mic line to the speaker line, the graphs should line up on top of each other reasonably well.

[figure: output on top of input plot]

Since there seems to be some amount of latency, my solution is to use the time argument passed to the callback. This should let me shift the plot to compensate for the latency. I am receiving reasonable values for the DAC time and the current time, but the ADC time is always 0.

import numpy as np
import sounddevice as sd

# data, q_out, q_in, args and mapping are defined elsewhere in the script.
def callback(indata, outdata, frames, time, status):
    # Timing information supplied by PortAudio for this callback.
    print("ADC time: ", time.inputBufferAdcTime)
    print("DAC time: ", time.outputBufferDacTime)
    print("curr time: ", time.currentTime)
    print("time diff: ", time.outputBufferDacTime - time.currentTime)
    print("###############")
    if status:
        print(status)
    remaining = len(data) - callback.index
    if remaining < frames:
        # Last chunk: output what is left, zero-pad the rest, then stop.
        outdata[:remaining] = data[callback.index:]
        outdata[remaining:] = np.zeros(
            (len(outdata) - remaining, len(args.channels)))
        raise sd.CallbackStop
    else:
        outdata[:] = data[callback.index:callback.index + frames]
    q_out.put(outdata[::args.downsample, mapping])
    q_in.put(indata[::args.downsample, mapping])

    callback.index += frames
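For clarity, a minimal sketch of how those timestamps would be used, assuming inputBufferAdcTime returned a valid value (the helper name is hypothetical, not part of the script above):

# Hypothetical helper: with valid timestamps, the input plot would be
# shifted relative to the output plot by the difference between when the
# output buffer hits the DAC and when the input buffer left the ADC.
def plot_shift_seconds(time):
    return time.outputBufferDacTime - time.inputBufferAdcTime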
Don

1 Answer


If inputBufferAdcTime is giving you 0, that's a PortAudio issue. This very likely depends on the host API you are using and probably on your hardware and drivers. You should ask on the PortAudio mailing list (http://portaudio.com/contacts.html) if you want to know more about this.

Alternatively, you can try a different host API; it may work better.
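A minimal sketch of how you might look for an alternative, using the query functions of the sounddevice module (the device index 5 below is purely illustrative):

import sounddevice as sd

# List the available host APIs and devices, then switch the default
# device to one belonging to a different host API.
print(sd.query_hostapis())
print(sd.query_devices())
sd.default.device = 5  # hypothetical index taken from the list above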

You can also try to query the latency attribute of the stream object, but it may have the same problem.
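For example (a minimal sketch; for a duplex Stream the attribute is a pair of latencies in seconds):

import sounddevice as sd

# For a duplex stream, latency is (input latency, output latency).
with sd.Stream(samplerate=44100, channels=1) as stream:
    print(stream.latency)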

Finally, if you can't get the correct latency that way, you can simply measure the latency yourself. Just connect a cable from an output of your sound card to an input, play a test signal and record the result (as you did in your code example). You should be able to determine the latency by comparing the two signals.
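A minimal sketch of such a measurement, assuming a loopback cable and an impulse as the test signal (the sample rate and signal length are illustrative):

import numpy as np
import sounddevice as sd

fs = 48000
test = np.zeros(fs, dtype='float32')  # one second of silence ...
test[0] = 1.0                         # ... with an impulse at sample 0

# Play the impulse and record the loopback input at the same time.
recorded = sd.playrec(test, samplerate=fs, channels=1, blocking=True)

# The impulse comes back delayed by the round-trip latency; the
# position of the peak in the recording gives the delay in samples.
delay = int(np.argmax(np.abs(recorded)))
print("estimated round-trip latency: {:.4f} s".format(delay / fs))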

Matthias
  • My current solution is to do what you suggested last. I have a test audio file containing a single impulse that I use to calibrate. Unfortunately, it seems that the latency can change. Even though it is a slight change, it is large enough that it will affect what I am trying to do. The latency values during calibration vary anywhere between 0.125 and 0.14 seconds. The latency attribute of the stream object gives back two values every time I look at it in the callback: 0.02575 for input and 0.0257596 for output. I'm going to take a look at your first suggestion and see where that leads. TY – Don Sep 12 '18 at 19:25
  • I found a decent solution for now. I increased the blocksize used for input/output from 1366 to 8000. This drastically increases the latency, up to 0.422 seconds, but the latency is much more consistent. This allows the calibration method to be effective. – Don Sep 12 '18 at 20:07
  • You could also try to use block sizes that are a power of two (e.g. 1024). This might give you more stable performance. – Matthias Sep 13 '18 at 08:22