I'm trying to split an audio file into a given number of NumPy blocks, hold those blocks in RAM, and play them back, just like the play_a_very_long_sound_file example. Unfortunately, my knowledge of NumPy arrays and of audio files in general is lacking. Once this code is working, I'd like to add a recording step inside the callback function. With the code I currently have, I'm getting:
ValueError: could not convert string to float: ''
which is raised in the second if block inside the callback function. I'm trying to pad the end of the outdata block with zeros there, but I'm unsure how to go about it.
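For reference, this is what I think the zero padding should look like on a plain float32 array (a minimal standalone sketch with made-up sizes, not my actual callback):

import numpy as np

blocksize = 2048                                      # made-up block length in frames
outdata = np.empty((blocksize, 1), dtype='float32')   # stand-in for the callback's outdata
data = np.linspace(-1, 1, 1500, dtype='float32')      # a short final block

outdata[:len(data), 0] = data   # copy the samples that are left
outdata[len(data):, 0] = 0      # pad the rest of the block with zeros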
Another curious thing is that I'm only getting a NumPy array back from soundfile.read(), which normally returns both a NumPy array and a samplerate. I'm assuming this is caused by me reading the file in blocks of frames.
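Here's how I currently understand the two read variants, based on the soundfile docs (just a sketch; 'myfile.wav' is a placeholder):

import soundfile as sf

# Module-level function: returns the whole file plus the samplerate.
data, samplerate = sf.read('myfile.wav', dtype='float32')

# Method on an open SoundFile object: returns only the data for the
# requested number of frames; the samplerate is an attribute of f.
with sf.SoundFile('myfile.wav') as f:
    block = f.read(frames=1024, dtype='float32')
    samplerate = f.samplerate

And here's the relevant part of my code: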
def callback(outdata, frames, time, status):
    assert frames == args.blocksize
    if status.output_underflow:
        print('Output underflow: increase blocksize?', file=sys.stderr)
        raise sd.CallbackAbort
    assert not status
    try:
        data = q.get_nowait()
    except queue.Empty:
        print('Buffer is empty: increase buffersize?', file=sys.stderr)
        raise sd.CallbackAbort
    if len(data) < len(outdata):
        outdata[:len(data), 0] = data
        outdata[len(data):, 0] = b'\x00' * (len(outdata) - len(data))  # <-- this assignment raises the ValueError
        raise sd.CallbackStop
    else:
        outdata[:, 0] = data
try:
    with sf.SoundFile(args.filename) as f:
        for _ in range(args.buffersize):
            data = f.read(frames=args.blocksize, dtype='float32')
            if data.size == 0:
                break
            q.put_nowait(data)  # Pre-fill queue
        stream = sd.OutputStream(
            samplerate=f.samplerate, blocksize=args.blocksize,
            device=args.device, channels=f.channels, dtype='float32',
            callback=callback, finished_callback=event.set)
        with stream:
            timeout = args.blocksize * args.buffersize / f.samplerate
            while data.size != 0:
                data = f.read(args.blocksize, dtype='float32')
                q.put(data, timeout=timeout)
            event.wait()  # Wait until playback is finished
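In case it matters, the snippet above relies on the same surrounding setup as the play_a_very_long_sound_file example; roughly this (a sketch from memory, so the argparse details are approximate):

import argparse
import queue
import sys
import threading

import sounddevice as sd
import soundfile as sf

parser = argparse.ArgumentParser()
parser.add_argument('filename', help='audio file to be played back')
parser.add_argument('-d', '--device', help='output device (numeric ID or substring)')
parser.add_argument('-b', '--blocksize', type=int, default=2048, help='block size in frames')
parser.add_argument('-q', '--buffersize', type=int, default=20,
                    help='number of blocks used for buffering')
args = parser.parse_args()

q = queue.Queue(maxsize=args.buffersize)   # bounded queue of pre-read blocks
event = threading.Event()                  # set by finished_callback when playback ends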