
I am making a simple program which reads a map of events from a JSON file and creates sound based on it.
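
For context, I load the timeline from the JSON file roughly like this (the file name and the flat-list layout here are just placeholders, not my real format):

import json

# Illustrative only: a flat list of event times in milliseconds
with open('events.json') as eventFile:
    timeline = json.load(eventFile)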

The way I’m doing this is by having an object which holds the soundfile.read of a WAV file, and a function which converts milliseconds to a sample position (so if my sound file is 48 kHz, then 1000 ms equates to sample 48000). I then call that function in a for loop over each element of the timeline and add numpy.zeros padding to either side of the sound, see below:

import soundfile, numpy

audio, sampleRate = soundfile.read('mySound.wav') # soundfile.read returns (data, samplerate)

timeline = [900, 1200] # Just an example timeline

array = numpy.zeros(100000) # Just a base array, in my program it finds the proper length
for action in timeline:
    samplePos = action*48 # Convert ms to sample position (48 samples per ms at 48 kHz)
    paddedFront = numpy.append(numpy.zeros(samplePos), audio) # Silence up to the event position, then the sound
    paddedBack = numpy.append(paddedFront, numpy.zeros(100000-paddedFront.size)) # Pad the end so it matches the base array's shape
    array += paddedBack # Mix it into the base array

soundfile.write('output.wav', array, 48000)
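
(The action*48 above is inlined just for the example; in my real program the ms-to-sample conversion is a helper function, roughly equivalent to this simplified sketch, where the name and signature are only illustrative:)

def msToSamples(ms, sampleRate=48000):
    # 48 kHz means 48 samples per millisecond
    return int(ms * sampleRate / 1000)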

But when doing this, the audio sounds weird. Any ideas? Thanks!
