
Audio file will not play after reducing it using AVAssetReader/AVAssetWriter

At the moment the whole function executes fine, with no errors thrown. But when I go inside the simulator's document directory via the terminal, the audio file will not play through iTunes, and QuickTime refuses to open it with the error: QuickTime Player can't open "test1.m4a".

Does anyone specialise in this area and understand why this isn't working?
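
For reference, a quick sanity check (just a sketch, using the exportURL built inside convertAudio below) is to load the exported file back as an AVAsset and see whether AVFoundation itself considers it playable:

let exported = AVAsset(url: exportURL)
// Zero duration or isPlayable == false suggests the container was never
// finalized, even though writing reported no errors.
print("duration:", CMTimeGetSeconds(exported.duration), "playable:", exported.isPlayable)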

protocol FileConverterDelegate {
  func fileConversionCompleted()
}

class WKAudioTools: NSObject {

  var delegate: FileConverterDelegate?

  var url: URL?
  var assetReader: AVAssetReader?
  var assetWriter: AVAssetWriter?

  func convertAudio() {

    let documentDirectory = try! FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)
    let exportURL = documentDirectory.appendingPathComponent(Assets.soundName1).appendingPathExtension("m4a")

    url = Bundle.main.url(forResource: Assets.soundName1, withExtension: Assets.mp3)

    guard let assetURL = url else { return }
    let asset = AVAsset(url: assetURL)

    //reader
    do {
      assetReader = try AVAssetReader(asset: asset)
    } catch let error {
      print("Error with reading >> \(error.localizedDescription)")
    }

    let assetReaderOutput = AVAssetReaderAudioMixOutput(audioTracks: asset.tracks, audioSettings: nil)
    //let assetReaderOutput = AVAssetReaderTrackOutput(track: track!, outputSettings: nil)

    guard let assetReader = assetReader else {
      print("reader is nil")
      return
    }

    if assetReader.canAdd(assetReaderOutput) == false {
      print("Can't add output to the reader ☹️")
      return
    }

    assetReader.add(assetReaderOutput)

    // writer
    do {
      assetWriter = try AVAssetWriter(outputURL: exportURL, fileType: .m4a)
    } catch let error {
      print("Error with writing >> \(error.localizedDescription)")
    }

    var channelLayout = AudioChannelLayout()

    memset(&channelLayout, 0, MemoryLayout.size(ofValue: channelLayout))
    channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo

    // use different values to affect the downsampling/compression
    let outputSettings: [String: Any] = [AVFormatIDKey: kAudioFormatMPEG4AAC,
                                         AVSampleRateKey: 44100.0,
                                         AVNumberOfChannelsKey: 2,
                                         AVEncoderBitRateKey: 128000,
                                         AVChannelLayoutKey: NSData(bytes: &channelLayout, length:  MemoryLayout.size(ofValue: channelLayout))]

    let assetWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: outputSettings)

    guard let assetWriter = assetWriter else { return }

    if assetWriter.canAdd(assetWriterInput) == false {
      print("Can't add asset writer input ☹️")
      return
    }

    assetWriter.add(assetWriterInput)
    assetWriterInput.expectsMediaDataInRealTime = false

    // MARK: - File conversion
    assetWriter.startWriting()
    assetReader.startReading()

    let audioTrack = asset.tracks[0]

    let startTime = CMTime(seconds: 0, preferredTimescale: audioTrack.naturalTimeScale)

    assetWriter.startSession(atSourceTime: startTime)

    // We need to do this on another thread, so let's set up a dispatch group...
    var convertedByteCount = 0
    let dispatchGroup = DispatchGroup()

    let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
    //... and go
    dispatchGroup.enter()
    assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
      while assetWriterInput.isReadyForMoreMediaData {
        // copyNextSampleBuffer returns nil once the reader runs out of samples
        if let nextBuffer = assetReaderOutput.copyNextSampleBuffer() {
          assetWriterInput.append(nextBuffer)
          convertedByteCount += CMSampleBufferGetTotalSampleSize(nextBuffer)
        } else {
          // done!
          assetWriterInput.markAsFinished()
          assetReader.cancelReading()
          dispatchGroup.leave()

          DispatchQueue.main.async {
            // Notify delegate that conversion is complete
            self.delegate?.fileConversionCompleted()
            print("Process complete")

            if assetWriter.status == .failed {
              print("Writing asset failed ☹️ Error:", String(describing: assetWriter.error))
            }
          }
          break
        }
      }
    }
  }
}
Joey Slomowitz
  • Can you explain the purpose of the code? I see that you're saving an mp3 as an m4a but something else is going on too, because you wouldn't need the sample buffer stuff just for that. – matt Aug 16 '18 at 00:04
  • I used this reference for my solution >> https://gist.github.com/abeldomingues/fe8fa797fd55603f2f4a – Joey Slomowitz Aug 16 '18 at 00:44
  • My understanding is that the sample buffer is useful for being able to view the progress of the compression, but I don't believe that it's required. – Joey Slomowitz Aug 16 '18 at 00:46
  • Well, if _all_ you want to do is transcode the mp3, it's overkill... – matt Aug 16 '18 at 01:12

1 Answer


You need to call finishWriting on your AVAssetWriter to get the output completely written:

assetWriter.finishWriting {
    DispatchQueue.main.async {
        // Notify delegate that conversion is complete
        self.delegate?.fileConversionCompleted()
        print("Process complete ")

        if assetWriter.status == .failed {
            print("Writing asset failed ☹️ Error: ", assetWriter.error)
        }
    }
}
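
This completion handler takes the place of the bare dispatchGroup.leave() path in the question's else branch: until finishWriting runs, the writer never finalizes the .m4a container, which is why the file was written without errors yet refused to open.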

If exportURL already exists before you start the conversion, you should remove it; otherwise the conversion will fail:

// try? rather than try!, so the first run (when no file exists yet) doesn't crash
try? FileManager.default.removeItem(at: exportURL)

As @matt points out, why do all the sample-buffer work when you could do the conversion more simply with an AVAssetExportSession? And why convert one of your own bundled assets at runtime at all, when you could ship it in the desired format to begin with?
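
For what it's worth, here is a minimal sketch of that route (the exportToM4A helper and its error stand-in are hypothetical, not part of your code; it assumes the same source and destination URLs as above):

import AVFoundation

// Hypothetical helper sketching the AVAssetExportSession approach.
// AVAssetExportPresetAppleM4A re-encodes the audio as AAC in an .m4a
// container; unlike AVAssetWriter's outputSettings it doesn't expose a
// bitrate setting, so you trade control for simplicity.
func exportToM4A(from sourceURL: URL, to exportURL: URL,
                 completion: @escaping (Error?) -> Void) {
    let asset = AVAsset(url: sourceURL)

    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetAppleM4A) else {
        // Stand-in error: the preset isn't applicable to this asset
        completion(NSError(domain: "ExportToM4A", code: -1, userInfo: nil))
        return
    }

    // The export fails if a file already exists at the destination
    try? FileManager.default.removeItem(at: exportURL)

    session.outputURL = exportURL
    session.outputFileType = .m4a
    session.exportAsynchronously {
        completion(session.error) // nil when session.status == .completed
    }
}

And if the reason for the sample-buffer loop was progress reporting, AVAssetExportSession exposes a progress property you can poll instead.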

Rhythmic Fistman
  • It worked!! Thank you so much! Unfortunately I don't really understand this technology so well and the answers I found were quite limited. Does the sample buffer make the compression happen in packets? Or is this totally not needed? All I wanted to do was reduce a 27mb sound file before I pass it to the Apple Watch. Right now, this code brings it down to 10mb. Do you think `AVAssetExportSession` would be a much better candidate for this, and do you have a working example I might be able to check out? Thanks again. – Joey Slomowitz Aug 19 '18 at 23:35