I am using AVAssetImageGenerator to create an image from the last frame of a video. This usually works fine, but every now and then `copyCGImageAtTime:` fails with the error:

NSLocalizedDescription = "Cannot Open";
NSLocalizedFailureReason = "This media cannot be used.";
NSUnderlyingError = "Error Domain=NSOSStatusErrorDomain Code=-12431";

I am verifying that the AVAsset is not nil, and I'm pulling the CMTime directly from the asset, so I do not understand why this keeps happening. It only happens when I ask for the last frame; if I use `kCMTimeZero` instead, it seems to work.

- (void)getLastFrameFromAsset:(AVAsset *)asset completionHandler:(void (^)(UIImage *image))completion
{
    NSAssert(asset, @"Tried to generate last frame from nil asset");
    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    gen.requestedTimeToleranceBefore = kCMTimeZero;
    gen.requestedTimeToleranceAfter = kCMTimeZero;
    gen.appliesPreferredTrackTransform = YES;
    CMTime time = [asset duration];
    NSError *error = nil;
    CMTime actualTime;

    CGImageRef imageRef = [gen copyCGImageAtTime:time actualTime:&actualTime error:&error];
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    NSAssert(image, @"Failed at generating image from asset's last frame");
    completion(image);
    CGImageRelease(imageRef);
}

This seems to be related, but it did not solve my problem.

Daniel Larsson

1 Answer

Nothing guarantees that your asset's video track extends all the way to `[asset duration]`. Its duration can be shorter than that of the asset as a whole. Since you set both tolerances to `kCMTimeZero`, the only possible outcome is failure.

Edit: To clarify, the issue arises when the asset's audio track is slightly longer than its video track.
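A minimal sketch of a workaround along these lines: instead of requesting the frame at `[asset duration]`, ask the video track itself where its media ends via its `timeRange`. The method name, the nil-handling, and the one-frame step back are illustrative assumptions, not code from the question:

```objectivec
#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Hypothetical variant of the question's method: request the frame at the
// end of the *video track's* timeRange instead of at [asset duration].
- (void)getLastVideoFrameFromAsset:(AVAsset *)asset
                 completionHandler:(void (^)(UIImage *image))completion
{
    AVAssetTrack *videoTrack =
        [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
    if (!videoTrack) {
        completion(nil);
        return;
    }

    AVAssetImageGenerator *gen =
        [[AVAssetImageGenerator alloc] initWithAsset:asset];
    gen.requestedTimeToleranceBefore = kCMTimeZero;
    gen.requestedTimeToleranceAfter = kCMTimeZero;
    gen.appliesPreferredTrackTransform = YES;

    // Where the video media actually ends; this can be earlier than the
    // asset's overall duration when another track (e.g. audio) runs longer.
    CMTime videoEnd = CMTimeRangeGetEnd(videoTrack.timeRange);

    // Requesting exactly at the end can still miss the last frame's
    // presentation time, so step back by roughly one frame.
    int32_t fps = (int32_t)MAX(videoTrack.nominalFrameRate, 1.0f);
    CMTime lastFrameTime = CMTimeSubtract(videoEnd, CMTimeMake(1, fps));

    NSError *error = nil;
    CMTime actualTime;
    CGImageRef imageRef = [gen copyCGImageAtTime:lastFrameTime
                                      actualTime:&actualTime
                                           error:&error];
    UIImage *image = imageRef ? [[UIImage alloc] initWithCGImage:imageRef] : nil;
    completion(image);
    CGImageRelease(imageRef);
}
```

With zero tolerances you may still want to log `error` and `actualTime` when `imageRef` comes back NULL, to see which time the generator rejected.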

Artium
  • Let's say you had a different time for audio and video and you are using `kCMTimeZero` how would you then get a snapshot at a set time? Or what is the solution in this case, if you have audio slightly longer than video? – impression7vx Aug 02 '20 at 08:11
  • @impression7vx You need to access the video AVAssetTrack of the asset. There might be more than one, so you will need to decide which (or take the first one). Then use its `timeRange` property to find out when it starts and when it ends. https://developer.apple.com/documentation/avfoundation/avasset/1387140-trackswithmediatype?language=objc – Artium Aug 02 '20 at 18:47
  • 1
    Interesting, awesome. It worked to some degree; I did something like `CMTime(seconds: audioTrackTime-videoTrackTime, preferredTimeScale: 600)` however I still got errors so I did `CMTime(seconds: audioTrackTime-videoTrackTime+300, preferredTimeScale: 600)` to give it a buffer of half a second; It works, but seems very imprecise – impression7vx Aug 03 '20 at 00:10
  • @impression7vx I am not sure I understand. Why are you subtracting audio track time from video track time? – Artium Aug 03 '20 at 22:25
  • The `videoTrackTime` would give me a duration of `CMTime(seconds: 1250, preferredTimeScale: 600)` and `audioTrackTime` would be `CMTime(seconds: 1270, preferredTimeScale: 600)`, so I would subtract the video time from the audio time to guarantee that if the audio was at the beginning, and the video was aligned to the end, then the video would start at `CMTime(seconds: 20, preferredTimeScale: 600)`. But this was still not working, so I added the .5 sec buffer. – impression7vx Aug 04 '20 at 13:54
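The subtraction discussed in the last comments should not be necessary: the track's `timeRange` already records where the video media sits inside the asset, both its start and its end. A hypothetical fragment (assuming `asset` is the AVAsset in question; variable names are illustrative):

```objectivec
AVAssetTrack *videoTrack =
    [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CMTimeRange videoRange = videoTrack.timeRange;

// Where the video media begins inside the asset; this may be later than
// zero if, for example, the audio starts first.
CMTime videoStart = videoRange.start;

// Where it ends (start + duration); this can be earlier than
// [asset duration] when the audio track runs longer.
CMTime videoEnd = CMTimeRangeGetEnd(videoRange);
```

Requesting any time between `videoStart` and `videoEnd` keeps the generator inside the video media, without guessing offsets from the two tracks' durations.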