You could switch from using ffprobe to MediaInfo [1]. MediaInfo can give you the duration and file size of an input file. From there you can apply the calculations from the article and use MediaConvert to create your sprite images. After the job completes, you can use the COMPLETE CloudWatch Event to trigger a postprocessing workflow that creates the sprite manifest. By default, the COMPLETE event's detail section includes the output path of the last JPEG frame capture from your MediaConvert output [2].
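If you wire that COMPLETE event to a Lambda function, the handler can read the output paths straight from the event detail. A minimal sketch (the handler and the manifest step are hypothetical; the detail fields such as outputGroupDetails and outputFilePaths come from the MediaConvert job state change event):

def handler(event, context):
    # Only act on jobs that finished successfully.
    detail = event.get('detail', {})
    if detail.get('status') != 'COMPLETE':
        return

    # Each output group lists the files it wrote; for a File group with
    # frame captures this includes the last JPEG mentioned in [2].
    for group in detail.get('outputGroupDetails', []):
        for output in group.get('outputDetails', []):
            for path in output.get('outputFilePaths', []):
                print(path)  # e.g. s3://bucket/thumbs/test.0000123.jpg (example path)

    # build_sprite_manifest(...)  # hypothetical step that writes the sprite manifest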
As a pre-processing step, you can gather the duration of the file as well as its size. If you only need the file size, you can use the S3 SDK to get the object's content length (a HEAD request against the object) and calculate from there:
import boto3

s3 = boto3.client('s3')
# HEAD the object; ContentLength is the object size in bytes
response = s3.head_object(Bucket='bucket', Key='keyname')
size = response['ContentLength']
print(size)
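For the duration, the MediaInfo Lambda approach in [1] runs the MediaInfo binary against a presigned S3 URL; one way to read the same field in Python is the pymediainfo wrapper. A sketch, assuming the input has already been downloaded locally (the file path is just an example):

from pymediainfo import MediaInfo

# Parse the downloaded input file with MediaInfo.
media_info = MediaInfo.parse('/tmp/test.mp4')
for track in media_info.tracks:
    if track.track_type == 'General':
        print(track.duration)  # container duration as reported by MediaInfo, in milliseconds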
You can also use parameter overrides with job templates. For example, you can create a job that references a template and specify settings that override the template's settings when you call the CreateJob API.
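A sketch of what that looks like with boto3 (the template name my-hls-template is made up; the Settings block only carries the values you are supplying or overriding, everything else comes from the template):

import boto3

# MediaConvert uses an account-specific endpoint; look it up first.
mc = boto3.client('mediaconvert', region_name='us-west-2')
endpoint = mc.describe_endpoints()['Endpoints'][0]['Url']
mc = boto3.client('mediaconvert', region_name='us-west-2', endpoint_url=endpoint)

response = mc.create_job(
    JobTemplate='my-hls-template',  # hypothetical template name
    Role='arn:aws:iam::111122223333:role/EMFRoleSPNames',
    Settings={
        'Inputs': [
            {'FileInput': 's3://myawsbucket/input/test.mp4'}
        ]
    }
)
print(response['Job']['Id'])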
The following example uses presets; however, the same concept applies to job templates.
Using the following HLS System Presets
System-Avc_16x9_720p_29_97fps_3500kbps
System-Avc_16x9_360p_29_97fps_1200kbps
System-Avc_16x9_270p_14_99fps_400kbps
The JSON payload below changes the audio selector source name for Output 2 (System-Avc_16x9_360p_29_97fps_1200kbps) and Output 3 (System-Avc_16x9_270p_14_99fps_400kbps):
{
  "Queue": "arn:aws:mediaconvert:us-west-2:111122223333:queues/Default",
  "UserMetadata": {},
  "Role": "arn:aws:iam::111122223333:role/EMFRoleSPNames",
  "Settings": {
    "OutputGroups": [
      {
        "Name": "Apple HLS",
        "Outputs": [
          {
            "NameModifier": "_1",
            "Preset": "System-Avc_16x9_720p_29_97fps_3500kbps"
          },
          {
            "NameModifier": "_2",
            "Preset": "System-Avc_16x9_360p_29_97fps_1200kbps",
            "AudioDescriptions": [
              {
                "AudioSourceName": "Audio Selector 2"
              }
            ]
          },
          {
            "NameModifier": "_3",
            "Preset": "System-Avc_16x9_270p_14_99fps_400kbps",
            "AudioDescriptions": [
              {
                "AudioSourceName": "Audio Selector 2"
              }
            ]
          }
        ],
        "OutputGroupSettings": {
          "Type": "HLS_GROUP_SETTINGS",
          "HlsGroupSettings": {
            "ManifestDurationFormat": "INTEGER",
            "SegmentLength": 10,
            "TimedMetadataId3Period": 10,
            "CaptionLanguageSetting": "OMIT",
            "Destination": "s3://myawsbucket/out/master",
            "TimedMetadataId3Frame": "PRIV",
            "CodecSpecification": "RFC_4281",
            "OutputSelection": "MANIFESTS_AND_SEGMENTS",
            "ProgramDateTimePeriod": 600,
            "MinSegmentLength": 0,
            "DirectoryStructure": "SINGLE_DIRECTORY",
            "ProgramDateTime": "EXCLUDE",
            "SegmentControl": "SEGMENTED_FILES",
            "ManifestCompression": "NONE",
            "ClientCache": "ENABLED",
            "StreamInfResolution": "INCLUDE"
          }
        }
      }
    ],
    "Inputs": [
      {
        "AudioSelectors": {
          "Audio Selector 1": {
            "Offset": 0,
            "DefaultSelection": "DEFAULT",
            "ProgramSelection": 1,
            "SelectorType": "TRACK",
            "Tracks": [
              1
            ]
          },
          "Audio Selector 2": {
            "Offset": 0,
            "DefaultSelection": "NOT_DEFAULT",
            "ProgramSelection": 1,
            "SelectorType": "TRACK",
            "Tracks": [
              2
            ]
          }
        },
        "VideoSelector": {
          "ColorSpace": "FOLLOW"
        },
        "FilterEnable": "AUTO",
        "PsiControl": "USE_PSI",
        "FilterStrength": 0,
        "DeblockFilter": "DISABLED",
        "DenoiseFilter": "DISABLED",
        "TimecodeSource": "EMBEDDED",
        "FileInput": "s3://myawsbucket/input/test.mp4"
      }
    ],
    "TimecodeConfig": {
      "Source": "EMBEDDED"
    }
  }
}
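You can submit a payload like this directly to CreateJob. A sketch, assuming it has been saved locally as job.json (the top-level keys such as Queue, Role, and Settings map straight onto the CreateJob parameters):

import json
import boto3

# Resolve the account-specific MediaConvert endpoint, as in the earlier sketch.
mc = boto3.client('mediaconvert', region_name='us-west-2')
endpoint = mc.describe_endpoints()['Endpoints'][0]['Url']
mc = boto3.client('mediaconvert', region_name='us-west-2', endpoint_url=endpoint)

with open('job.json') as f:
    job = json.load(f)

response = mc.create_job(**job)
print(response['Job']['Id'])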
== Resources ==
[1] https://aws.amazon.com/blogs/media/running-mediainfo-as-an-aws-lambda-function/
[2] https://docs.aws.amazon.com/mediaconvert/latest/ug/file-group-with-frame-capture-output.html