
I have a PHP app that uploads videos (from small ones, around 1 MB, to big ones, around 400 MB).

Everything works fine, except for some particular files.

These files always produce an MD5 checksum error:

AWS Error Code: BadDigest, Status Code: 400, AWS Request ID: 89BBC1D79A4492A7, AWS Error Type: client, AWS Error Message: The Content-MD5 you specified did not match what we received.

I verified the MD5 and it really doesn't match, but I have no idea why!

If it were random corruption, the MD5 returned by S3 would vary between attempts, but it's always the same.

On my local machine (a Mac) and on the server (Ubuntu), the MD5 is:

9131ee88a194b555d0a3519f67294f31

In Amazon, it is:

8e6789baf9c5d434003a5443d30143fa

The upload is made with this excerpt of code:

    try
    {
        $start = (float) array_sum(explode(' ',microtime()));

        $save_path = "/tmp/$video[quality]/$video[video_id].mp4";
        $db_path = "$video[channel]/$video[quality]/$video[video_id].mp4";
        $bytes = number_format(filesize($save_path) / 1048576, 2) . ' MB';

        System_Daemon::info(($i + 1) . "/$num_of_videos - started upload of $video[channel] - $video[video_id] with $bytes");

        $s3 = Aws\S3\S3Client::factory(array(
            'key'    => 'MY KEY',
            'secret' => 'MY SECRET',
            'region' => Region::US_EAST_1
        ));

        $results = $s3->putObject(array(
            'Bucket' => 'media.tubelivery.com',
            'Key'    => $db_path,
            'Body'   => fopen($save_path, 'r'),
            'ACL'    => Aws\S3\Enum\CannedAcl::PUBLIC_READ
        ));

        //Delete the original file
        unlink($save_path);
        clearstatcache(true, $save_path); // clearstatcache() takes (bool $clear_realpath_cache, string $filename)

        //Change the video state to 0
        update_video_state_to_uploaded_to_S3($video['id']);

        $end = (float) array_sum(explode(' ',microtime()));
        $time = sprintf("%.4f", ($end - $start)) . " sec";

        System_Daemon::info("uploaded video " . ($i + 1) . " to $db_path in $time");
    }
    catch (Aws\S3\Exception\S3Exception $e)
    {
        System_Daemon::err("ERROR uploading $video[video_id].mp4 to S3");
        // $results is only set if putObject() returned before throwing,
        // so this may log a stale response from a previous iteration
        if (isset($results))
        {
            foreach ($results as $key => $result)
            {
                System_Daemon::err("$key => $result");
            }
        }
        $save_path = "/tmp/$video[quality]/$video[video_id].mp4";
        clearstatcache(true, $save_path);
        System_Daemon::err("ERROR: $e");
    }

This is the exact log:

[Mar 13 02:35:53]      err: ERROR uploading 4XwKKMlGibo.mp4 to S3 [l:145]
[Mar 13 02:35:53]      err: Expiration =>  [l:148]
[Mar 13 02:35:53]      err: ServerSideEncryption =>  [l:148]
[Mar 13 02:35:53]      err: ETag => "7f65e3f892d96b9703d411219e2b868a" [l:148]
[Mar 13 02:35:53]      err: VersionId =>  [l:148]
[Mar 13 02:35:53]      err: RequestId => 80821CC621946236 [l:148]
[Mar 13 02:35:53]      err: ERROR: Aws\S3\Exception\BadDigestException: 
                            AWS Error Code: BadDigest, 
                            Status Code: 400, 
                            AWS Request ID: 7C4B4834C6235D1A, 
                            AWS Error Type: client, 
                            AWS Error Message: The Content-MD5 you specified did not match what we received. [l:152]

Any ideas? What am I doing wrong?

rodrigo-silveira
Eduardo Russo
  • `'Body' => fopen($save_path, 'r'),` is this exactly how you send the stream to the SDK? I had the same error; it was in the stream. Before passing the stream to `putObject`, rewind it: `rewind( $stream );` ... `'Body' => $stream,` – Nicolai Dec 17 '13 at 16:15
  • @Nicolai, as far as I remember (I'm not working on this project anymore), I used exactly the code Amazon gave me. Anyway, I hope this helps someone :) – Eduardo Russo Dec 19 '13 at 08:20
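Nicolai's point about stream position can be demonstrated without S3 at all. This standalone sketch (the `php://temp` stream and the sample bytes are made up for illustration) shows why a handle that has already been read from digests differently than a rewound one:

```php
<?php
// Minimal demonstration of the failure mode from the comment above:
// hashing a stream resumes from its current pointer, so a handle that was
// already partially consumed yields a different digest than the full file.
$stream = fopen('php://temp', 'r+b');
fwrite($stream, 'video-bytes-0123456789');
rewind($stream);

fread($stream, 5);                        // something consumes the start...
$partial = stream_get_contents($stream);  // ...so only the tail is left

rewind($stream);
$full = stream_get_contents($stream);     // rewinding restores the full payload

var_dump(md5($partial) === md5($full));   // bool(false): the digests differ
fclose($stream);
```

If anything reads from the handle before it is passed as `Body`, S3 receives (or hashes) only the remaining bytes, which matches the symptom of a stable but wrong checksum.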

1 Answer


I was working with a large data transfer with S3 and had the exact same error message. It turned out that the library we were using made use of the multipart upload functionality for files above a certain size. I'm unsure if the PHP SDK does the same thing, but apparently the way that multipart objects are stored within S3 causes a different hash to be generated than you would get when doing it on your local machine.
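If multipart uploads are in play, the mismatch is expected: the ETag S3 stores for a multipart object is not the MD5 of the whole file, but the MD5 of the concatenated raw part digests, with a `-<part count>` suffix. As a sketch (the function name and the 5 MB default part size are my assumptions; the part size must match whatever the uploading library actually used), you can reproduce such an ETag locally:

```php
<?php
// Hypothetical helper (not from the question's code): reproduces the ETag
// S3 reports for a multipart upload. Each part's raw 16-byte MD5 digest is
// concatenated, the concatenation is MD5'd again, and "-<part count>" is
// appended. The part size is an assumption and must match the uploader's.
function s3_multipart_etag(string $path, int $partSize = 5 * 1024 * 1024): string
{
    $handle = fopen($path, 'rb');
    $binaryDigests = '';
    $parts = 0;
    while (!feof($handle)) {
        $chunk = fread($handle, $partSize);
        if ($chunk === false || $chunk === '') {
            break;
        }
        $binaryDigests .= md5($chunk, true); // raw binary digest per part
        $parts++;
    }
    fclose($handle);
    return md5($binaryDigests) . '-' . $parts;
}
```

A giveaway is the `-N` suffix: if the ETag S3 returns has one, the object was uploaded in N parts and a plain local MD5 will never match it. If there is no suffix, multipart was not used and the mismatch has another cause (such as the stream position issue raised in the comments above).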

I saw that this question was very old, but wanted you to know you were not alone!