
I'm doing some testing with pdftk and finding that bursting a multipage PDF file into separate single-page PDF files, then generating an MD5 hash checksum (digital fingerprint) for each of those single-page PDFs, produces different hashes every time I run the burst. This happens even when the input is the exact same file with no changes.

My test process is:

  1. Decompress test.pdf (a simple text-only PDF that contains 10 pages)
  2. Using pdftk, burst (split) test.pdf into 10 separate PDF files (1 page per file)
  3. Generate md5 hash checksum for each of the 10 single-page PDF files
  4. Record the 10 hash checksums
  5. Repeat steps 1-4
  6. Note that all hashes differ

Side note: generating a checksum on the PDF after decompression yields the exact same checksum upon repetition.

I'm using node.js and its crypto module for this exercise.

My question is: Why do the checksums differ upon repetition? I would think that the resulting 10 single-page files are exactly the same as the last time they were created. Their parent document (and thus the individual pages themselves) has not changed at all.

k00k
    I'm seriously wondering why you didn't look for the differences in the PDF files themselves. A simple `vimdiff 1strun.pdf 2ndrun.pdf` would have revealed what's different... (or any other file comparison method of your liking). – Kurt Pfeifle Jul 10 '12 at 17:15
  • Excellent point, I guess my brain was stuck on them being the same. Sometimes you need to step away for a bit before you see the most obvious plan of attack. Next time I will, and thus, SO has made me a better person today :) – k00k Jul 10 '12 at 17:43

2 Answers


According to the PDF spec, whenever a PDF creator writes out a modified PDF, it should update the /ModDate key in the document's /Info dictionary of metadata entries.

Also, it will (likely) change the document UUID in the PDF's XMP metadata structure to a new ID.


So, when you want to use MD5 (or any similar method) to check for 'stable results' in your PDF generation processes (think of unit tests or whatever), you should do one of these two things before applying your MD5-summing:

  • either 'normalize' your PDF output to always write the same ModDate and UUID into the files (if your PDF-generating facility allows you to tweak it that way),
  • or run an edit (you can use sed) over the files that normalizes the /ModDate (and possibly also the /CreationDate) and UUID entries of the files.
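The second option could be done in Node as well as with sed. Here is a sketch (not a definitive implementation: the regexes assume the common `(D:YYYYMMDD...)` date syntax and an uncompressed, hex-string /ID pair in the trailer, and `normalizePdf` is an illustrative name):

```javascript
// Sketch: pin the volatile metadata in a decompressed PDF to fixed values
// before hashing. Adjust the patterns for what your PDF producer emits.
function normalizePdf(buf) {
  return buf.toString('latin1')
    // Pin both date entries to one fixed timestamp.
    .replace(/\/(ModDate|CreationDate)\s*\(D:[^)]*\)/g,
             '/$1 (D:19700101000000Z)')
    // Pin the file identifier pair to easily recognizable constants.
    .replace(/\/ID\s*\[\s*<[0-9a-fA-F]+>\s*<[0-9a-fA-F]+>\s*\]/g,
             '/ID [<00000000000000000000000000000000> ' +
             '<ffffffffffffffffffffffffffffffff>]');
}

// Usage (latin1 keeps the byte round-trip lossless):
// fs.writeFileSync('out.pdf',
//   Buffer.from(normalizePdf(fs.readFileSync('in.pdf')), 'latin1'));
```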

Update: Since you seem to be familiar with pdftk already, you should be able to dump a metadata text file (like Ezra showed):

pdftk in.pdf dump_data output data.txt

or (in case you need it):

pdftk in.pdf dump_data_utf8 output data.utf8.txt

Then edit the data*.txt files to make them suit your needs: change the PDF UUIDs (pdftk calls them PdfID0 / PdfID1) to easily recognizable values (00000... and fffff...), and change the dates to other easily recognizable ones. Then update your files with these metadata values:

pdftk in.pdf update_info data.txt output in-updated.pdf \
      &&  mv in-updated.pdf in.pdf

or

pdftk in.pdf update_info data.utf8.txt output in-updated.utf8.pdf \
      &&  mv in-updated.utf8.pdf in.pdf 
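If you would rather script the data.txt edit than do it by hand, something like this works on pdftk's dump format (the `InfoValue` date shape and the `PdfID0`/`PdfID1` field names are as pdftk emits them; `normalizeDump` is an illustrative name, and note that `update_info` may regenerate the IDs anyway, as discussed in the comment below):

```javascript
// Sketch: normalize pdftk's dump_data output instead of editing manually.
// Date values look like "InfoValue: D:20120710105934-06'00'".
function normalizeDump(text) {
  return text
    // Replace any date-shaped InfoValue with a fixed timestamp.
    .replace(/^(InfoValue: )D:\d{14}.*$/gm, '$1' + 'D:19700101000000Z')
    // Pin the file identifiers to recognizable constants.
    .replace(/^(PdfID0: ).*$/gm, '$1' + '0'.repeat(32))
    .replace(/^(PdfID1: ).*$/gm, '$1' + 'f'.repeat(32));
}
```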

Only then run your MD5 checksumming and see if it works (or needs some more fine-tuning).

Kurt Pfeifle
    Finally getting back to work on this project and one thing I've found is that even though I edit the `PdfID0` & `PdfID1`, they get automatically generated when you do `update_info`. So now I'm trying to strip them from the file some other way, maybe `sed`. I'll update here when I figure it out. – k00k Sep 10 '12 at 20:24

The glib answer is that the checksums differ because the data is different.

An experiment confirming this:

First, I burst a pdf and move the file:

$pdftk Michael-Jordan-I-Cant-Accept-Not-Trying.pdf burst
$md5sum pg_0001.pdf 
150ef33eec73cd13c957194ebead0e93  pg_0001.pdf
$mv pg_0001.pdf 150ef33eec73cd13c957194ebead0e93

Next, I burst the same pdf again, again moving the file:

$pdftk Michael-Jordan-I-Cant-Accept-Not-Trying.pdf burst
$md5sum pg_0001.pdf
49c7c885bc516856f4316452029e0626  pg_0001.pdf
$mv pg_0001.pdf 49c7c885bc516856f4316452029e0626

This confirmed your finding; the sums are different. Upon inspection, it is bytes 91411-92163 that differ.

My gut told me that this was date metadata, and I confirmed this thusly:

$pdftk 150ef33eec73cd13c957194ebead0e93 dump_data output 150.txt
$pdftk 49c7c885bc516856f4316452029e0626 dump_data output 49c.txt
$diff -u 150.txt 49c.txt 
--- 150.txt 2012-07-10 11:08:02.371119999 -0600
+++ 49c.txt 2012-07-10 11:08:18.891201910 -0600
@@ -3,9 +3,9 @@
 InfoKey: Producer
 InfoValue: itext-paulo-155 (itextpdf.sf.net-lowagie.com)
 InfoKey: ModDate
-InfoValue: D:20120710105934-06'00'
+InfoValue: D:20120710110010-06'00'
 InfoKey: CreationDate
-InfoValue: D:20120710105934-06'00'
-PdfID0: 51671a1a6c4f5e6bb81b88fc7efd14d0
-PdfID1: 82fd646061863972216ccf8a32cf3c7b
+InfoValue: D:20120710110010-06'00'
+PdfID0: 844f34f87275b9184ebe10b82d3397c9
+PdfID1: 8f555a30216e37d77abaf03a4217b2a
 NumberOfPages: 1

I'm not sure what your problem is, but if you really need matching sums, two obvious approaches are:

  1. Setting the dates to be the same.
  2. Only using the first N bytes to calculate the sum; omitting the problematic metadata.
Ezra
  • +1 -- but it's not guaranteed that the 'problematic' metadata always appears after the first N bytes. Depending on the PDF generating software, it may appear even close to the very beginning of the file. The PDF spec allows that. – Kurt Pfeifle Jul 10 '12 at 18:14