
I'm using the code from this entry (targeting BouncyCastle 1.54), but instead of passing the bytes of a small string, I'm passing it a 1.7 GB file. The only line I changed was:

// CMSTypedData msg = new CMSProcessableByteArray(signature.sign()); wrong line
CMSTypedData msg = new CMSProcessableByteArray(text.getBytes()); 

for:

CMSTypedData msg = new CMSProcessableFile(new File("D:\\season4_mlp.rar"));

and it works :D it generates a valid detached signature. Since I'm working with the raw bytes, I didn't need Base64 encoding, so no problematic Sun imports were needed. But I measured the time it takes to sign my large file, and it is between 15 and 22 seconds (I assumed the JVM was doing some optimization between runs).
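For context, the complete flow around that line looks roughly like this. It is only a sketch: the wrapper class and method names (DetachedSigner, signDetached) and the SHA1withRSA choice are mine; the rest follows the standard BC 1.54 API:

import java.io.File;
import java.security.PrivateKey;
import java.security.Security;
import java.security.cert.X509Certificate;
import java.util.Collections;

import org.bouncycastle.cert.jcajce.JcaCertStore;
import org.bouncycastle.cms.CMSProcessableFile;
import org.bouncycastle.cms.CMSSignedData;
import org.bouncycastle.cms.CMSSignedDataGenerator;
import org.bouncycastle.cms.CMSTypedData;
import org.bouncycastle.cms.jcajce.JcaSignerInfoGeneratorBuilder;
import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import org.bouncycastle.operator.jcajce.JcaDigestCalculatorProviderBuilder;
import org.bouncycastle.util.Store;

public class DetachedSigner {

    public static byte[] signDetached(File data, PrivateKey key, X509Certificate cert)
            throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        ContentSigner sha1Signer = new JcaContentSignerBuilder("SHA1withRSA")
                .setProvider("BC").build(key);
        Store certs = new JcaCertStore(Collections.singletonList(cert));

        CMSSignedDataGenerator gen = new CMSSignedDataGenerator();
        gen.addSignerInfoGenerator(new JcaSignerInfoGeneratorBuilder(
                new JcaDigestCalculatorProviderBuilder().setProvider("BC").build())
                .build(sha1Signer, cert));
        gen.addCertificates(certs);

        // CMSProcessableFile streams the file instead of loading it into memory
        CMSTypedData msg = new CMSProcessableFile(data);

        // false => detached: the content itself is not embedded in the signature
        CMSSignedData signedData = gen.generate(msg, false);
        return signedData.getEncoded();
    }
}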

Then I used the code in this post, with the solution given by Peter Dettman, declaring the gen object as follows (apart from the change from CMSSignedDataGenerator to CMSSignedDataStreamGenerator, it is a copy/paste from that answer):

CMSSignedDataStreamGenerator gen = new CMSSignedDataStreamGenerator();
gen.addSignerInfoGenerator(
        new JcaSignerInfoGeneratorBuilder(
                new JcaDigestCalculatorProviderBuilder().setProvider("BC").build())
                .build(sha1Signer, cert));
gen.addCertificates(certs);

and passing false for the encapsulate parameter (i.e. a detached signature) in the line

OutputStream sigOut = gen.open(bOut, false); 

Now it takes between 19 and 21 seconds to sign my file, and the signature is valid too.
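For completeness, the write loop around that open(...) call looks roughly like this (a sketch: the buffer size and stream names are my own; gen is the generator declared above):

ByteArrayOutputStream bOut = new ByteArrayOutputStream();
OutputStream sigOut = gen.open(bOut, false); // false => detached, as above

// Push the file through the signing stream in chunks
byte[] buffer = new byte[8192];
try (FileInputStream in = new FileInputStream(new File("D:\\season4_mlp.rar"))) {
    int n;
    while ((n = in.read(buffer)) != -1)
        sigOut.write(buffer, 0, n);
}
sigOut.close(); // closing finishes and writes out the CMS structure

byte[] signature = bOut.toByteArray();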

In general, I tested both classes, CMSSignedDataGenerator and CMSSignedDataStreamGenerator, with NetBeans 8.0.2, targeting Java 8, on an Intel i3 @ 3.30 GHz, Win7 x64 SP1, with 4 GB of RAM and several other apps open, and ended up with similar results.

I want to know if there's an optimized/faster way to sign large files with BC. To me, 20 seconds to process a 1.7 GB file sounds fine, but in a production environment it would be desirable to find a faster solution.

RAM consumption is not an issue: I checked NetBeans' consumption and it went from 700 MB to almost 800 MB. Firefox is eating more RAM than that right now. I'm concerned about signing speed.

  • Have you tested how fast the file can be read from disk into memory? I believe most of the time is spent on IO. Also, the JVM can't make optimizations between runs, so most likely your OS/HDD cached part of the file, which caused the performance boost. – user3707125 Mar 03 '16 at 23:37

1 Answer


Following user3707125's comment, the advice from this site, and code copied from elsewhere on the internet, and after testing other code and Java classes, I came up with this code to calculate the hash of a large file:

// Digest the file in 8 KB chunks so the whole 1.7 GB never sits in memory
byte[] buf = new byte[8192];
MessageDigest sha = MessageDigest.getInstance("SHA-1");

try (FileInputStream inp = new FileInputStream(new File("D:\\season4_mlp.rar"))) {
    int n;
    while ((n = inp.read(buf)) != -1)
        sha.update(buf, 0, n);
}

byte[] hash = sha.digest();

It takes around 20 to 22 seconds to process my 1.7 GB file.
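To separate pure IO from hashing cost (user3707125's point above), one can time a raw read of the same file with no digest at all. This is a minimal sketch of that check, not a measurement from my runs:

// Read the file and discard the bytes: the time measured here is pure IO
byte[] buf = new byte[8192];
long start = System.nanoTime();
try (FileInputStream inp = new FileInputStream(new File("D:\\season4_mlp.rar"))) {
    while (inp.read(buf) != -1) {
        // discard
    }
}
System.out.println("Raw read: " + (System.nanoTime() - start) / 1_000_000 + " ms");

If the raw read alone takes close to 20 seconds, the digest itself is essentially free, and a faster disk rather than different code is what would help.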

It seems that BC makes several internal optimizations to get timings close to mine (I haven't perused its source code). So, as far as I know, this is the best performance I can get.
