I have a C program that uses the libgcrypt library to measure encryption and decryption speeds for AES-256 in CBC mode. The program works like this: it encrypts or decrypts as much data as it can in 3 seconds, then takes the total number of bytes processed and converts it to a speed in Mbytes/sec. Each encrypt/decrypt call operates on a 256-byte chunk.
I have found that decryption processes far more data than encryption in the same amount of time: up to 3 times as much. I tried the same code on another machine and got the same results. On my Raspberry Pi, however, both operations run at roughly the same speed.
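One way to check whether libgcrypt detected hardware AES support on a given machine is to dump its runtime configuration, which includes a hwflist line listing the CPU features it found (e.g. intel-aesni). A minimal sketch, assuming libgcrypt 1.8 or newer, which added gcry_get_config:

#include <stdio.h>
#include <gcrypt.h>

int main(void)
{
    gcry_check_version(NULL);
    /* NULL requests the full configuration string; the "hwflist" line
       names the hardware features libgcrypt detected at runtime */
    char *cfg = gcry_get_config(0, NULL);
    if (cfg) {
        printf("%s", cfg);
        gcry_free(cfg);  /* the string is allocated by libgcrypt */
    }
    return 0;
}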
Here is the code I use to measure the performance of the two operations:
#include <stdio.h>
#include <time.h>
#include <gcrypt.h>

enum op {
    ENC = 0,
    DEC
};
void measure_crypto_performance(enum op op)
{
#define BUFFER_SIZE 256
#define TEST_TIME 3
    /* Initialize the library */
    gcry_check_version(NULL);

    /* Open a cipher handle for AES-256 in CBC mode */
    gcry_cipher_hd_t handle;
    gcry_cipher_open(&handle, GCRY_CIPHER_AES256, GCRY_CIPHER_MODE_CBC, 0);

    /* Set the key */
    char key[32];
    for (int i = 0; i < 32; ++i)
        key[i] = i;
    gcry_cipher_setkey(handle, key, sizeof(key));

    /* Initialization vector (exactly 16 bytes; the NUL terminator is dropped) */
    char iv[16] = "0123456789abcdef";

    /* Buffers for the plaintext, ciphertext and decrypted output */
    char plaintext[BUFFER_SIZE];
    char ciphertext[BUFFER_SIZE];
    char decrypted_data[BUFFER_SIZE];
    gcry_randomize((void *)plaintext, BUFFER_SIZE, GCRY_STRONG_RANDOM);

    /* For the decryption test, produce a valid ciphertext up front */
    if (op == DEC)
    {
        gcry_cipher_setiv(handle, iv, sizeof(iv));
        gcry_cipher_encrypt(handle, ciphertext, BUFFER_SIZE, plaintext, BUFFER_SIZE);
    }
    size_t counter = 0;
    time_t start_time = time(NULL);
    if (op == ENC)
    {
        while (time(NULL) - start_time < TEST_TIME) {
            gcry_cipher_setiv(handle, iv, sizeof(iv));
            gcry_cipher_encrypt(handle, ciphertext, BUFFER_SIZE, plaintext, BUFFER_SIZE);
            ++counter;
        }
    }
    else if (op == DEC)
    {
        while (time(NULL) - start_time < TEST_TIME) {
            gcry_cipher_setiv(handle, iv, sizeof(iv));
            gcry_cipher_decrypt(handle, decrypted_data, BUFFER_SIZE, ciphertext, BUFFER_SIZE);
            ++counter;
        }
    }
    /* Calculate performance; cast before dividing so the byte count
       is not truncated by integer division */
    double elapsed_time = difftime(time(NULL), start_time);
    double speed = ((double)(counter * BUFFER_SIZE) / 1000000.0) / elapsed_time;

    /* Print results (%zu is the correct format for size_t) */
    printf("Op: %s\n", op == DEC ? "Decryption" : "Encryption");
    printf("%s %zu bytes in %.2f seconds\n",
           op == DEC ? "Decrypted" : "Encrypted",
           counter * BUFFER_SIZE, elapsed_time);
    printf("Speed: %.2f Mbytes/sec\n", speed);

    /* Clean up */
    gcry_cipher_close(handle);
}
int main(void)
{
    measure_crypto_performance(DEC);
    measure_crypto_performance(ENC);
    return 0;
}
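For reference, I build the test with gcc bench.c -o bench $(libgcrypt-config --cflags --libs), where bench.c is just my file name; libgcrypt-config ships with the libgcrypt development package.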
My results:
Op: Decryption
Decrypted 9276416000 bytes in 3.00 seconds
Speed: 3092.00 Mbytes/sec
Op: Encryption
Encrypted 3099484416 bytes in 3.00 seconds
Speed: 1033.00 Mbytes/sec
I'm puzzled by this asymmetry. Is it expected behavior for AES-256 CBC, or is there something wrong with my code? I'd appreciate any insights.