
I need to encrypt long-lived network data streams using AES-CBC. I was thinking I would call EVP_EncryptInit_ex() just once and save the EVP_CIPHER_CTX for subsequent calls to EVP_EncryptUpdate(), then do likewise on the decrypt end. The first problem I discovered is that EVP_DecryptUpdate() always runs one block behind. E.g., if I encrypt 32 bytes, the first decrypt update returns only 16, even though I know it has decrypted all 32 bytes. I guess this means I need to call EVP_DecryptFinal() after every EVP_DecryptUpdate(), and then EVP_EncryptInit_ex() to reset the IV before the next update.

A second concern is that I may have many thousands of these streams, and I am trying to minimize the memory footprint. sizeof(EVP_CIPHER_CTX) is only 168 bytes, but if I query memory usage before and after 1000 calls to EVP_EncryptInit_ex(), it looks like each context allocates an additional 412 bytes (on top of 20K after the first call).

CORRECTION: I see 412 bytes per CTX, not 168 + 412.

The AES_cbc_encrypt() interface looks much better for my needs. There is a fixed 260-byte AES_KEY structure, plus I need to maintain the 16-byte IV myself. However, from what I understand, it does not use Intel AES-NI hardware acceleration. https://security.stackexchange.com/questions/35036/different-performance-of-openssl-speed-on-the-same-hardware-with-aes-256-evp-an Is there a way to enable AES-NI on the AES_cbc_encrypt() interface? Is the 2X memory requirement of EVP not just a side effect of the API, but necessary to get the speed improvement? Is there a good alternative to OpenSSL that uses AES-NI?

Badda
user1055568
  • Strangely enough, I spent the last few hours trying to answer that same question myself. I tend toward aes.h as of now; evp.h is more universal and would be better if you plan to support different encryption algorithms in the future. In my case that's irrelevant, and I feel awkward about its memory management – kalinrj Jul 14 '15 at 17:42
  • I agree, but I would like to use AES-NI. I had been using the Brian Gladman source for many years, and see that it now supports AES-NI. He is oriented toward a Windows build environment, but I got aes_ni.c to build and run without too much effort on Linux. That has a fixed 256-byte context per key. Unfortunately, I can't figure out how to build it on OS X, where I like to do development. – user1055568 Jul 14 '15 at 19:51
  • *"I need to encrypt long lived network data streams..."* - by definition, network data is *not* long lived. It is bounded and it has a finite lifetime. Data in persistent storage is unbounded, and has an infinite lifetime. (And that's really the defining difference between a network packet and a filesystem file). – jww Jul 14 '15 at 22:02
  • By "long lived" I meant that the encryption process is long lived relative to a one shot encryption of a file. – user1055568 Jul 15 '15 at 00:29

1 Answer


Is there a way to enable AES-NI on the AES_cbc_encrypt() interface?

No. AES_encrypt() is a software-only implementation. It will never use hardware acceleration.

Also, the OpenSSL project tells you not to use AES_encrypt and friends. Rather, they tell you to use EVP_Encrypt and friends.


Is the 2X memory requirement of EVP not just a side effect of the API, but necessary to get the speed improvement?

It's hard to say because I've never profiled it. But what does it matter? If you need to do X, then you don't really have a choice within OpenSSL. Here, X is performing authenticated encryption with the EVP interfaces.


Is there a good alternative to OpenSSL that uses AES-NI?

It's hard to say. Maybe you could articulate your requirements, and then ask on Programmers Stack Exchange. That's the place to ask high-level design questions.

jww
  • The OpenSSL project says to use the EVP interfaces so it will be easier to switch algorithms, which is of no interest to me. The connection with hardware acceleration was mentioned more obscurely, and one comment suggested (incorrectly, I now know) that it could be manually enabled on the direct algorithms. – user1055568 Jul 15 '15 at 00:38