In the InChI library (available here: http://www.iupac.org/home/publications/e-resources/inchi.html) there is a custom implementation of the SHA-2 algorithm which (the implementation, not the algorithm) I'm trying to understand. In particular, one short fragment of code is really confusing:
#ifndef PUT_UINT32_BE
#define PUT_UINT32_BE(n,b,i)                        \
{                                                   \
    (b)[(i)    ] = (unsigned char) ( (n) >> 24 );   \
    (b)[(i) + 1] = (unsigned char) ( (n) >> 16 );   \
    (b)[(i) + 2] = (unsigned char) ( (n) >>  8 );   \
    (b)[(i) + 3] = (unsigned char) ( (n)       );   \
}
#endif
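As far as I can tell, the macro serializes a 32-bit value into four consecutive bytes of a buffer in big-endian order (most significant byte first). A minimal standalone sketch of the same byte decomposition (the example value and the main/printf scaffolding are mine, not from the library):

#include <stdio.h>

int main( void )
{
    unsigned long n = 0x11223344UL;   /* example value I picked */
    unsigned char b[4];

    /* the same operations PUT_UINT32_BE(n, b, 0) expands to */
    b[0] = (unsigned char) ( n >> 24 );  /* 0x11 - most significant byte first */
    b[1] = (unsigned char) ( n >> 16 );  /* 0x22 */
    b[2] = (unsigned char) ( n >>  8 );  /* 0x33 */
    b[3] = (unsigned char) ( n       );  /* 0x44 - least significant byte last */

    printf( "%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3] );  /* 11 22 33 44 */
    return 0;
}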
This macro is used in this context:
unsigned char msglen[8];
low = ( ctx->total[0] << 3 );
PUT_UINT32_BE( low, msglen, 4 );
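If I read the surrounding finalization code correctly, this is where SHA-256 padding appends the total message length in bits as a 64-bit big-endian integer; << 3 multiplies the byte count by 8 to get bits. My reconstruction of the fuller context (the high word and its computation are my guess at what sits next to the quoted lines, not code I copied from the library):

unsigned long high, low;
unsigned char msglen[8];

/* bit length = byte count * 8; the top 3 bits of total[0]
   carry over into the high 32-bit word */
high = ( ctx->total[0] >> 29 ) | ( ctx->total[1] << 3 );
low  = ( ctx->total[0] <<  3 );

PUT_UINT32_BE( high, msglen, 0 );   /* bits 63..32 of the length */
PUT_UINT32_BE( low,  msglen, 4 );   /* bits 31..0  of the length */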
The problem is that total is defined as an array of unsigned long:

unsigned long total[2];    /*!< number of bytes processed */
So, since total stores the number of bytes processed, total[0] is very likely to be greater than 256 (which is probably the reason it's declared as unsigned long). What would be the effect of casting this unsigned long to unsigned char in the PUT_UINT32_BE macro? Would that keep the first byte, the last byte, or total[0] % 256?
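Here is a quick test I put together to check (entirely my own code, not from the library); it seems to show the modulo behaviour on my machine with 8-bit chars, but I'd like confirmation that this is guaranteed:

#include <stdio.h>

int main( void )
{
    unsigned long n = 1000;  /* an arbitrary byte count greater than 256 */

    /* on a platform with 8-bit char, the conversion appears to keep
       only the low-order 8 bits, i.e. the value modulo 256 */
    printf( "%d\n",  (unsigned char) n );  /* prints 232 */
    printf( "%lu\n", n % 256 );            /* prints 232 */
    return 0;
}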