
In the InChI library (available here: http://www.iupac.org/home/publications/e-resources/inchi.html) there is a custom implementation of the SHA-2 algorithm which (the implementation, not the algorithm) I'm trying to understand. In particular, one short fragment of code is really confusing:

#define PUT_UINT32_BE(n,b,i)                            \
{                                                       \
    (b)[(i)    ] = (unsigned char) ( (n) >> 24 );       \
    (b)[(i) + 1] = (unsigned char) ( (n) >> 16 );       \
    (b)[(i) + 2] = (unsigned char) ( (n) >>  8 );       \
    (b)[(i) + 3] = (unsigned char) ( (n)       );       \
}
#endif

This macro is used in this context:

unsigned char msglen[8];
low  = ( ctx->total[0] <<  3 );
PUT_UINT32_BE( low,  msglen, 4 );

The problem is that total is defined as an array of unsigned long:

unsigned long total[2];     /*!< number of bytes processed  */

So, if total stores the number of bytes processed, it's very likely that total[0] will be greater than 255 (that's probably the reason why it's defined as long), so what is the effect of casting this long to unsigned char in the PUT_UINT32_BE macro? Would that keep the first byte, the last byte, or total[0] % 256?

mnowotka

2 Answers


The macro simply puts the individual 8-bit bytes of a 32-bit value into a byte array, starting at some offset. This is done with the shift operations: the first shift gets the top 8 bits (bits 24 to 31), the next shift gets bits 16 to 23, then bits 8 to 15, and lastly bits 0 to 7. The cast to unsigned char then discards everything above the low 8 bits of each shifted value.

If you do the opposite, i.e. read the bytes back from the array and combine them into a 32-bit value, you will get your original value back.

Some programmer dude
  • 400,186
  • 35
  • 402
  • 621
0

Why not use simple memory dereferencing? For example:

*(int *)&b[i] = n;
slm