Well, the other algorithm is easily translatable to pure C:
#include <stdbool.h>
#include <stdint.h>

uint16_t calculate(const uint8_t *bytes, uint8_t length) {
    uint16_t crc = 0xFFFF; // initial value
    // loop, calculating CRC for each byte of the buffer
    for (uint8_t byteIndex = 0; byteIndex < length; ++byteIndex) {
        uint8_t bit = 0x80; // initialize bit currently being tested
        for (uint8_t bitIndex = 0; bitIndex < 8; ++bitIndex) {
            bool xorFlag = ((crc & 0x8000) == 0x8000); // remember the bit about to fall off the top
            crc <<= 1;
            if (bytes[byteIndex] & bit)
                crc = crc + 1; // shift the current data bit into the bottom of the register
            if (xorFlag)
                crc = crc ^ 0x1021; // the CCITT polynomial
            bit >>= 1;
        }
    }
    return crc;
}
The main differences are the use of the stdint.h types, passing the buffer length explicitly (a C array doesn't know its own length the way a C# byte[] does), and returning the full 16-bit CRC rather than truncating it to a byte. I also made crc exactly a 16-bit unsigned, and kept the indexes in single bytes, just to spare some Arduino memory, which is precious (every byte counts when you've got only 2.5 KB of RAM! :-) )
That said, I neither tested nor proofread that code, so it should be exactly as good as the original C# one: if this one is buggy, that one is as well.
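If you want to try it, a minimal Arduino sketch would look something like the following (equally untested; the test string is just an arbitrary placeholder, and it assumes the calculate() function above is pasted into the same sketch):

#include <stdint.h>
#include <string.h>

void setup() {
    Serial.begin(9600);
    const char *message = "123456789"; // arbitrary test data, not from the original
    // compute the CRC over the bytes of the string (not counting the NUL)
    uint16_t crc = calculate((const uint8_t *)message, (uint8_t)strlen(message));
    Serial.print("CRC = 0x");
    Serial.println(crc, HEX); // print the 16-bit CRC in hexadecimal
}

void loop() {
}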
EDIT: As the OP added in a comment, this resource is a good explanation of how the above CRC algorithm works: http://www.barrgroup.com/Embedded-Systems/How-To/CRC-Calculation-C-Code.
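For what it's worth, the form that article (and most textbooks) present XORs each whole byte into the top of the register instead of shifting the message bits in at the bottom. A rough sketch (the name calculate_msb is mine, not from the original):

// hypothetical name for the textbook variant of the same polynomial
uint16_t calculate_msb(const uint8_t *bytes, uint8_t length) {
    uint16_t crc = 0xFFFF; // same initial value
    for (uint8_t byteIndex = 0; byteIndex < length; ++byteIndex) {
        crc ^= (uint16_t)bytes[byteIndex] << 8; // XOR the byte into the top 8 bits
        for (uint8_t bitIndex = 0; bitIndex < 8; ++bitIndex) {
            if (crc & 0x8000)
                crc = (crc << 1) ^ 0x1021; // a 1 falls off the top: apply the polynomial
            else
                crc <<= 1;
        }
    }
    return crc;
}

Note the two variants are not interchangeable: feeding the data bits in at the bottom, as the code above does, amounts to running this form on the message followed by two extra zero bytes, so they produce different values for the same input.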
HTH