I am using the CRC32 calculation unit of the Nucleo L053R8 to calculate a checksum of a data buffer whose data stream input is in bytes. In the example project provided by ST, a data buffer of 4-byte elements is used with the following configuration of the CRC handle:

CrcHandle.Instance = CRC; 

CrcHandle.Init.DefaultPolynomialUse    = DEFAULT_POLYNOMIAL_ENABLE;
CrcHandle.Init.DefaultInitValueUse     = DEFAULT_INIT_VALUE_ENABLE;
CrcHandle.Init.InputDataInversionMode  = CRC_INPUTDATA_INVERSION_NONE;
CrcHandle.Init.OutputDataInversionMode = CRC_OUTPUTDATA_INVERSION_DISABLE;
CrcHandle.InputDataFormat              = CRC_INPUTDATA_FORMAT_WORDS;

and after initialisation the following function is used to calculate the CRC:

/**
  * @brief  Compute the 7, 8, 16 or 32-bit CRC value of an 8, 16 or 32-bit data buffer
  *         starting with the previously computed CRC as initialization value.
  * @param  hcrc CRC handle
  * @param  pBuffer pointer to the input data buffer, exact input data format is
  *         provided by hcrc->InputDataFormat.
  * @param  BufferLength input data buffer length (number of bytes if pBuffer
  *         type is * uint8_t, number of half-words if pBuffer type is * uint16_t,
  *         number of words if pBuffer type is * uint32_t).
  * @note  By default, the API expects a uint32_t pointer as input buffer parameter.
  *        Input buffer pointers with other types simply need to be cast in uint32_t
  *        and the API will internally adjust its input data processing based on the
  *        handle field hcrc->InputDataFormat.
  * @retval uint32_t CRC (returned value LSBs for CRC shorter than 32 bits)
  */
uint32_t HAL_CRC_Accumulate(CRC_HandleTypeDef *hcrc, uint32_t pBuffer[], uint32_t BufferLength)
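For reference, a minimal sketch of how this API can be driven with the word-format configuration above (the buffer name, its contents, and the Error_Handler() hook are assumptions, not taken verbatim from the ST project):

uint32_t aDataBuffer[2] = {0x12345678U, 0x9ABCDEF0U};  /* placeholder data */
uint32_t uwCRCValue;

/* Apply the configuration above to the peripheral */
if (HAL_CRC_Init(&CrcHandle) != HAL_OK)
{
  Error_Handler();  /* hypothetical error hook */
}

/* With CRC_INPUTDATA_FORMAT_WORDS, BufferLength is a count of 32-bit words */
uwCRCValue = HAL_CRC_Accumulate(&CrcHandle, aDataBuffer, 2);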

As my input data comes in single bytes, and I have my own polynomial and init value, I have used the following configuration:

CrcHandle.Instance                     = CRC;
CrcHandle.Init.DefaultPolynomialUse    = DEFAULT_POLYNOMIAL_DISABLE;
CrcHandle.Init.DefaultInitValueUse     = DEFAULT_INIT_VALUE_DISABLE;
CrcHandle.Init.GeneratingPolynomial    = 0x80032DB;
CrcHandle.Init.InitValue               = 0x55555500;
CrcHandle.Init.CRCLength               = CRC_POLYLENGTH_32B;
CrcHandle.Init.InputDataInversionMode  = CRC_INPUTDATA_INVERSION_NONE;
CrcHandle.Init.OutputDataInversionMode = CRC_OUTPUTDATA_INVERSION_DISABLE;
CrcHandle.InputDataFormat              = CRC_INPUTDATA_FORMAT_BYTES;

I have also cast the input data pointer to uint32_t. However, the result is always 0 no matter the input data. What could be the problem?
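For reference, my call sequence looks roughly like the sketch below (the buffer name, its contents, and the Error_Handler() hook are placeholders):

uint8_t  aByteStream[5] = {0x01, 0x02, 0x03, 0x04, 0x05};  /* placeholder data */
uint32_t uwCRCValue;

/* The CRC peripheral clock must be enabled (e.g. in HAL_CRC_MspInit())
   and HAL_CRC_Init() must run before any accumulate call */
if (HAL_CRC_Init(&CrcHandle) != HAL_OK)
{
  Error_Handler();  /* hypothetical error hook */
}

/* With CRC_INPUTDATA_FORMAT_BYTES, BufferLength is a byte count and the
   uint8_t pointer is simply cast to uint32_t *, as the HAL note above suggests */
uwCRCValue = HAL_CRC_Accumulate(&CrcHandle, (uint32_t *)aByteStream, 5);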

  • Try `uint8_t pBuffer[]` as a parameter. You might want to try using a 4 byte buffer set to {0xff, 0xff, 0xff, 0xff} as a test case. Is there an initialization call that needs to be made, perhaps so the code can generate a crc table to speed up the calculation? – rcgldr May 10 '19 at 10:59
  • In fact the problem was that I had commented out the peripheral initialization code and didn't pay attention. Now I get a value, but how can I make sure it is the correct one? – Abyr May 10 '19 at 12:32
  • Based on your prior question, the 24-bit CRC shifted left 8 bits to operate with the 32-bit CRC code should be 0x00065b00. If you're using a different CRC, the one shown in your question, it should probably be 0x0032db00. – rcgldr May 11 '19 at 13:38
  • Yes, the CRC is the same as the one in my previous question (g(X) = x^24 + x^10 + x^9 + x^6 + x^4 + x^3 + x + 1). I am referring to this: https://www.mathworks.com/help/comm/ref/generalcrcgenerator.html for the representation, then I performed a binary-to-hex conversion online. Is there anything wrong? Why do you think I have a wrong result? – Abyr May 11 '19 at 19:14
  • 1
    The X^24 term isn't used for CRC calculation, that means the CRC has seven 1 bits in it => 0x00065b00, while 0x32db00 has nine 1 bits. I'm not sure where you got that number from. Splitting up the 24 bit components, x^10 = 0x400, x^9 = 0x200, x^6 = 0x40, x^4 = 0x10, x^3 = 0x8, x = 0x2, 1 = 0x1, add (or xor) them up to get 0x65b. Shift left 8 bits = 0x00065b00 . – rcgldr May 11 '19 at 19:19
  • I do not understand why x^24 isn't used for the CRC calculation. The method you are stating is different from the one in the previous link and gives a different result. Can you share a source, please? I am not sure which one is correct. – Abyr May 12 '19 at 13:41
  • 1
    Not using the x^24 term is a software optimization that allows 24 bit math, in this case, the upper 24 bits of a 32 bit variable to be used to generate the CRC. Each "division" step is going to zero out the X^24 term in the remainder, so it is not needed when doing CRC calculation. If you look at the mathworks circuit example, note that there are only 3 flip flops (1 bit registers), calling them a, b, c, they represent the remainder polynomial ax^2 + bx + c. The x^3 term is the feed back line going into the xor gates each time the circuit is cycled, and never sent to the output mux. – rcgldr May 12 '19 at 14:32
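To make the arithmetic in the comments above concrete, here is a small standalone sketch (plain C, not STM32 code) that builds the 24-bit polynomial value from the terms below x^24 and shifts it up by 8 bits so it sits in the upper 24 bits of a 32-bit register:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* g(x) = x^24 + x^10 + x^9 + x^6 + x^4 + x^3 + x + 1
       The leading x^24 term is implicit, so only the lower terms are encoded. */
    uint32_t poly24 = (1u << 10) | (1u << 9) | (1u << 6)
                    | (1u << 4)  | (1u << 3) | (1u << 1) | 1u;  /* 0x65B */

    /* Shift left by 8 bits to place the 24-bit polynomial in the upper
       24 bits of a 32-bit value, as suggested in the comments */
    uint32_t poly32 = poly24 << 8;                              /* 0x00065B00 */

    printf("poly24 = 0x%03X, poly32 = 0x%08X\n",
           (unsigned)poly24, (unsigned)poly32);
    return 0;
}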
