
I was studying error detection in computer networks and I came to know about the following methods -

  1. Single-bit parity check
  2. 2D parity check
  3. Checksum
  4. Cyclic redundancy check (CRC)

But after studying only a bit (pun intended), I came across cases where they fail.

The methods fail when -

  1. Single-bit parity check - if an even number of bits have been inverted.

  2. 2D parity check - if an even number of bits are inverted at the same positions in different rows, so that every row and column parity is still satisfied (e.g. four flipped bits forming a rectangle).

  3. Checksum - adding a word of all zeros does not change the result, and reordering of the words goes undetected (e.g. in the data 10101010 11110000 11001100 10111001, appending an all-zero word or swapping any of the four words leaves the checksum unchanged; see the sketch after this list).

  4. CRC - an n-bit CRC for g(x) = (x+1)*p(x) can detect:

      all burst errors of length less than or equal to n;

      all burst errors affecting an odd number of bits;

      all burst errors of length equal to n + 1 with probability (2^(n-1) − 1)/2^(n-1);

      all burst errors of length greater than n + 1 with probability (2^n − 1)/2^n
      [the CRC-32 polynomial will detect all burst errors of length greater than 33 with
      probability (2^32 − 1)/2^32, which is equivalent to a 99.99999998% detection rate].
      In other words, a vanishingly small fraction of long burst errors can still slip through.
    

    Copied from here - https://stackoverflow.com/a/65718709/16778741
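
To make points 1 and 3 concrete, here is a minimal sketch in Python (assuming "Checksum" means the 16-bit ones'-complement Internet checksum; the helpers `parity_bit` and `internet_checksum` are made up for illustration):

```python
import zlib

def parity_bit(bits):
    """Even-parity bit over a string of '0'/'1' characters."""
    return bits.count("1") % 2

def internet_checksum(words):
    """16-bit ones'-complement sum over a list of 16-bit words."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return (~total) & 0xFFFF

# Point 1: flipping an even number of bits leaves the parity bit unchanged.
data      = "10101010"
corrupted = "01101010"  # first two bits flipped
assert parity_bit(data) == parity_bit(corrupted)

# Point 3: appending an all-zero word or reordering the words leaves the
# ones'-complement sum unchanged (words taken from the data in the question).
frames = [0b10101010_11110000, 0b11001100_10111001]
assert internet_checksum(frames) == internet_checksum(frames + [0x0000])
assert internet_checksum(frames) == internet_checksum(list(reversed(frames)))

# Point 4: CRC-32 does change (with overwhelming probability) in both cases.
payload = bytes([0b10101010, 0b11110000, 0b11001100, 0b10111001])
print(hex(zlib.crc32(payload)), hex(zlib.crc32(payload + b"\x00")))
print(hex(zlib.crc32(payload)), hex(zlib.crc32(payload[::-1])))
```

The parity and checksum collisions are guaranteed by construction; the CRC-32 lines simply show that the same two modifications do change the CRC.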

As we can see, these methods fail because of some very obvious shortcomings.

So my question is - why were these shortcomings tolerated rather than rectified, and what do we use these days?

It's like the people who made them forgot to cross-check.

2 Answers


It is a tradeoff between effort and risk. The more redundant bits are added, the smaller the risk of undetected error.

Extra bits mean additional memory or network bandwidth consumption. How much additional effort is justified depends on the application. Complicated checksums add some computational overhead as well.

Modern checksum or hash functions can drive the remaining risk down to levels tolerable for the vast majority of applications.
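
As a rough illustration of that tradeoff (assuming an idealized, well-mixed check function, where a random corruption slips past an n-bit check value with probability about 2^-n):

```python
# Approximate fraction of random corruptions an n-bit check fails to detect,
# under the idealized 2**-n assumption.
for n in (1, 8, 16, 32):
    print(f"{n:2d} check bits -> undetected fraction ~ {2.0 ** -n:.1e}")
```

Going from a single parity bit to a 32-bit CRC buys roughly nine orders of magnitude of protection, at the cost of 31 extra bits per frame plus the extra computation.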

Axel Kemper
    "*It is a tradeoff ...*" -- You need to also factor in the probability of flipped bit(s) using that medium. A single bit error in a UART frame is somewhat low probability, and a 2-bit error far less likely (for slow baudrates). Hence parity detection used to be sufficient. As NAND flash reliability (or production quality?) has decreased over the past decade or so, the ECC requirements specified by the manufacturers have gone up . Old datasheets for popular NAND chips specified 1 or 2 bits of correctability for reliable operation; Today's chips require 4, 8, or more bits of ECC capability. – sawdust Jul 30 '22 at 04:30

Only about 0.00000002% of long burst errors will be missed. But what is not stated is how likely those burst errors are to occur in the first place. That number depends on the network implementation. In most cases the likelihood of an undetectable burst error will be very close to zero, or exactly zero for an ideal network.

Multiplying almost zero by almost zero gives something very close to zero.

Undetected errors with CRCs are more a matter of academic interest than of practical reality.
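
A back-of-the-envelope version of that argument, using purely assumed numbers for the link quality:

```python
# Both factors are tiny; their product is what an application actually sees.
p_burst_per_frame = 1e-6        # assumed: one long burst error per million frames
p_missed_by_crc32 = 2.0 ** -32  # chance CRC-32 misses a burst longer than 33 bits
print(f"undetected bursts per frame ~ {p_burst_per_frame * p_missed_by_crc32:.1e}")
```

With these made-up link numbers that works out to roughly one undetected burst every 4 x 10^15 frames.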

Gerhard