 

CRC bit-order confusion

I'm calculating a CCITT CRC-16 bit by bit. I do it that way because it's a prototype that will later be ported to VHDL and end up in hardware, where it has to check a serial bit-stream.

On the net I found code for a single-bit CRC-16 update step, wrote a test program, and it works. Except for one strange thing: I have to feed the bits of each byte from lowest to highest. If I do it that way, I get correct results.

In the CCITT definition of CRC-16, though, the bits should be fed highest bit to lowest. The data stream I want to calculate the CRC over arrives in that order as well, so my current code is of little use to me.

I'm confused. I would not have expected that feeding the bits the wrong way around could work at all.

Question: Why is it possible to write a CRC that takes the data in two different bit orders, and how do I transform my single-bit update code so that it accepts the data MSB first?

For reference, here is the relevant code. Initialization and the final check have been removed to keep the example short:

typedef unsigned char bit;

void update_crc_single_bit (bit * crc, bit data)
{
  // update CRC for a single bit; feedback taps at 0, 5 and 12
  // correspond to the CCITT polynomial x^16 + x^12 + x^5 + 1 (0x1021)
  bit temp[16];
  int i;

  temp[0] = data ^ crc[15]; 
  temp[1] = crc[0]; 
  temp[2] = crc[1]; 
  temp[3] = crc[2]; 
  temp[4] = crc[3]; 
  temp[5] = data ^ crc[4] ^ crc[15]; 
  temp[6] = crc[5]; 
  temp[7] = crc[6]; 
  temp[8] = crc[7]; 
  temp[9] = crc[8]; 
  temp[10] = crc[9]; 
  temp[11] = crc[10]; 
  temp[12] = data ^ crc[11] ^ crc[15]; 
  temp[13] = crc[12]; 
  temp[14] = crc[13]; 
  temp[15] = crc[14];

  for (i=0; i<16; i++)
    crc[i] = temp[i];
}

void update_crc_byte (bit * crc, unsigned char data)
{
  int j;
  // calculate CRC lowest bit first
  for (j=0; j<8; j++)
  {
    bit b = (data>>j)&1;
    update_crc_single_bit(crc, b);
  }
}

Edit: Since there is some confusion here: I have to compute the CRC bit by bit, MSB first for each byte. I can't simply store and reorder the bits, because the code shown above is a prototype for something that will end up in hardware (without memory).

The code shown above generates the correct result if I feed in the bit-stream in the following order (shown are the indices of the received bits; each byte gets transmitted MSB first):

|- first byte -|-   second byte     -|-  third byte 
7,6,5,4,3,2,1,0,15,14,13,12,11,10,9,8,....

I need the single-bit update to be transformed so that it generates the same CRC using the natural order (i.e. as received):

|- first byte -|-   second byte     -|-  third byte 
0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,.... 
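To make that concrete, the byte-level driver I'm aiming for would look roughly like this (illustration only; the open question is what update_crc_single_bit has to look like so that this loop produces the same CRC as the LSB-first version above):

void update_crc_byte_msb_first (bit * crc, unsigned char data)
{
  int j;
  // feed the highest bit first, i.e. in the order the bits are received
  for (j = 7; j >= 0; j--)
    update_crc_single_bit(crc, (data >> j) & 1);
}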
asked Dec 06 '25 by Nils Pipenbrinck

1 Answer

If you look at the RevEng 16-bit CRC Catalogue, you see that there are two different CRCs called "CCITT", one of which is labeled there "CCITT-False". Somewhere along the way someone got confused about what the CCITT 16-bit CRC was, and that confusion was propagated widely. The two CRCs are described thusly, with the first one (KERMIT) being the true CCITT CRC:

KERMIT

width=16 poly=0x1021 init=0x0000 refin=true refout=true xorout=0x0000 check=0x2189 name="KERMIT"

and

CRC-16/CCITT-FALSE

width=16 poly=0x1021 init=0xffff refin=false refout=false xorout=0x0000 check=0x29b1 name="CRC-16/CCITT-FALSE"

You will note that the real one is reflected, and the false one is not, and there is another difference in the initialization. In reflected CRCs, the lowest bit of the data is processed first, so it appears that you are trying to compute the true CCITT CRC.

When the CRC is reflected, so is the order of the bits in the polynomial that is exclusive-ored into the register, so 0x1021 becomes 0x8408. Here is a simple C implementation that you can check against:

#include <stddef.h>

/* bit-reversed (reflected) form of the polynomial 0x1021 */
#define POLY 0x8408

unsigned crc16_ccitt(unsigned crc, unsigned char *buf, size_t len)
{
    int k;

    while (len--) {
        crc ^= *buf++;
        for (k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ POLY : crc >> 1;
    }
    return crc;
}

I don't know what you mean by "In the CCITT definition of CRC-16, though, the bits should be fed highest bit to lowest". What definition are you referring to?

In this Altera document, you can see the shift-register implementation of the CRC for hardware. Here is a copy of the diagram:

[Diagram: CCITT CRC shift register with taps at register 16, 10, and 3]

For your code, you need to reverse your register (temp[]) indices: temp[0] becomes temp[15], and so on.
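Applied mechanically to the single-bit update from the question, that reversal looks like the following sketch (every index i replaced by 15-i, so the feedback now enters at temp[15], temp[10] and temp[3]; verify it against the reference routine above before relying on it):

void update_crc_single_bit_reversed (bit * crc, bit data)
{
  // same update as in the question, with every register index i replaced by 15-i
  bit temp[16];
  int i;

  temp[15] = data ^ crc[0];
  temp[14] = crc[15];
  temp[13] = crc[14];
  temp[12] = crc[13];
  temp[11] = crc[12];
  temp[10] = data ^ crc[11] ^ crc[0];
  temp[9]  = crc[10];
  temp[8]  = crc[9];
  temp[7]  = crc[8];
  temp[6]  = crc[7];
  temp[5]  = crc[6];
  temp[4]  = crc[5];
  temp[3]  = data ^ crc[4] ^ crc[0];
  temp[2]  = crc[3];
  temp[1]  = crc[2];
  temp[0]  = crc[1];

  for (i = 0; i < 16; i++)
    crc[i] = temp[i];
}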

answered Dec 08 '25 by Mark Adler


