
I have two bitfields: (1) one to handle the frame (header), (2) the other to handle a subframe within a frame (identityFieldO2M).

union header
{
    unsigned char arr[16]; // 16 bytes (128 bits) allocated

    BitFieldMember<0, 1> SOF;
    BitFieldMember<1, 11> BID;
    BitFieldMember<12, 1> SRR;
    BitFieldMember<13, 1> IDE;
    BitFieldMember<14, 18> IDEX;
    BitFieldMember<32, 1> RTR;
    BitFieldMember<33, 1> r1;
    BitFieldMember<34, 1> r0;
    BitFieldMember<35, 4> DLC;
    BitFieldMember<39, 8> DataField1;
    BitFieldMember<47, 15> CRC;
    BitFieldMember<62, 1> CRCDelim;
    BitFieldMember<63, 1> ACKSlot;
    BitFieldMember<64, 1> ACKdelim;
    BitFieldMember<65, 7> eof;
};

union identityFieldO2M
{
    unsigned char arr[5]; // 5 bytes (40 bits) allocated though only 29 bits are needed

    BitFieldMember<0, 2> RCI;
    BitFieldMember<2, 14> DOC;
    BitFieldMember<16, 1> PVT;
    BitFieldMember<17, 1> LCL;
    BitFieldMember<18, 1> FSB;
    BitFieldMember<19, 7> SourceFID;
    BitFieldMember<26, 3> LCC;
};

I need to process the first bitfield, combine two of its members, and feed the combined output into the second bitfield to determine subframes. The issue, however, is that when I do the bitwise operations to combine the two members, I am not able to pass the result back into the bitfield.

I think I am doing something "duh" wrong but I am not able to figure this out. Below is my implementation:

    int main()
    {
        header a;
        memset(a.arr, 0, sizeof(a.arr));
        a = {0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0,0xA0}; // 1010 0000

        cout << hex << a.SOF << endl; // 1 -> 1
        cout << hex << a.BID << endl; // 010 0000 1010 -> 20a
        cout << hex << a.SRR << endl; // 0 -> 0
        cout << hex << a.IDE << endl; // 0 -> 0
        cout << hex << a.IDEX << endl; // 00 1010 0000 1010 0000 -> a0a0
        cout << hex << a.RTR << endl; // 1 -> 1
        cout << hex << a.r1 << endl; // 0 -> 0
        cout << hex << a.r0 << endl; // 1 -> 1
        cout << hex << a.DLC << endl; // 0 000 -> 0
        cout << hex << a.DataField1 << endl; // 0 1010 000 -> 50
        cout << hex << a.CRC << endl; // 0 1010 0000 1010 00 -> 2828
        cout << hex << a.CRCDelim << endl; // 0 -> 0
        cout << hex << a.ACKSlot << endl; // 0 -> 0
        cout << hex << a.ACKdelim << endl; // 1 -> 1
        cout << hex << a.eof << endl; // 010 0000 -> 20

        int BID = a.BID;
        int IDEX = a.IDEX;
        int result = (BID<<18) | IDEX; // concatenate BID and IDEX together to get 29 bit header

        cout << "test" << endl;
        cout << "BID: " << hex << BID << endl; //-> 20a -> 010 0000 1010
        cout << "IDEX: " << hex << IDEX << endl; //-> a0a0 -> 00 1010 0000 1010 0000
        cout << "Identifier Field: " << hex << result << endl; //-> 828a0a0 -> 01 0000 0101 0001 01 0 0 0 0010100 000
        cout << "Size of Bitfield header: " << sizeof(a) << endl;

        identityFieldO2M b;
        b = result; // **error: no match for 'operator=' (operand types are 'identityFieldO2M' and 'int')**
        memset(b.arr,0,sizeof(b.arr));

        cout << hex << b.RCI << endl; // 01 -> 0x01
        cout << hex << b.DOC << endl; // 0000 0101 0001 01 -> 0x145
        cout << hex << b.PVT << endl; // 0 -> 0x00
        cout << hex << b.LCL << endl; // 0 -> 0x00
        cout << hex << b.FSB << endl; // 0 -> 0x00
        cout << hex << b.SourceFID << endl; // 0010100 -> 0x14
        cout << hex << b.LCC << endl; // 000 -> 0 -> 0x00

        sleep(100);
        return 0;
    }

The error happens when I assign the concatenated BID/IDEX result to struct b:

identityFieldO2M b;
b = result; // **error: no match for 'operator=' (operand types are 'identityFieldO2M' and 'int')**
memset(b.arr,0,sizeof(b.arr));

For BitFieldMember I am using the very helpful template from here: https://codereview.stackexchange.com/questions/54342/template-for-endianness-free-code-data-always-packed-as-big-endian

Looking through the definition of the BitFieldMember template, here is the operator= used to assign values into the field, where I suspect the issue may lie:

/* used to assign a value into the field */
inline self_t& operator=(unsigned m)
{
    uchar *arr = selfArray();
    m &= mask;
    unsigned wmask = ~(mask << (7 - (lastBit & 7)));
    m <<= (7 - (lastBit & 7));
    uchar *p = arr + lastBit / 8;
    int i = (lastBit & 7) + 1;
    (*p &= wmask) |= m;
    while (i < bitSize)
    {
        m >>= 8;
        wmask >>= 8;
        (*(--p) &= wmask) |= m;
        i += 8;
    }
    return *this;
}
SamJ
    `identityFieldO2M` is not defined in the code you posted. Obviously it does not admit being assigned an `int`. Please post a [MCVE](http://stackoverflow.com/help/mcve) – M.M Feb 28 '18 at 22:54
  • Post your bitfield definitions. We can't help you without at least that. – Justin Randall Feb 28 '18 at 23:03
  • Sorry! I have updated and posted both bitfield definitions. I noticed even when attempting to pass the (currently static) hex value of 828a0a0 this is also giving errors related to sizing. e.g. b= {828a0a0}; gives the error: narrowing conversion of '136880288' from 'int' to 'unsigned char' inside { } – SamJ Feb 28 '18 at 23:06
  • I've tried casting in few different ways as well, and why I cannot just set it statically as a hex value I am not entirely sure since I did that before. Maybe it has something to do with the hex value I am attempting to set being 29 bits whereas 0xA0 being 8 bits? – SamJ Feb 28 '18 at 23:15
  • I expect your problem is that you have not defined any match for 'operator=' (operand types are 'identityFieldO2M' and 'int') – user253751 Feb 28 '18 at 23:15
  • I've added the operator= that is defined in the BitFieldMember template I am using. I think this may be the source of the problem? For ease, i've also put in a full set of the source code here: https://gist.github.com/anonymous/811851028d1fe73e6779ca25ad31c05d Thanks for help in advance. – SamJ Feb 28 '18 at 23:44

1 Answer


According to your bitfield definition, you need an unsigned char[5], and it expects big-endian values (per the link to where you got this code). You cannot assign an int to it, as you have seen from your compiler error. One option is to copy your int value into your big-endian b.arr through bit shifting. Something like this should do it:

int result = 0x0828a0a0;
identityFieldO2M b;
memset(b.arr, 0, sizeof(b.arr)); // clear all 5 bytes so the unused tail byte is defined

b.arr[0] = (result >> 24) & 0xFF; // 0x08
b.arr[1] = (result >> 16) & 0xFF; // 0x28
b.arr[2] = (result >> 8) & 0xFF;  // 0xa0
b.arr[3] = result & 0xFF;         // 0xa0
Justin Randall
  • Thanks Justin! One issue, however: when checking the output it doesn't match what I expect: 0 0 0 0 0 2 4 where I was expecting: 1 145 0 0 0 14 0. If I instead just try a simple array separating 0x0828a0a0 into { 0x41, 0x45, 0x05, 0x00 }; this will compile and the result is what I expect, but I am not sure how I can set the array to look like this? – SamJ Feb 28 '18 at 23:56
  • @SamJ So then either there is something wrong with your bitfield definitions or the integer you are trying to populate it from. – Justin Randall Feb 28 '18 at 23:59
  • @SamJ huh? In network byte order (big endian) `int foo = 0x0828a0a0;` then the "equivalent" `unsigned char foo[4] = { 0x08, 0x28, 0xa0, 0xa0 };` so where in the world are you getting `{ 0x41, 0x45, 0x05, 0x00 };` from? – Justin Randall Mar 01 '18 at 00:02
  • @SamJ If you want `identityFieldO2M b;` to be that value, then just set it to what you want. `b = { 0x41, 0x45, 0x05, 0x00 };` Just like you did with `a` in the previous example. – Justin Randall Mar 01 '18 at 00:09
  • I understand; I think I was getting confused, as the number of bits represented by 0x08, 0x28, 0xa0, 0xa0 is 26 where I was expecting 29 bits `// 01 0000 0101 0001 01 0 0 0 0010100 000 // 1 000 101 0001 01 0 0 0 0010100 000` (showing expected on top, result on bottom). Is there any way to preserve the leading and trailing 0s? Apologies if this is a dumb question. I cannot use the array of static hex values as this will be dynamically generated eventually and it will have all those padded 0's; I am only using it currently for testing. – SamJ Mar 01 '18 at 00:42
  • Sorry, to explain the intention: a 64 to 128 bit frame is being sent to me which I am parsing in the first struct. I am taking two fields of that struct, 11 and 18 bits in length, and combining them into a 29 bit field which has a definition of subfields within it. In real time, when I am actually going to pass it live data being read, that field is going to be 29 bits long with padded 0's. I am only using the static hex values at the current time to test that I can map those 29 bits into the proper bitfield members for further use. So 828a0a0 is just an example of 29 bits I am using to test. – SamJ Mar 01 '18 at 00:50
  • @SamJ like I said before, if you’re getting something that you don’t expect (missing bits in this case) then either your computed integer value or your bitfield definitions themselves are wrong. It looks close but those missing zeros cannot be explained by a simple shifting error alone. Keep at it. – Justin Randall Mar 01 '18 at 01:23
  • @SamJ Keep in mind that an integer is 32 bits in this case, and if you only care about 29 of them, then that means 3 of them are “spare”. If all “spare” bits come at the beginning or end, then it’s easy to shift them off if needed. If they are sprinkled about in between bitfields you care about, then they must be accounted for and defined as one or more BitFieldMembers in your union. – Justin Randall Mar 01 '18 at 01:29