I’m still working on a properly functioning ICMP listener and puzzling over how to deal with endianness. My first approach was to copy the data out of the buffer into an appropriately sized byte array, reverse the byte order, and then pass the result to the appropriate BitConverter member. Though it works, it is not very elegant.
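To make the first approach concrete, here is a minimal sketch of what I mean (the method name and the fixed 16-bit width are just for illustration; the same pattern applies to 32- and 64-bit fields):

```csharp
using System;

static class BigEndianReader
{
    // Sketch of approach one: copy the field out of the packet buffer,
    // reverse it when the host is little-endian, then let BitConverter
    // do the actual conversion.
    public static ushort ReadUInt16BigEndian(byte[] buffer, int offset)
    {
        byte[] field = new byte[2];
        Array.Copy(buffer, offset, field, 0, 2);
        if (BitConverter.IsLittleEndian)
            Array.Reverse(field); // network order is big-endian
        return BitConverter.ToUInt16(field, 0);
    }
}
```

The temporary array and the reversal per field are exactly the inelegance I am complaining about.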
My second approach was to pre-process the entire buffer, driven by a two-dimensional array containing the position and length of every field that needs to be adjusted. That works too, with the added advantage that the existing code needs no changes, but it severely lacks readability.
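A sketch of that table-driven second approach (the offsets below are only an example, based on an ICMP echo message where the checksum, identifier, and sequence number are 16-bit fields at offsets 2, 4, and 6; type and code are single bytes and need no swap):

```csharp
using System;

static class IcmpByteSwapper
{
    // Each row: { offset, length } of a multi-byte field in the raw buffer.
    // Illustrative layout for an ICMP echo message.
    static readonly int[,] Fields = { { 2, 2 }, { 4, 2 }, { 6, 2 } };

    // Reverse every listed field in place, so the unchanged parsing code
    // downstream can read the buffer with plain BitConverter calls.
    public static void SwapFields(byte[] buffer)
    {
        if (!BitConverter.IsLittleEndian)
            return; // nothing to do on a big-endian host

        for (int i = 0; i < Fields.GetLength(0); i++)
            Array.Reverse(buffer, Fields[i, 0], Fields[i, 1]);
    }
}
```

The readability problem is plain to see: nothing in that table tells the reader which field is which, so the layout knowledge lives in two places at once.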
I am making an effort to understand the programming environments available today (specifically .NET and C#), and I am very surprised that issues that existed back in 1975 are still not properly dealt with. I had one of the first (ISA) network cards developed for an (IBM) PC (targeted at Ethernet 1.0 and based on a Motorola 68000 CPU), and I clearly remember the issues involved: not just multi-word base types, but all data arrived in a different endianness (then 16-bit words), effectively killing the possibility of DMA-ing the data.
Given the role a network adapter plays, I would consider it a very small effort for it to adjust data to its host environment, yet apparently the issue is not considered big enough to resolve it that way. (?)
Bottom line: I can’t believe I’m the only one burdened with this, and I sincerely hope that someone has come up with a better solution than the ones I’m using now.
(This text was translated with the help of Google Translate, as I’m Dutch.)