
I’m still working on a properly functioning ICMP listener and puzzling over how to deal with endianness. My first approach was to copy the data out of the buffer into an appropriately sized field (initially an array, then uint etc.), reverse the byte order, and only then pass the data on to the appropriate BitConverter member. Though it works, it is not very elegant.
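For illustration, a minimal sketch of this reverse-then-convert approach might look like the following (`BigEndianReader` is a name of my own, not anything from the BCL):

```csharp
using System;

static class BigEndianReader
{
    // Reads a big-endian (network order) ushort from a buffer.
    // On a little-endian host the two bytes are copied and reversed
    // before being handed to BitConverter; on a big-endian host the
    // buffer can be used directly.
    public static ushort ReadUInt16(byte[] buffer, int offset)
    {
        if (BitConverter.IsLittleEndian)
        {
            byte[] tmp = { buffer[offset + 1], buffer[offset] };
            return BitConverter.ToUInt16(tmp, 0);
        }
        return BitConverter.ToUInt16(buffer, offset);
    }
}
```

It works, but every field type needs its own copy-and-reverse variant, which is exactly the inelegance described above.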

My second approach was to pre-process the entire buffer, driven by a two-dimensional array containing the position and length of every field that needs to be adjusted. That works too, with the added advantage that existing code needs no changes, but it severely lacks readability.

I am making an effort to understand the new programming environments available today (specifically .NET and C#), and I am very surprised that issues that existed back in 1975 are still not dealt with properly. I had one of the first (ISA) network cards developed for an IBM PC (targeted at Ethernet 1.0 and based on a Motorola 68000 CPU), and I clearly remember the issues involved: not just multi-word base types, but all data arriving in a different endianness (then 16-bit words), effectively killing the possibility of DMA-ing data.

Considering the role a network adapter plays, I would think it a very small effort for it to adjust data to its host environment, but apparently it is not considered a big enough issue to resolve it that way. (?)

Bottom line: I can’t believe that I’m the only one burdened with this, and I sincerely hope that someone has come up with a better solution than the ones I’m using now.

(This text was translated with the help of Google Translate, as I’m Dutch.)

  • You are not the only one burdened with this problem :-) See http://stackoverflow.com/questions/217980/c-little-endian-or-big-endian as well. – Vlad Nov 06 '10 at 11:49

2 Answers


Simply: don't use BitConverter if there is a remote chance of endianness being an issue (which for .NET mainly means "Mono on some hardware", AFAIK). Jon Skeet has an EndianBitConverter in his "MiscUtil" library that may help; otherwise just do the encoding via bit-shifting etc.
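A sketch of the bit-shifting route mentioned here (helper names are illustrative, not from any library):

```csharp
using System;

static class NetOrder
{
    // Assemble big-endian values by shifting bytes explicitly;
    // the result is the same regardless of host endianness, so no
    // BitConverter.IsLittleEndian check is needed.
    public static ushort ReadUInt16(byte[] b, int i)
    {
        return (ushort)((b[i] << 8) | b[i + 1]);
    }

    public static uint ReadUInt32(byte[] b, int i)
    {
        return ((uint)b[i] << 24) | ((uint)b[i + 1] << 16)
             | ((uint)b[i + 2] << 8) | b[i + 3];
    }
}
```

The shifts make the wire format explicit in the code, which is arguably more readable than reversing buffers after the fact.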

Marc Gravell

The Java approach is to just make everything big endian even if the hardware is not. Most other languages require you to do the conversion between host and network order. I don't see how you could have the network adapter do it, because it wouldn't know the structure of the data. Only when you get to the application layer do you have all the information needed to do the conversion.
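For what it's worth, the BCL does ship htons/ntohs-style helpers for the host/network-order conversion, though only with signed overloads:

```csharp
using System.Net;

class Demo
{
    static void Main()
    {
        // HostToNetworkOrder swaps byte order on little-endian hosts
        // and is a no-op on big-endian ones; NetworkToHostOrder is
        // its inverse, so the value round-trips on any platform.
        short net = IPAddress.HostToNetworkOrder((short)0x1234);
        short host = IPAddress.NetworkToHostOrder(net);
    }
}
```

These cover the simple integer cases, but for anything structured (like an ICMP header) you still have to know where each field sits, which is the point above.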

So, yes, we're all stuck doing just what you're doing here.

JOTN
  • (Thanks Vlad and JOTN.) I must be stupid: my adapter is in my system and my OS is loaded, yet the adapter would fail to recognize its environment? I know it does, but is that acceptable? – Nebbukadnezzar Nov 06 '10 at 13:05
  • The adapter doesn't know the application, and the structure of the data sent over the network is determined by the application. Your application could send little endian data if it wanted but everyone standardizes on big endian. – JOTN Nov 06 '10 at 21:09