I am working on a homework problem related to presentation formatting, and I was hoping to get pointed in the right direction. The homework problem is:
Different architectures have different conventions on bit order as well as byte order - whether the least significant bit of a byte, for example, is bit 0 or bit 7. RFC 791 [Pos81]
defines (in its Appendix B) the standard network bit order. Why is bit order then not relevant to presentation formatting?
I'm confused about how bit order would not be relevant to presentation formatting. Presentation formatting encodes the data so it can travel through the network, and then decodes the data at the receiving end so it can be used. But I know the decoding process has to take into account the architecture of the receiving computer in order for the data to be interpreted correctly, and that includes both bit and byte order. In fact, that is what conversion strategies such as canonical intermediate form and receiver-makes-right are for: they make sure the data is converted into a format that can travel through the network and can also be interpreted by the application on the receiving host.
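To make my confusion concrete, here is a minimal sketch of how I understand byte-order conversion at the presentation level (this is just my own illustration using the standard htonl/ntohl calls, not an example from the textbook): byte order has to be handled explicitly in code like this, so I would have expected bit order to need some similar explicit handling.

```c
#include <stdio.h>
#include <inttypes.h>
#include <arpa/inet.h>   /* htonl() and ntohl() */

int main(void) {
    uint32_t host_value = 0x12345678;        /* value in the host's native byte order */
    uint32_t wire_value = htonl(host_value); /* big-endian "network order" for the wire;
                                                a no-op on big-endian hosts */
    uint32_t round_trip = ntohl(wire_value); /* receiver converts back to its own order */

    printf("host: 0x%08" PRIx32 "  wire: 0x%08" PRIx32 "  back: 0x%08" PRIx32 "\n",
           host_value, wire_value, round_trip);
    return 0;
}
```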
My idea of what the answer could be is that presentation decoding occurs in the application layer, because only the application knows what format it wants the data to be in. Thus, the receiving host's architecture is not something presentation formatting has to worry about, because it knows the application will take care of converting the data.
Is there something I'm misunderstanding?
Thanks for the help.