
I'm considering moving parts of a .NET application to other computers. The obvious way to do this is simply to use WCF with a binary TCP protocol, for example as described in "Easiest way to get fast RPC with .NET?".

I'll be making a vast number of calls, and latency is a big issue. Basically, one computer will be running a physics simulator and the others will be interacting with it through an API of several hundred commands.

I'm thinking the best way is to make a custom binary protocol where each API command is identified by an int16 command ID and a sequence number, followed by the required parameters. Hardwiring the send and receive classes would remove any unnecessary overhead.
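
For illustration, a minimal sketch of what the hardwired send side might look like. The `ApplyForce` command, its ID, and the wire layout are all made up here, not part of any real API:

```csharp
// Hypothetical hardwired sender: int16 command ID + sequence number,
// followed by the parameters written directly, no general-purpose serializer.
using System.IO;
using System.Net.Sockets;

class SimProtocolClient
{
    const short CMD_APPLY_FORCE = 0x0042; // made-up command ID
    readonly BinaryWriter _writer;
    int _sequence;

    public SimProtocolClient(NetworkStream stream)
    {
        _writer = new BinaryWriter(stream);
    }

    // One hand-written method per API command.
    public void ApplyForce(int bodyId, float fx, float fy, float fz)
    {
        _writer.Write(CMD_APPLY_FORCE); // int16 command ID
        _writer.Write(_sequence++);     // sequence number
        _writer.Write(bodyId);          // required parameters, fixed layout
        _writer.Write(fx);
        _writer.Write(fy);
        _writer.Write(fz);
    }
}
```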

But that is a LOT of work since we are talking several hundred API commands.

Any thoughts on the best way to implement it?

Edit: To clarify: AFAIK serialization in .NET is not optimized. There is a relatively high penalty for serializing and deserializing objects, for example from the internal use of reflection. That is what I want to avoid, hence my thought about directly mapping (hardwiring) methods.
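
On the receive side, hardwiring would mean a switch on the command ID that reads the parameters and calls the target method directly, so no reflection is involved. Again a sketch, with a made-up command and a hypothetical simulator interface:

```csharp
// Hypothetical hardwired dispatcher: reads the frame header and calls
// the simulator method directly, avoiding reflection-based deserialization.
using System.IO;

interface IPhysicsSimulator // hypothetical simulator API
{
    void ApplyForce(int bodyId, float fx, float fy, float fz);
}

class SimProtocolServer
{
    const short CMD_APPLY_FORCE = 0x0042; // must match the sender's made-up ID
    readonly BinaryReader _reader;
    readonly IPhysicsSimulator _sim;

    public SimProtocolServer(BinaryReader reader, IPhysicsSimulator sim)
    {
        _reader = reader;
        _sim = sim;
    }

    public void DispatchOne()
    {
        short command = _reader.ReadInt16(); // command ID
        int sequence = _reader.ReadInt32();  // sequence number

        switch (command)
        {
            case CMD_APPLY_FORCE:
                _sim.ApplyForce(_reader.ReadInt32(), _reader.ReadSingle(),
                                _reader.ReadSingle(), _reader.ReadSingle());
                break;
            // ...several hundred more cases, one per API command
        }
    }
}
```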

After some searching I found one app I had a vague recollection of: http://www.protocol-builder.com/

Tedd Hansen
  • You say *latency* is a big issue, but latency is pretty much fixed. Is *bandwidth* your bottleneck here? I can suggest ways to reduce bandwidth within WCF if you want a compromise between the convenience of WCF and the performance of a custom protocol... – Marc Gravell May 24 '11 at 16:36
  • With latency I mean the time it takes from a physics event (such as a collision, which could be in the thousands "at the same time") to be processed and acted upon. (Yes I will have to throttle number of events as well.) – Tedd Hansen May 24 '11 at 17:07
  • An effective way to reduce the cost of that is to send a single message with a larger payload, which ties back into the bandwidth reduction. I'm pretty fond of swapping out the serializer, for example (DataContractSerializer is not known for being terse); see the sketch after these comments. – Marc Gravell May 24 '11 at 17:11
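
A minimal sketch of the batching Marc Gravell describes: collecting many small events into one data contract so a single WCF call carries a larger payload. The contract and member names here are assumptions for illustration, not from the question:

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class CollisionEvent // hypothetical physics event
{
    [DataMember] public int BodyA { get; set; }
    [DataMember] public int BodyB { get; set; }
    [DataMember] public float Impulse { get; set; }
}

[ServiceContract]
public interface ISimulationEvents
{
    // One call per batch instead of one call per collision,
    // amortizing the per-message overhead across many events.
    [OperationContract(IsOneWay = true)]
    void ReportCollisions(List<CollisionEvent> batch);
}
```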

1 Answer


Reducing the total traffic (which would be the main benefit of a custom protocol vs. WCF) is not going to have a large effect on latency. The main issue is keeping the amount of data for the "required parameters" to a minimum, so each request is relatively small. The default serialization in WCF is fairly efficient already, especially when using TCP on a local network.

In a scenario like the one you're describing, with many clients connecting to a centralized server, the transport of the messages itself is unlikely to be the bottleneck; processing the messages will be, and the transport mechanism won't matter as much.

Personally, I would just use WCF, and not bother with trying to build a custom protocol. If, once you have it running, you find that you have a problem, you can easily abstract out the transport to a custom protocol at that time, and map it to the same interfaces. This will require very little extra code vs. just doing a custom protocol stack up front, and likely keep the code much cleaner (since the custom protocol will be isolated into a single mapping to the "clean" API). If, however, you find the transport is not a bottleneck, you will have saved yourself a huge amount of labor and effort.
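
A sketch of that abstraction: the application codes against a plain interface, with WCF over binary-encoded net.tcp as the first transport behind it, so a hand-rolled protocol could implement the same interface later. The names and address here are illustrative:

```csharp
using System.ServiceModel;

// The "clean" API the rest of the application codes against.
[ServiceContract]
public interface ISimulatorApi
{
    [OperationContract]
    void ApplyForce(int bodyId, float fx, float fy, float fz);
    // ...the remaining few hundred commands
}

static class SimulatorClientFactory
{
    // Today: WCF over net.tcp, which uses binary encoding by default.
    public static ISimulatorApi CreateWcfClient(string address)
    {
        var factory = new ChannelFactory<ISimulatorApi>(
            new NetTcpBinding(SecurityMode.None),
            new EndpointAddress(address)); // e.g. "net.tcp://host:9000/sim"
        return factory.CreateChannel();
    }

    // Later, if profiling shows the transport really is the bottleneck,
    // a custom binary-protocol implementation of ISimulatorApi can be
    // swapped in here without touching the calling code.
}
```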

Reed Copsey