I have built a distributed client-server-database Java application based on sockets. The clients send serialized objects to the servers (currently there are two servers), and the servers deserialize the objects and store some of each object's content in a PostgreSQL database.
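For context, this is roughly what the send/receive path looks like. It is only a sketch: the class `Payload`, the host `server-host`, and port `9000` are placeholders, and the real server additionally writes parts of the object to PostgreSQL.

```java
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.ServerSocket;
import java.net.Socket;

// Placeholder for the objects that are actually sent; the real class carries
// the content that the server later stores in PostgreSQL.
class Payload implements Serializable {
    private static final long serialVersionUID = 1L;
    private final byte[] content;
    Payload(int size) { this.content = new byte[size]; }
}

class Client {
    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("server-host", 9000);   // host/port are placeholders
             ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream())) {
            out.writeObject(new Payload(1400));                 // one serialized object per write
            out.flush();
        }
    }
}

class Server {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9000);
             Socket socket = listener.accept();
             ObjectInputStream in = new ObjectInputStream(socket.getInputStream())) {
            Payload received = (Payload) in.readObject();       // deserialize; the real server then stores content in the database
        }
    }
}
```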

Now I'm benchmarking my system. I measured the size of the serialized objects as well as the throughput, and I made a very strange discovery that I cannot explain.
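The serialized size is determined roughly like this (illustrative helper, not my exact benchmark code): the object is serialized into an in-memory buffer and the byte count is taken from there.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;

class SizeProbe {
    // Serializes the object into memory and returns the byte count,
    // i.e. the same number of payload bytes that later go over the socket.
    static int serializedSize(Object obj) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buffer)) {
            out.writeObject(obj);
        }
        return buffer.size();
    }
}
```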

Up to an object size of around 1400 bytes (or a bit less), the throughput decreases. From around 1400 bytes to about 2000 bytes (or a bit above), the throughput stays constant. And from around 2000 bytes up to 2600 bytes (the largest size I measured), the throughput actually increases.

I cannot explain this behaviour. My expectation was that the throughput would always decrease with increasing object size, and that the drop would become much larger once the 1500-byte MTU is exceeded. But this does not seem to be true, and I especially cannot explain the constant region and the subsequent increase at all.
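To sanity-check the MTU assumption, one thing I could do is count the bytes that are actually written to the socket per object, since the on-wire payload (serialized data plus `ObjectOutputStream` framing) is what gets compared against the roughly 1460-byte TCP MSS of a 1500-byte MTU. A sketch of such a counting wrapper (hand-rolled, not a library class):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hand-rolled wrapper that counts every byte written to the underlying socket
// stream, so the actual per-object payload can be compared against the TCP MSS.
class CountingOutputStream extends FilterOutputStream {
    private long count = 0;

    CountingOutputStream(OutputStream out) { super(out); }

    @Override public void write(int b) throws IOException {
        out.write(b);
        count++;
    }

    @Override public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len);
        count += len;
    }

    long getCount() { return count; }
}
```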
