
I have written a Java server for remote storage (an iSCSI target). The client can write data by sending sequences of packets carrying the data payload. These packets consist of a fixed-length header (48 bytes) followed by a variable-length data segment. The length of the data segment is specified in the header and can be considered fixed (8 KiB).

Receiving a data packet is a two-part process. First, the header is read into a ByteBuffer with a size of 48 bytes. Immediately afterwards a second ByteBuffer is created via ByteBuffer.allocate(...); its size matches the data segment length specified in the header. The data segment is then read into this second ByteBuffer using the SocketChannel.read(ByteBuffer) method. In the simple case this process works as expected: larger data segments and longer sequences increase I/O speed. By "simple case" I mean that there is a single Thread which uses a blocking SocketChannel to receive (and process) packets. However, if a second Thread with its own TCP connection and associated SocketChannel is added, the SocketChannel.read(ByteBuffer) execution times rise to more than 2.5 ms while the client is sending 32 KiB write commands (i.e. 4 consecutive data packets) on both connections. This is an increase by a factor of 8 to 10.

I would like to stress that at this stage the two Threads do not share any resources apart from the same network interface card. Each SocketChannel's receive buffer size is 43690 bytes (larger sizes had no effect on this phenomenon).
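For completeness, a socket's receive buffer can be requested and verified per channel via SO_RCVBUF. This is only a minimal sketch of how such a setting might be checked; the operating system may round the requested size up or clamp it (Linux, for instance, typically reports back double the requested value), so reading the effective value back is worthwhile when tuning.

```java
import java.net.StandardSocketOptions;
import java.nio.channels.SocketChannel;

public class RcvBufCheck {
    // Request a given kernel receive buffer size and return what the
    // OS actually granted, which may differ from the request.
    public static int setReceiveBuffer(int requested) throws Exception {
        SocketChannel ch = SocketChannel.open();
        try {
            ch.setOption(StandardSocketOptions.SO_RCVBUF, requested);
            return ch.getOption(StandardSocketOptions.SO_RCVBUF);
        } finally {
            ch.close();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("SO_RCVBUF = " + setReceiveBuffer(43690));
    }
}
```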

Any ideas what might be causing this or how this problem could be fixed?

andreas

1 Answer


> ... while the client is sending 32 KiB write commands (i.e. 4 consecutive data packets) on both connections.

Can you provide some details about the test setup? Is the client sending the packets serially to both connections? Depending on the setup, the increase could then be client-driven.

Is it a localhost setup (client and server on one machine), or are client and server on different hosts? Have you tested both? Don't trick yourself into seeing an execution-time increase in a localhost setup, especially if there is only one CPU and the test client also runs locally, perhaps even single-threaded.

  • The client and the server are running on different machines, both with plenty of idle cores. But you are right, the fault seems to lie on the client side. I didn't see this at first because I had too much faith in the quality of my testing software and switched versions during my preliminary measurements. I was using [iometer](http://www.iometer.org/) together with the Microsoft iSCSI Initiator, and one of these seems to introduce large delays, irrespective of which target/server it is connected to. – andreas Feb 26 '12 at 18:42