
I'm using an SSL connection to a server (which I have no control over and no access to its code; it could be the server's fault, but I want to be sure). When I send data (a byte array) for the first time I get the correct response, but on subsequent sends I get the response expected by the previous send. For example, if I send x, I expect the server to reply x; y for y, z for z, etc.

When the app starts, I call x and get x. But then I call y and get x, call z and get y, call x and get z, etc. Here's the generic code implemented for each command to send and receive (bytes is initialized with a predetermined set of bytes to simulate, say, command x):

byte[] bytes = new byte[6];

if (socket == null || !socket.isConnected() || socket.isClosed()) {
    try {
        getSocket(localIp);
    } catch (IOException e1) {
        e1.printStackTrace();
    }
}

if (socket == null || !socket.isConnected()) {
    try {
        getSocket(globalIp);
    } catch (IOException e1) {
        e1.printStackTrace();
        return null;
    }
}

byte[] receivedBytes = null;

String sentBString = "sendGetConfig: ";
for (int i = 0; i < bytes.length; i++) {
    sentBString += String.valueOf(bytes[i]) + ", ";
}
System.out.println(sentBString);

if (socket != null) {
    try {
        DataOutputStream os = new DataOutputStream(socket.getOutputStream());
        os.write(bytes);

        DataInputStream is = new DataInputStream(new BufferedInputStream(socket.getInputStream()));
        int tries = 0;
        while (tries < 20 && (receivedBytes == null || receivedBytes.length == 0)) {
            if (is.markSupported()) {
                is.mark(2048);
            }

            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            int nRead;
            byte[] data = new byte[1024];

            try {
                nRead = is.read(data, 0, data.length);
                buffer.write(data, 0, nRead);
            } catch (Exception e) {
            }

            buffer.flush();

            receivedBytes = buffer.toByteArray();
            if (receivedBytes.length == 0)
                is.reset();
        }

        is.close();
        os.close();
        socket.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}

I know the implementation of the read is not perfect; it's the result of a workaround I did because the server does not send any end-of-stream indication, and so any read command implemented in a loop results in a timeout exception.
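If the protocol uses fixed-length replies (the 6-byte command above suggests it might), one alternative to the mark/reset workaround is `DataInputStream.readFully`, which blocks until exactly the requested number of bytes has arrived, with no end-of-stream marker needed. This is only a sketch under that assumption: the reply length of 10 and the in-memory stream standing in for the socket are illustrative, not taken from the actual protocol.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class FixedLengthRead {
    // Reads exactly `length` bytes from the stream, blocking until all
    // of them have arrived. Throws EOFException if the stream ends early.
    static byte[] readExactly(DataInputStream in, int length) throws IOException {
        byte[] reply = new byte[length];
        in.readFully(reply);
        return reply;
    }

    public static void main(String[] args) throws IOException {
        // An in-memory stream stands in for socket.getInputStream() here.
        byte[] simulatedReply = {10, 20, 30, 40, 50, 60, 70, 80, 90, 100};
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(simulatedReply));
        byte[] reply = readExactly(in, 10);
        System.out.println(reply.length); // prints 10
    }
}
```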

Any help will be greatly appreciated.

  • Test the server with `$ echo -e "GET / HTTP/1.0\r\n" | openssl s_client -connect www.google.com:443 -ign_eof`. – jww Jun 23 '14 at 07:56

1 Answer


the server does not send any end of stream indication

Of course the server sends an EOS indication. The problem is that you're completely ignoring it. When the peer has closed the connection and there is no more pending data to be read, read() returns -1.

and so any read command implemented in a loop results in a timeout exception

Nonsense.

The correct form of your loop is as follows:

while ((count = in.read(buffer)) > 0)
{
    out.write(buffer, 0, count);
}

substituting your own variable names as required.

The reason you keep reading the same data is because you keep resetting the stream to the same point. Just throw your mark() and reset() calls away.
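To make the contrast concrete, here is a minimal self-contained sketch of the loop above, with the mark/reset calls removed, run against an in-memory stream that stands in for the socket's input stream. The helper name `readAll` and the simulated 6-byte reply are illustrative, not from the poster's code.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAll {
    // Reads until the peer closes the connection: read() returns -1
    // at end of stream, which terminates the loop.
    static byte[] readAll(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int count;
        while ((count = in.read(buffer)) > 0) {
            out.write(buffer, 0, count);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // A closed ByteArrayInputStream behaves like a peer that has
        // sent its reply and closed the connection.
        byte[] simulatedReply = {1, 2, 3, 4, 5, 6};
        byte[] received = readAll(new ByteArrayInputStream(simulatedReply));
        System.out.println(received.length); // prints 6
    }
}
```

Note that this only returns once the server closes the connection; if the server keeps the connection open between commands, a fixed-length or length-prefixed read is needed instead.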

user207421
  • No, still doesn't work... any byte array (buffer) initialized to a length greater than 10 results in a timeout exception, which coincidentally is the number of bytes supposed to be sent... – user3682862 Jun 24 '14 at 09:33