
In a previous Stack Overflow question, people were super helpful in showing me the error of my ways while building my Akka socket server. I now have an Akka socket client that can send messages with the following framing:

message length: 4 bytes
message type: 4 bytes
message payload: (length) bytes

Here's the iOS code that I'm using to send the message:

    NSInputStream *inputStream;
    NSOutputStream *outputStream;

    CFReadStreamRef readStream;
    CFWriteStreamRef writeStream;
    CFStreamCreatePairWithSocketToHost(NULL, (CFStringRef)@"localhost", 9999, &readStream, &writeStream);
    inputStream = (__bridge_transfer NSInputStream *)readStream;
    outputStream = (__bridge_transfer NSOutputStream *)writeStream;

    [inputStream setDelegate:self];
    [outputStream setDelegate:self];

    [inputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
    [outputStream scheduleInRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];

    // [inputStream open];
    [outputStream open];
    NSLog(@"NSData raw zombie is %d bytes.", [rawZombie length]);
    uint32_t length = (uint32_t)htonl([rawZombie length]);
    uint32_t messageType = (uint32_t)htonl(1);

    NSLog(@"Protobuf byte size %d", zombieSighting->ByteSize());

    [outputStream write:(uint8_t *)&length maxLength:4];
    [outputStream write:(uint8_t *)&messageType maxLength:4];
    [outputStream write:(uint8_t *)[rawZombie bytes] maxLength:length];

    [outputStream close];

The 'rawZombie' variable (NSData *) comes from the following method:

- (NSData *)getDataForZombie:(kotancode::ZombieSighting *)zombie {
    std::string ps = zombie->SerializeAsString();
    NSLog(@"raw zombie string:\n[%s]", ps.c_str());
    return [NSData dataWithBytes:ps.c_str() length:ps.size()];
}

The symptom I'm seeing is that I receive the message sent by iOS, and its length is correct, as is the message type (1), and the body comes across fine. The Akka server deserializes the message with the Scala protobuf classes and prints all of its values perfectly. The problem is that immediately after that message, the Akka server thinks it got another one (more data apparently came in on the stream). The trailing data is different each time I run the iOS app.

For example, here's some trace output from two consecutive message receives:

received message of length 45 and type 1
name:Kevin
lat: 41.007
long: 21.007
desc:This is a zombie
zombie type:FAST
received message of length 7 and type 4
received message of length 45 and type 1
name:Kevin
lat: 41.007
long: 21.007
desc:This is a zombie
zombie type:FAST
received message of length 164 and type 1544487554

So you can see that right after the Akka server receives the right data for the message, it also receives some random arbitrary crap. Given that my Akka client works properly without this extra data, I'm assuming there's something wrong with how I'm writing the protobuf object to the NSStream. Can anybody spot my stupid mistake? I'm sure that's what is happening here.

Kevin Hoffman
  • Did you log the lengths of the messages you *sent* for comparison? – Marc Gravell Feb 03 '13 at 19:39
  • All of the length and byte counts are 45. [rawZombie length] is 45 on the sending(iOS) side, which is the same as the ByteSize() method on the protobuf object (returns 45). – Kevin Hoffman Feb 03 '13 at 19:49
  • is the "length 7" part of the bad data, then? Also: most times I have seen this, it has either related to people sending the backing buffer of a MemoryStream (rather than the trimmed data), or messing up the network IO. Is there any chance this is the case here too? – Marc Gravell Feb 03 '13 at 20:06
  • Length 7 is part of the bad data ... it just means that there was a number 7 in the 4 bytes following the legitimate protobuf message. I did a little check and the NSStream "write" method returns the number of bytes written to the stream. In my case, it wrote 653276 bytes. Clearly that's where the extra crap is coming from, but I don't know -why- this method wrote that much, especially when I told it the max length was 45. – Kevin Hoffman Feb 03 '13 at 20:18

1 Answer


Bloody hell. I can't believe I didn't see this. In this line of code:

[outputStream write:(uint8_t *)[rawZombie bytes] maxLength:length]

I am using the value "length" as the maximum number of bytes to transmit. Unfortunately, that value has already had its byte order flipped in preparation for transmission across the network. I replaced "length" with [rawZombie length] and it worked like a charm.

:(

Kevin Hoffman