
I am trying to implement a Netty decoder for a stream of bytes received over TCP.

This is the current implementation:

import io.netty.handler.codec.ByteToMessageDecoder
import io.netty.channel.ChannelHandlerContext
import io.netty.buffer.ByteBuf
import io.netty.handler.codec.compression.{ZlibCodecFactory, ZlibWrapper}
import java.util.List

object MyCustomDecoder extends ByteToMessageDecoder {

  val GZIP_HEADER_1 = 0x1F
  val GZIP_HEADER_2 = 0x8B

  def decode(ctx: ChannelHandlerContext, in: ByteBuf, out: List[AnyRef]): Unit = {

    // Read the header values in little endian
    val blockTime = java.lang.Long.reverseBytes(in.readLong()) // little endian
    val blockSeq = java.lang.Integer.reverseBytes(in.readInt()) // little endian
    val blockSize = java.lang.Integer.reverseBytes(in.readInt()) // little endian

    // Check whether the compressed block size matches the value published in the header
    if (in.readableBytes() - 16 == blockSize) {

      val h1 = in.getUnsignedByte(in.readerIndex())
      val h2 = in.getUnsignedByte(in.readerIndex() + 1)

      // Switch the pipeline to GZip decompression if the block starts with the GZip magic bytes
      if (isGzip(h1, h2))
        enableGzip(ctx)
      else
        System.err.println("Not in GZip format")
    }

  }

  // Check if header matches GZIP for decompression
  private def isGzip(h1: Int, h2: Int): Boolean = h1 == GZIP_HEADER_1 && h2 == GZIP_HEADER_2

  private def enableGzip(ctx: ChannelHandlerContext): Unit = {
    val p = ctx.pipeline()
    //p.addLast("gzipdeflater", ZlibCodecFactory.newZlibEncoder(ZlibWrapper.GZIP))
    p.addLast("gzipinflater", ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP))
    //p.remove(this)
  }

}

The decoder is supposed to take in the message using the Netty implementation, verify that the data in the message matches the block size published in the message header, and then pass it on to the next level of the pipeline, which inflates the compressed content using GZip.

My current implementation is able to read the data, but I am unsure how to remove the message header bytes (namely blockTime, blockSeq and blockSize) from the output that gets passed to the next pipeline level, so that a workable message is decompressed and then passed on to the final message handler.

My data definition is as follows: <blockTime><blockSequence><blockSize><block>

I based this decoder subclass for the Netty pipeline on this example.

Any help with this would be greatly appreciated.

Thank you.

autronix

1 Answer


There are two steps to perform:

  1. Remove the bytes from the ByteBuf object (in)

This is already achieved in your code by the reads below (each read[Type]() call advances the ByteBuf reader index):

val blockTime = java.lang.Long.reverseBytes(in.readLong()) // little endian
val blockSeq = java.lang.Integer.reverseBytes(in.readInt()) // little endian
val blockSize = java.lang.Integer.reverseBytes(in.readInt()) // little endian

Additionally, the previously read ("used") bytes can be removed from the ByteBuf object by issuing the following statement: in.discardReadBytes()
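
To illustrate, assuming a frame that contains exactly the 16-byte header followed by the compressed block:

// Before the header reads:            readerIndex == 0,  readableBytes == 16 + blockSize
// After readLong and the two readInt: readerIndex == 16, readableBytes == blockSize
in.discardReadBytes() // drops the 16 consumed bytes and shifts the remaining block to index 0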

  2. Pass the message along to the next level

Assuming that the next decoder/handler in the pipeline takes a ByteBuf as an input (for example another ByteToMessageDecoder subclass), this is simply achieved as follows:

out.add(in)

Given that the header bytes have already been consumed in step 1, what is passed on to the next pipeline level is the remaining bytes represented by the ByteBuf object.
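
Putting the two steps together, a minimal sketch of the decode method might look like this (it reuses the imports and field names from the question and leaves out the GZip header check for brevity). It forwards the block in its own buffer via readBytes, and uses markReaderIndex/resetReaderIndex so the decoder simply waits when a block has not fully arrived yet:

def decode(ctx: ChannelHandlerContext, in: ByteBuf, out: List[AnyRef]): Unit = {
  if (in.readableBytes() >= 16) {            // wait until the full 16-byte header is available
    in.markReaderIndex()                     // remember where this frame starts

    val blockTime = java.lang.Long.reverseBytes(in.readLong())   // little endian
    val blockSeq  = java.lang.Integer.reverseBytes(in.readInt()) // little endian
    val blockSize = java.lang.Integer.reverseBytes(in.readInt()) // little endian

    if (in.readableBytes() >= blockSize) {
      // blockTime and blockSeq are available here if the next handler needs them.
      // The header bytes were consumed by the reads above, so only the block is forwarded.
      out.add(in.readBytes(blockSize))
    } else {
      in.resetReaderIndex()                  // block not fully received yet; try again on the next read
    }
  }
}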

Processing then continues as necessary through the pipeline structure defined in the channel initializer.
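
For reference, a channel initializer along these lines is assumed; MyChannelInitializer and MyMessageHandler are placeholder names, and MyCustomDecoder is assumed to be defined as a class rather than an object, since a ByteToMessageDecoder keeps per-connection state and needs one instance per channel:

import io.netty.channel.ChannelInitializer
import io.netty.channel.socket.SocketChannel

class MyChannelInitializer extends ChannelInitializer[SocketChannel] {
  override def initChannel(ch: SocketChannel): Unit = {
    val p = ch.pipeline()
    p.addLast("blockDecoder", new MyCustomDecoder())     // strips the header and forwards the block
    // "gzipinflater" is added dynamically by MyCustomDecoder once the GZip magic bytes are seen
    p.addLast("messageHandler", new MyMessageHandler())  // placeholder for the final message handler
  }
}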

autronix