
I am trying to asynchronously send a large number of HTTP POST requests to one server. My goal is to compare each response to its original request.

To do so I am following the Netty Snoop example.

However, this example (and the other HTTP examples) does not cover how to send multiple requests asynchronously, nor how to subsequently link each response to its corresponding request.

All similar questions (such as this one, this one, or this one) implement the SimpleChannelUpstreamHandler class, which is from Netty 3 and no longer exists in 4.0 (see the Netty 4.0 documentation).

Does anyone have an idea how to solve this in Netty 4.0?

Edit:

My problem is that although I write lots of messages to the channel, the responses come back very slowly (about 1 response/sec, whereas I hope to receive a few thousand per second). To clarify this, let me post what I have so far. I am sure that the server I send the requests to can handle a lot of traffic.

What I have so far:

import java.net.URI
import java.nio.charset.StandardCharsets
import java.io.File

import io.netty.bootstrap.Bootstrap
import io.netty.buffer.{Unpooled, ByteBuf}
import io.netty.channel.{ChannelHandlerContext, SimpleChannelInboundHandler, ChannelInitializer}
import io.netty.channel.socket.SocketChannel
import io.netty.channel.socket.nio.NioSocketChannel
import io.netty.handler.codec.http._
import io.netty.handler.timeout.IdleStateHandler
import io.netty.util.{ReferenceCountUtil, CharsetUtil}
import io.netty.channel.nio.NioEventLoopGroup

import scala.io.Source

object ClientTest {

  val URL = System.getProperty("url", MY_URL)     
  val configuration = new Configuration

  def main(args: Array[String]) {
    println("Starting client")
    start()
  }

  def start(): Unit = {

    val group = new NioEventLoopGroup()

    try {

      val uri: URI = new URI(URL)
      val host: String= {val h = uri.getHost(); if (h != null) h else "127.0.0.1"}
      val port: Int = {val p = uri.getPort; if (p != -1) p else 80}

      val b = new Bootstrap()

      b.group(group)
      .channel(classOf[NioSocketChannel])
      .handler(new HttpClientInitializer())

      val ch = b.connect(host, port).sync().channel()

      val logFolder: File = new File(configuration.LOG_FOLDER)
      val fileToProcess: Array[File] = logFolder.listFiles()

      for (file <- fileToProcess){
        val name: String = file.getName()
        val source = Source.fromFile(configuration.LOG_FOLDER + "/" + name)

        val lineIterator: Iterator[String] = source.getLines()

        while (lineIterator.hasNext) {
            val line = lineIterator.next()
            val jsonString = parseLine(line)
            val request = createRequest(jsonString, uri, host)
            ch.writeAndFlush(request)
        }
        println("closing")
        ch.closeFuture().sync()
      }
    } finally {
      group.shutdownGracefully()
    }
  }

  private def parseLine(line: String) = {
    //do some parsing to get the json string I want
  }

  def createRequest(jsonString: String, uri: URI, host: String): FullHttpRequest = {
    val bytebuf: ByteBuf = Unpooled.copiedBuffer(jsonString, StandardCharsets.UTF_8)

    val request: FullHttpRequest = new DefaultFullHttpRequest(
      HttpVersion.HTTP_1_1, HttpMethod.POST, uri.getRawPath())
    request.headers().set(HttpHeaders.Names.HOST, host)
    request.headers().set(HttpHeaders.Names.CONNECTION, HttpHeaders.Values.KEEP_ALIVE)
    request.headers().set(HttpHeaders.Names.ACCEPT_ENCODING, HttpHeaders.Values.GZIP)
    request.headers().add(HttpHeaders.Names.CONTENT_TYPE, "application/json")

    request.headers().set(HttpHeaders.Names.CONTENT_LENGTH, bytebuf.readableBytes())
    request.content().clear().writeBytes(bytebuf)

    request
  }
}

class HttpClientInitializer() extends ChannelInitializer[SocketChannel] {

  override def initChannel(ch: SocketChannel) = {
    val pipeline = ch.pipeline()

    pipeline.addLast(new HttpClientCodec())

    // aggregates all http messages into one if content is chunked
    pipeline.addLast(new HttpObjectAggregator(1048576))

    pipeline.addLast(new IdleStateHandler(0, 0, 600))

    pipeline.addLast(new HttpClientHandler())
  }
}

class HttpClientHandler extends SimpleChannelInboundHandler[HttpObject] {

  override def channelRead0(ctx: ChannelHandlerContext, msg: HttpObject) {
    msg match {
      case res: FullHttpResponse =>
        println("response is: " + res.content().toString(CharsetUtil.US_ASCII))
      case _ => // with the HttpObjectAggregator in the pipeline, only FullHttpResponse is expected here
    }
    // no explicit release needed: SimpleChannelInboundHandler releases msg after channelRead0 returns
  }

  override def exceptionCaught(ctx: ChannelHandlerContext, e: Throwable) = {
    println("HttpHandler caught exception", e)
    ctx.close()
  }
}
  • Isn't writing to the channel asynchronous? As a result of the write you get a Future, and it is up to you how to deal with it – user1582639 May 09 '16 at 13:50
  • I am also learning Netty 4.0. Here is my understanding of the design. The first thing to keep in mind is that Netty 4 guarantees that all registered handlers of a channel are executed in a single thread, so no synchronization is needed unless you use shared handlers. Therefore all your submitted requests will be sent sequentially through the channel, and the responses will be received in the same sequence. So by keeping a data structure such as a queue of all requests in your duplex handler, you can always poll the corresponding request for the latest received response (see the sketch after these comments). – user1582639 May 09 '16 at 20:48
  • Thank you for the replies! My problem is that although I write lots of messages to the channel, the responses come back very slowly (1 response/sec, whereas I hope to receive a few thousand per second). To clarify this, let me post what I have so far. – Mart May 10 '16 at 12:28
  • Can you scale your event loop group to more threads and check whether the response throughput improves? – user1582639 May 10 '16 at 15:20
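
To make the queue idea above concrete, here is a minimal, untested sketch (the class name CorrelatingHandler and the println are illustrative, not from the original post). It could replace HttpClientHandler in the pipeline: every outgoing FullHttpRequest is queued on write, and since HTTP/1.1 responses on a single channel arrive in request order, the head of the queue is always the request that belongs to the next response.

import java.util.ArrayDeque

import io.netty.channel.{ChannelDuplexHandler, ChannelHandlerContext, ChannelPromise}
import io.netty.handler.codec.http.{FullHttpRequest, FullHttpResponse}
import io.netty.util.CharsetUtil

class CorrelatingHandler extends ChannelDuplexHandler {

  // requests in flight, oldest first; only this channel's event-loop thread touches it
  private val pending = new ArrayDeque[FullHttpRequest]()

  override def write(ctx: ChannelHandlerContext, msg: AnyRef, promise: ChannelPromise): Unit = {
    msg match {
      case req: FullHttpRequest => pending.addLast(req.retain()) // keep it until its response arrives
      case _ =>
    }
    ctx.write(msg, promise)
  }

  override def channelRead(ctx: ChannelHandlerContext, msg: AnyRef): Unit = {
    msg match {
      case res: FullHttpResponse =>
        val req = pending.pollFirst() // the request this response answers
        if (req != null) {
          println("request " + req.getUri + " -> status " + res.getStatus +
            ", body " + res.content().toString(CharsetUtil.US_ASCII))
          req.release()
        }
        res.release()
      case other => ctx.fireChannelRead(other)
    }
  }
}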

1 Answer


writeAndFlush is already asynchronous and gives you a handle per request:

ChannelFuture cf = channel.writeAndFlush(createRequest());

"nor how to link them subsequently to the corresponding requests."

Can Netty assign multiple IO threads to the same Channel? No: the worker thread, once assigned to a channel, does not change for the lifetime of that channel, so we do not benefit from additional threads. This is because you are keeping the connection alive, and so the channel is kept alive.

To fix this problem, you might consider a pool of channels (say 30) and spread your requests across it (a Scala sketch of the same idea follows the Java snippet below).

int concurrent = 30;

// Start the client.
ChannelFuture[] channels = new ChannelFuture[concurrent];
for (int i = 0; i < channels.length; i++) {
    channels[i] = b.connect(host, port).sync();
}

for (int i = 0; i < 1000; i++) {
    ChannelFuture requestHandle = process(channels[(i + 1) % concurrent]);
    // do something with the request handle
}

for (int i = 0; i < channels.length; i++) {
    channels[i].channel().closeFuture().sync();
}
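
For the Scala code in the question, the same idea could look roughly like this (an untested sketch; the pool size of 30 and the round-robin choice are arbitrary): open several connections from the same Bootstrap and spread the requests over them instead of pushing everything through one channel.

val concurrent = 30

// open `concurrent` connections from the same Bootstrap
val channels = (0 until concurrent).map(_ => b.connect(host, port).sync().channel())

var i = 0
while (lineIterator.hasNext) {
  val request = createRequest(parseLine(lineIterator.next()), uri, host)
  channels(i % concurrent).writeAndFlush(request) // round-robin over the pool
  i += 1
}

// wait for the channels to close, as in the original code
channels.foreach(_.closeFuture().sync())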

HTH

  • I think that even with one channel he should achieve more than 1 message per second from the server side. My assumption is that the author piles the channel up with requests without giving it a chance to process the responses. – user1582639 May 10 '16 at 18:30
  • concurrency - time (milliseconds) to process 1000 requests (will vary based on the endpoint): 1 - 55219, 10 - 48749, 30 - 13364, 100 - 29106. Up to a point, increasing the number of threads improves performance, but beyond that the context switching hurts it. – Amod Pandey May 10 '16 at 22:58
  • user1582639, what you mentioned was indeed one of the problems. Limiting the number of in-flight requests improved performance a lot. @AmodPandey: using a pool of channels also works and improves performance even further. However, I still don't understand how to link the initial requests to the corresponding responses. writeAndFlush does indeed return a ChannelFuture, but a listener added to it completes once the request has been sent, which does not let me connect it to the response. – Mart May 12 '16 at 12:33
  • Mart - By virtue of its design, in the ClientTest we can only figure out whether a request was placed successfully on the endpoint or not. The HttpClientHandler will handle any response from the server (it will not know the corresponding request). But if you really want the correlation between request and response, there are ways to do that: 1. In a ChannelOutboundHandler, set an attribute on the channel which you can retrieve in HttpClientHandler (see the sketch after these comments). 2. Have the server return a request id that is sent with the request. On another note, you might be good using the Apache Sync Http Client. – Amod Pandey May 14 '16 at 12:56
  • Also have a look at https://github.com/netty/netty/blob/4.1/example/src/main/java/io/netty/example. Please let me know if you would like to see sample code for the above suggestion. – Amod Pandey May 14 '16 at 12:56
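
A minimal sketch of the first option, assuming the pipeline from the question (the names RequestRecorder and pendingRequests are made up): an outbound handler records each written request in a per-channel queue stored in a channel attribute, and HttpClientHandler polls that queue when a response arrives. A queue is used rather than a single value because several requests can be in flight on the same channel, and their responses come back in request order.

import java.util.ArrayDeque

import io.netty.channel.{ChannelHandlerContext, ChannelOutboundHandlerAdapter, ChannelPromise}
import io.netty.handler.codec.http.FullHttpRequest
import io.netty.util.AttributeKey

object RequestRecorder {
  // per-channel queue of requests that have been written but not yet answered
  val Pending: AttributeKey[ArrayDeque[FullHttpRequest]] =
    AttributeKey.valueOf("pendingRequests")
}

class RequestRecorder extends ChannelOutboundHandlerAdapter {
  override def write(ctx: ChannelHandlerContext, msg: AnyRef, promise: ChannelPromise): Unit = {
    msg match {
      case req: FullHttpRequest =>
        val attr = ctx.channel().attr(RequestRecorder.Pending)
        if (attr.get() == null) attr.setIfAbsent(new ArrayDeque[FullHttpRequest]())
        attr.get().addLast(req.retain()) // released once the response has been handled
      case _ =>
    }
    ctx.write(msg, promise)
  }
}

// In HttpClientHandler.channelRead0 the original request can then be looked up:
//   val pending = ctx.channel().attr(RequestRecorder.Pending).get()
//   val originalRequest = if (pending != null) pending.pollFirst() else null
//   // ... compare res with originalRequest, then originalRequest.release() ...
// and RequestRecorder is added to the pipeline in HttpClientInitializer:
//   pipeline.addLast(new RequestRecorder())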