
From what I understand, a verticle is only ever executed by a single thread, and always by the same thread. Vert.x can still use all the CPUs in the machine by creating one event loop thread per core, and each thread can dispatch messages to multiple verticles.

But when I profile a Vert.x HTTP server running in a standard verticle during a performance test, I only ever see one thread handling all the processing (vert.x-eventloop-thread-0).

What can I do to have all 8 of my event loop threads processing requests for this verticle?

// Deploy one LocalVerticle instance per event loop thread and block until the
// deployment either completes or fails.
CountDownLatch latch = new CountDownLatch(1);
final List<Throwable> errors = new ArrayList<>();

local = new Local(getVertx(), settings, options, connectionHandler, exceptionHandler);

// The Local object cannot go into the JSON deployment config, so it is stashed
// in a static map and looked up by its hash code inside the verticle.
LocalVerticle.vertxDeployMap.put(local.hashCode(), local);

DeploymentOptions dop = new DeploymentOptions()
        .setInstances(new VertxOptions().getEventLoopPoolSize())
        .setConfig(new JsonObject().put("local", local.hashCode()));

Network.getVertx().deployVerticle(LocalVerticle.class.getName(), dop, (event) -> {
    if (!event.failed()) {
        latch.countDown();
    } else {
        errors.add(event.cause());
    }
});

boolean await = latch.await(10, TimeUnit.SECONDS);
if (!await) {
    if (errors.isEmpty()) {
        throw new Exception("Failed to initialize Local Verticle");
    } else {
        throw new Exception("Failed to initialize Local Verticle", errors.get(0));
    }
}

LocalVerticle.vertxDeployMap.remove(local.hashCode());


public class LocalVerticle extends AbstractVerticle {

    // Shared between the deploying thread and the event loop threads that run
    // start(), so it needs to be a thread-safe map.
    static final Map<Integer, Local> vertxDeployMap = new ConcurrentHashMap<>();
    
    private HttpServer httpServer;

    @Override
    public void start() throws Exception {
        
        Local local = vertxDeployMap.get(this.config().getInteger("local"));

        HttpServerOptions options = local.getOptions();
        Handler<HttpConnection> connectionHandler = local.getConnectionHandler();
        Handler<Throwable> exceptionHandler = local.getExceptionHandler();
        Router router = local.getRouter();

        // Each verticle instance creates its own HttpServer on the same port,
        // but the Router and handlers are shared across all instances.
        this.httpServer = this.vertx
                .createHttpServer(options)
                .exceptionHandler(exceptionHandler)
                .connectionHandler(connectionHandler)
                .requestHandler(router)
                .listen();

    }

    @Override
    public void stop() throws Exception {
        if (this.httpServer != null) {
            this.httpServer.close();
            this.httpServer = null;
        }
    }
}
diesel10
1 Answer


The Vert.x threading model is designed so that a particular instance of a deployed Verticle is always locked to a single thread. In order to scale your application across cores, you need to deploy multiple instances of your Verticle.

When deploying a verticle using a verticle name, you can specify the number of verticle instances that you want to deploy:

DeploymentOptions options = new DeploymentOptions().setInstances(16);
vertx.deployVerticle("com.mycompany.MyOrderProcessorVerticle", options);

This is useful for scaling easily across multiple cores. For example, you might have a web-server verticle to deploy and multiple cores on your machine, so you want to deploy multiple instances to utilise all the cores.
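
As a rough sketch of that pattern (the class name, port, and route below are illustrative, and this uses the Promise-based start available in Vert.x 3.8+): each verticle creates its own HttpServer inside start(), every instance binds to the same port, and Vert.x round-robins new connections across the instances.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Promise;
import io.vertx.ext.web.Router;

public class WebServerVerticle extends AbstractVerticle {

    @Override
    public void start(Promise<Void> startPromise) {
        // Each instance builds its own Router and HttpServer, running on that
        // instance's event loop thread.
        Router router = Router.router(vertx);
        router.get("/ping").handler(ctx ->
                ctx.response().end(Thread.currentThread().getName()));

        // All instances bind to the same port; Vert.x distributes incoming
        // connections across them.
        vertx.createHttpServer()
                .requestHandler(router)
                .listen(8080, ar -> {
                    if (ar.succeeded()) {
                        startPromise.complete();
                    } else {
                        startPromise.fail(ar.cause());
                    }
                });
    }
}

// Deploy one instance per core (vertx is your existing Vertx instance):
DeploymentOptions options = new DeploymentOptions()
        .setInstances(Runtime.getRuntime().availableProcessors());
vertx.deployVerticle(WebServerVerticle.class.getName(), options);

With that in place, hitting /ping from separate connections should show responses coming from different vert.x-eventloop-thread-N threads.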

dano
  • Thanks for the pointer. That helped. But the http server created on this vertx is not equally using all the deployed verticles. Is there a way to tell the http server to scale across all of these verticles? – diesel10 Aug 31 '20 at 23:06
  • What do you mean by "not equally using all the deployed verticles"? If you want an http server to scale across cores, you have to deploy multiple instances of the verticle that creates the http server, and have all the instances bind the server to the same port. Vert.x is smart enough to round-robin requests across all the instances of the HTTP server. – dano Sep 01 '20 at 03:17
  • I added my basic code structure above. I am sending requests into the server and adding a small processing delay in the Router handler on the http server to encourage even scaling over the event loop threads. The profiler still shows only 1 thread ever handles the processing. Perhaps we need to use the start or init methods of the verticle class itself? – diesel10 Sep 01 '20 at 13:08
  • I'm a little confused by your code. You're deploying 8 instances of a verticle, but it doesn't appear to actually do anything. Is there code in there that you're just not sharing? If you're just creating a single instance of the HttpServer outside of any Verticle, the handlers for all your endpoints are all going to execute in a single thread. In order to have the handlers for the HttpServer scale across cores, you have to create the HttpServer *inside* a Verticle (in its `start` method), and deploy multiple instances of that Verticle. – dano Sep 01 '20 at 13:29
  • OK I see what you mean now. I'm setting it up that way, but struggling to pass in the needed args (via Context) to the start function of the verticle (http server options, the router, handlers). I'm seeing that the Context member on the AbstractVerticle is not the same as the one I created with the "getOrCreateContext" API. – diesel10 Sep 01 '20 at 13:53
  • @user1402263 I'm missing some context here (none of the code you shared has a `getOrCreateContext` call). However, you can pass configuration data into a Verticle via the `DeploymentOptions` you pass when you deploy the Verticle. See [`DeploymentOptions.setConfig`](https://vertx.io/docs/apidocs/io/vertx/core/DeploymentOptions.html#setConfig-io.vertx.core.json.JsonObject-); a short config-passing sketch follows these comments. – dano Sep 01 '20 at 14:35
  • I updated the approach I'm using above, and was able to use the same http server setup across all the verticle instances. But unfortunately it is still using a single event loop thread for processing. – diesel10 Sep 01 '20 at 15:12
  • I don't know what you mean by "processing", since you haven't shown that code at all. However, I don't think re-using the same Router instance for multiple HttpServers is supported, so it's possible that is the reason you don't see multiple threads in use. – dano Sep 01 '20 at 15:30
  • We could consider abandoning the verticle model and look for another way to start the Vert.x web server with a multi-threaded event loop executor. This thread is related: https://stackoverflow.com/questions/49775238/vertx-web-server-uses-only-one-event-loop-thread-while-16-are-available – diesel10 Sep 01 '20 at 15:51
  • Regarding the processing of the incoming requests, all we are doing here is sleeping 10 ms and then sending a 200 OK response. – diesel10 Sep 01 '20 at 16:05
  • I was able to confirm that when using separate connections per request, the processing is evenly distributed over the threads. We have a use case where the HTTP/2 client reuses the connection. I will investigate the Vert.x API to get this scaled over the threads in the single-connection, multiple-HTTP/2-streams scenario. – diesel10 Sep 02 '20 at 14:45
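
Following up on the `DeploymentOptions.setConfig` suggestion in the comments, here is a minimal sketch of passing configuration into verticle instances (the class name, config key, and port are illustrative): serializable settings go into the JSON config, while objects such as the Router and handlers are created inside start() so each instance gets its own.

import io.vertx.core.AbstractVerticle;
import io.vertx.core.DeploymentOptions;
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonObject;

public class ConfiguredVerticle extends AbstractVerticle {

    @Override
    public void start() {
        // Every instance reads the same JSON that was supplied at deploy time.
        int port = config().getInteger("http.port", 8080);
        vertx.createHttpServer()
                .requestHandler(req -> req.response().end("OK"))
                .listen(port);
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        DeploymentOptions options = new DeploymentOptions()
                .setInstances(8)
                .setConfig(new JsonObject().put("http.port", 8080));
        vertx.deployVerticle(ConfiguredVerticle.class.getName(), options);
    }
}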