I deployed a RabbitMQ cluster on Kubernetes using the RabbitMQ Cluster Operator and enabled the rabbitmq_stream plugin. This is my YAML:

apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmq-deployment
  namespace: rabbitmq-namespace
spec:
  replicas: 2
  image: rabbitmq:3.11.13
  persistence:
    storage: 20Gi
  service:
    type: LoadBalancer
  rabbitmq:
    additionalPlugins:
      - rabbitmq_stream
      - rabbitmq_stream_management

I also use the RabbitMQ Stream Java client and connect to the cluster like this:

EnvironmentBuilder environmentBuilder = Environment.builder();
environmentBuilder.host(System.getenv("RABBITMQ_HOST"));
environmentBuilder.port(Integer.parseInt(System.getenv("RABBITMQ_STREAM_PORT")));
environmentBuilder.username(System.getenv("RABBITMQ_USERNAME"));
environmentBuilder.password(System.getenv("RABBITMQ_PASSWORD"));
mainConnection = environmentBuilder.build();

When I use this client to create the stream, it works flawlessly and no error is reported:

mainConnection.streamCreator().stream("mystream").maxAge(Duration.of(1, ChronoUnit.DAYS)).create();

But when I try to produce messages like this:

Producer producer = RabbitMQStreamConnection.mainConnection.producerBuilder().stream("mystream").build();
byte[] messagePayload = "hello".getBytes(StandardCharsets.UTF_8);
producer.send(
    producer.messageBuilder().addData(messagePayload).build(),
    confirmationStatus -> {
        if (confirmationStatus.isConfirmed()) {
            // the message made it to the broker
        } else {
            // the message did not make it to the broker
        }
    });

It throws this exception:

com.rabbitmq.stream.StreamException
Error while creating stream connection to rabbitmq-deployment-server-0.rabbitmq-deployment-nodes.rabbitmq-namespace:5552

Presumably this is because there are two nodes (replicas: 2) and the client gets redirected directly to the internal node hostname (rabbitmq-deployment-server-0.rabbitmq-deployment-nodes.rabbitmq-namespace), which is not reachable from where the client runs.

What I want is to be able to produce and consume messages from the stream.

Right now I have no clue what to try next to solve this problem.

Stefan

1 Answer

You should use the load balancer configuration.

See: https://rabbitmq.github.io/rabbitmq-stream-java-client/stable/htmlsingle/#when-a-load-balancer-is-in-use

A load balancer can misguide the client when it tries to connect to nodes that host stream leaders and replicas. The "Connecting to Streams" blog post covers why client applications must connect to the appropriate nodes in a cluster and how a load balancer can make things complicated for them.

The EnvironmentBuilder#addressResolver(AddressResolver) method allows intercepting the node resolution after metadata hints and before connection. Applications can use this hook to ignore metadata hints and always use the load balancer, as illustrated in the following snippet:

Using a custom address resolver to always use a load balancer

Address entryPoint = new Address("my-load-balancer", 5552);
Environment environment = Environment.builder()
    .host(entryPoint.host())
    .port(entryPoint.port())
    .addressResolver(address -> entryPoint)
    .build();
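
Putting it together with the snippets from the question, a minimal sketch of producing through the load balancer could look like this (the load balancer host name is a placeholder, and the credential handling is an assumption):

import com.rabbitmq.stream.Address;
import com.rabbitmq.stream.Environment;
import com.rabbitmq.stream.Producer;

import java.nio.charset.StandardCharsets;

// Entry point: the externally reachable address of the load balancer.
Address entryPoint = new Address("my-load-balancer", 5552);

Environment environment = Environment.builder()
    .host(entryPoint.host())
    .port(entryPoint.port())
    .username(System.getenv("RABBITMQ_USERNAME"))
    .password(System.getenv("RABBITMQ_PASSWORD"))
    // Ignore the node hints from the broker metadata and always
    // connect back through the load balancer instead.
    .addressResolver(address -> entryPoint)
    .build();

// Producing then works as in the question, because every connection
// the client opens goes through the load balancer.
Producer producer = environment.producerBuilder().stream("mystream").build();
producer.send(
    producer.messageBuilder()
        .addData("hello".getBytes(StandardCharsets.UTF_8))
        .build(),
    confirmationStatus -> {
        if (!confirmationStatus.isConfirmed()) {
            // the message did not make it to the broker
        }
    });

The same environment can be used for consumers; the resolver applies to every connection the environment opens, including the ones the client creates to reach stream leaders and replicas.
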
Gabriele Santomaggio
  • Ah, I think you are pushing me in the right direction. It seems that I can only connect from inside the Kubernetes cluster; for the outside (mainly for testing purposes) I need the address resolver. – Stefan Apr 20 '23 at 09:23
  • "Seems that I can only connect from inside the Kubernetes cluster." The protocol needs to know the host names, but with the `addressResolver` you can also connect from outside. See: https://blog.rabbitmq.com/posts/2021/07/connecting-to-streams/#with-a-load-balancer – Gabriele Santomaggio Apr 20 '23 at 12:48
  • I understand, but the nodes (pods) aren't exposed publicly. I think I need to expose them (via a load balancer or some other way), then it would work from outside, right? – Stefan Apr 20 '23 at 14:48
  • Yes, exactly. The load balancer must be public and implement the round-robin pattern; see the sketch after this thread. – Gabriele Santomaggio Apr 20 '23 at 18:38
  • Hey there, I got it working, but when closing the consumer (for example because I need to process the last 300 messages first), I get: com.rabbitmq.stream.impl.TimeoutStreamException: Could not get response in 10000 ms from node my.public.ip:5552. Consuming works fine, but I receive that error when closing the consumer (also when closing the environment; I tried both). I use manual tracking with offset specification = last and a custom name which stays the same every time. – Stefan Apr 25 '23 at 12:24
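
For illustration, a minimal sketch of such an exposure as a plain Kubernetes LoadBalancer Service (the Service name is hypothetical, the selector assumes the labels the Cluster Operator puts on the pods, and 5552 is the stream port from the question):

apiVersion: v1
kind: Service
metadata:
  name: rabbitmq-stream-lb   # hypothetical name
  namespace: rabbitmq-namespace
spec:
  type: LoadBalancer
  selector:
    # assumption: the Cluster Operator labels the pods with the cluster name
    app.kubernetes.io/name: rabbitmq-deployment
  ports:
    - name: stream
      port: 5552
      targetPort: 5552

Kubernetes then spreads incoming connections across the two pods, which approximates the round-robin behavior the client needs.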