
I have a Spring Boot service that streams updates to the client using Server-Sent Events (SSE). The endpoint to which the client connects is implemented using Spring WebFlux.
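(For context, SSE is a plain-text wire protocol: each event is a block of `event:`/`data:` lines terminated by a blank line, which Spring serializes from `ServerSentEvent` instances internally. A minimal, dependency-free sketch of one such frame; the `notification` event name and JSON payload are illustrative, not taken from my service:)

```java
// Illustrates the SSE wire format that a WebFlux ServerSentEvent is
// serialized to. Purely for reference; Spring does this internally.
public class SseFrameSketch {

    // Builds one SSE frame: an optional "event:" line, one "data:" line,
    // and a trailing blank line that separates frames on the wire.
    static String toFrame(String event, String data) {
        StringBuilder sb = new StringBuilder();
        if (event != null) {
            sb.append("event: ").append(event).append('\n');
        }
        sb.append("data: ").append(data).append('\n');
        sb.append('\n'); // blank line ends the frame
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(toFrame("notification", "{\"id\":42}"));
    }
}
```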

To clean up resources (delete an AMQP queue), my service needs to detect when a client closes the EventSource, i.e. terminates the connection. To do so, I register a callback via FluxSink#onDispose(Disposable). Naturally, my SSE Flux sends regular heartbeats, not only to prevent the connection from timing out but also to trigger onDispose once the client has disconnected.
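(The `heartbeatStream` itself isn't shown below; in Reactor it would typically be built with `Flux.interval`. As a dependency-free sketch of the same idea — a scheduler emitting a heartbeat payload at a fixed interval — where the `"HEARTBEAT"` payload and 50 ms period are illustrative, not my actual values:)

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a heartbeat source: a scheduler emits a payload at a fixed
// interval. In the real service this role is played by a Reactor Flux
// merged into the notification stream; payload and period are illustrative.
public class HeartbeatSketch {

    // Emits "HEARTBEAT" every periodMillis and returns once at least
    // `count` ticks have been observed.
    public static List<String> collectHeartbeats(long periodMillis, int count)
            throws InterruptedException {
        List<String> emitted = new CopyOnWriteArrayList<>();
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(
                () -> emitted.add("HEARTBEAT"), 0, periodMillis, TimeUnit.MILLISECONDS);
        // Wait until the desired number of ticks has been observed.
        while (emitted.size() < count) {
            Thread.sleep(periodMillis);
        }
        scheduler.shutdownNow();
        return emitted;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(collectHeartbeats(50, 3)); // prints at least three HEARTBEAT entries
    }
}
```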

@Nonnull
@Override
public Flux<ServerSentEvent<?>> subscribeToNotifications(@Nonnull String queueName) {
    final var queue = createQueue(queueName);
    final var listenerContainer = createListenerContainer(queueName);
    final var notificationStream = createNotificationStream(queueName, listenerContainer);
    return notificationStream
            .mergeWith(heartbeatStream)
            .map(NotificationServiceImpl::toServerSentEvent);
}

@Nonnull
private Flux<NotificationDto> createNotificationStream(
        @Nonnull String queueName,
        @Nonnull MessageListenerContainer listenerContainer) {
    return Flux.create(emitter -> {
        // Forward each AMQP message to the Flux emitter.
        listenerContainer.setupMessageListener(message -> handleAmqpMessage(message, emitter));
        // Start consuming only once the subscriber signals demand.
        emitter.onRequest(requestCount -> listenerContainer.start());
        // Clean up once the stream is disposed, i.e. the client disconnects.
        emitter.onDispose(() -> {
            final var deleted = amqpAdmin.deleteQueue(queueName);
            if (deleted) {
                LOGGER.info("Queue {} successfully deleted", queueName);
            } else {
                LOGGER.warn("Failed to delete queue {}", queueName);
            }
            listenerContainer.stop();
        });
    });
}

This works like a charm locally: the queue is deleted and the log messages are written once the client disconnects.

However, when deploying this service to my Kubernetes cluster, onDispose is never called. The SSE stream still works flawlessly, i.e. the client receives all data from the server and the connection is kept alive by the heartbeat.

I'm using an NGINX Ingress Controller to expose my service, and it seems as if the connection between NGINX and my service is kept alive even after the client disconnects, causing onDispose to never be called. Hence I tried setting the upstream keep-alive connections to 0, but it didn't solve the problem; the service is still never notified that the client has closed the connection:

# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.6
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.0.4
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
  http-snippet: |
    server{
      listen 2443;
      return 308 https://$host$request_uri;
    }
  proxy-real-ip-cidr: 192.168.0.0/16
  use-forwarded-headers: 'true'
  upstream-keepalive-connections: '0' # added this line

What am I missing?


IggyBlob
  • Do you know where exactly the problem is? Please provide reproduction steps. How did you set up your cluster? – Mikołaj Głodziak Nov 22 '21 at 13:40
  • Yes, the problem is that the upstream connection between the ingress and my service is kept alive, i.e. is not closed when the connection between the client and the ingress is terminated. The disconnect detection works fine locally and also when I use `type: LoadBalancer` in my service descriptor but doesn't anymore when the ingress is used. – IggyBlob Nov 22 '21 at 17:07
  • I set up the cluster using eksctl and deployed [NGINX Ingress Controller v1.0.4](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy.yaml) as well as [TLS termination on load balancer](https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.0.4/deploy/static/provider/aws/deploy-tls-termination.yaml). – IggyBlob Nov 22 '21 at 17:15
  • Did you try to set [`upstream-keepalive-connections`](http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive) to `1`? – Mikołaj Głodziak Nov 23 '21 at 07:29
  • Yes, setting it in the config map ([screenshot](https://ibb.co/G9CCYC9)) causes nginx.conf to be adapted ([screenshot](https://ibb.co/DPJzxYC)) but the upstream connection still seems to be kept alive – IggyBlob Nov 23 '21 at 17:51
  • Do you have some logs from your application? How do you know the function `onDispose` is not being called? – Mikołaj Głodziak Nov 25 '21 at 12:33
  • Sure, my service neither logs `Queue {} successfully deleted` nor `Failed to delete queue `, hence the `onDispose` callback is never invoked. – IggyBlob Nov 25 '21 at 20:50
  • hey, did you find an answer? – Subhi Samara Apr 10 '22 at 05:53
  • Nope, sadly I didn't – IggyBlob Apr 17 '22 at 16:34
