
I have a real-time application where clients use websockets to connect to a Spring Framework server running Spring Boot with embedded Tomcat. I want the server to quickly (within 5 seconds) detect when a client stops responding due to a network disconnect or other issue, and to close the websocket.

I have tried:

  1. Setting the max session idle timeout as described in the documentation under "Configuring the WebSocket Engine" (http://docs.spring.io/spring/docs/current/spring-framework-reference/html/websocket.html):

    @Bean
    public WebSocketHandler clientHandler() {
        // PerConnectionWebSocketHandler creates a new ClientHandler
        // instance for every WebSocket connection.
        return new PerConnectionWebSocketHandler(ClientHandler.class);
    }

    @Bean
    public ServletServerContainerFactoryBean createWebSocketContainer() {
        ServletServerContainerFactoryBean container =
            new ServletServerContainerFactoryBean();
        container.setMaxSessionIdleTimeout(5000L); // milliseconds
        container.setAsyncSendTimeout(5000L);      // milliseconds
        return container;
    }
    

I am not sure this is implemented correctly because I do not see the link between the ServletServerContainerFactoryBean and my generation of ClientHandlers.

  2. Sending ping messages from the server every 2.5 seconds (sketched after this list). After I manually disconnect the client by breaking the network connection, the server happily keeps sending pings for another 30+ seconds until a transport error appears.

  3. Approaches 1 and 2 simultaneously.

  4. Approaches 1 and 2, plus setting server.session-timeout = 5 in application.properties.
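
For reference, a sketch of what the ping sending in approach 2 looks like (simplified; where the WebSocketSession comes from and how the task gets cancelled are omitted):

    import java.io.IOException;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;
    import org.springframework.web.socket.PingMessage;
    import org.springframework.web.socket.WebSocketSession;

    // Ping a connected client every 2.5 seconds.
    void schedulePings(WebSocketSession session) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(() -> {
            try {
                if (session.isOpen()) {
                    session.sendMessage(new PingMessage());
                }
            } catch (IOException e) {
                // this is where the transport error eventually surfaces
            }
        }, 0, 2500, TimeUnit.MILLISECONDS);
    }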

My methodology for testing this is to:

  1. Connect a websocket from a laptop client to the Tomcat server
  2. Turn off network connection on the laptop using the physical switch
  3. Wait for Tomcat server events

How can a Spring Framework/Tomcat server quickly detect that a client has been disconnected or has stopped responding, so that it can close the websocket?

mattm

4 Answers


Application Events may help you.

PS: Annotation-driven events

PS2: I made an example project for you
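
A minimal sketch of such an annotation-driven listener (the class and method names are illustrative; it assumes the events Spring publishes for STOMP sessions):

import org.springframework.context.event.EventListener;
import org.springframework.stereotype.Component;
import org.springframework.web.socket.messaging.SessionConnectedEvent;
import org.springframework.web.socket.messaging.SessionDisconnectEvent;

@Component
public class WebSocketEventListener {

    // Fired after a client completes the STOMP CONNECT handshake.
    @EventListener
    public void onConnected(SessionConnectedEvent event) {
        System.out.println("Connected: " + event.getMessage().getHeaders());
    }

    // Fired when the session closes, including after a missed-heartbeat timeout.
    @EventListener
    public void onDisconnect(SessionDisconnectEvent event) {
        System.out.println("Disconnected: " + event.getSessionId()
                + " with status " + event.getCloseStatus());
    }
}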

Hyperion
  • My answer contains an example project. Isn't that enough? – Hyperion Jul 04 '15 at 11:02
  • It's very good that you provide an example project, but GitHub projects vanish all the time. If it does, your answer won't be useful anymore. That's why I said that posts should be self-contained. You can always provide external resources for reference, but StackOverflow posts should contain some content. – Artjom B. Jul 04 '15 at 11:24
  • Is it necessary to use STOMP to use these events? Currently I am just using a "raw" websocket. – mattm Jul 04 '15 at 12:56
  • AFAIK, yes. But it's not so hard to convert from a raw websocket to SockJS + STOMP. It's better than a "raw" ws. – Hyperion Jul 04 '15 at 17:14
  • @Hyperion The basic WebSocketHandler already has events for when a connection is established or closed: http://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/web/socket/WebSocketHandler.html The problem is that these events are not triggered in a timely fashion. I don't see how trying to detect disconnections at a more abstract layer (STOMP) is going to be helpful. – mattm Jul 05 '15 at 02:06
  • @mattm STOMP isn't an abstract layer, it's a protocol, and it contains several features like subscription, heartbeat, etc. Check [this](http://jmesnil.net/stomp-websocket/doc/) link. – Hyperion Jul 05 '15 at 12:02
  • @Hyperion I am already detecting disconnects at a lower layer, the plain websocket. Is STOMP adding a further detection mechanism that will detect disconnects before the lower layer? As far as I can tell, the answer is no, because there is no configuration of this in your example. By adding STOMP I see I am adding complexity, but I don't see any advantage in addressing the problem of timeouts taking too long. – mattm Jul 05 '15 at 13:00
  • @mattm If you look at the [specification](https://stomp.github.io/stomp-specification-1.2.html), STOMP uses frames for communication. That means a "CONNECT" frame is sent over the websocket on connection, and a "DISCONNECT" frame is sent on disconnection, and Spring's SessionDisconnectEvent can detect it. So without a timeout your server detects the disconnect ASAP. – Hyperion Jul 05 '15 at 13:06
  • @Hyperion The issue is not timely detection of explicit connections and disconnections. The problem is when the client loses its network connection, and no "DISCONNECT" message is sent. – mattm Jul 05 '15 at 14:42
  • @mattm I tested my project on my Android phone. I connected to my server, sent a message, and cut my phone's connection. After the heartbeat time elapsed twice, a SessionDisconnectEvent fired. So when the client loses its network connection, you will get a SessionDisconnectEvent. PS: You might not need STOMP for that. Can you try just SessionDisconnectEvent on Spring with your "raw" websocket? – Hyperion Jul 05 '15 at 15:47
  • @Hyperion The websocket already has an event for when it times out, which I am using and which does work, eventually. The problem is that the timeouts are not using the value I configured per the Spring instructions. – mattm Jul 05 '15 at 20:12

ServletServerContainerFactoryBean simply configures the underlying JSR-356 WebSocketContainer through Spring configuration on startup. If you peek inside you'll see it's trivial.
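
For illustration, this is roughly all it does with the two properties from the question (a sketch; the container instance is supplied by the servlet container at startup):

import javax.websocket.WebSocketContainer;

// Roughly what ServletServerContainerFactoryBean does with the two timeouts:
void applyTimeouts(WebSocketContainer container) {
    container.setDefaultMaxSessionIdleTimeout(5000); // milliseconds
    container.setAsyncSendTimeout(5000);             // milliseconds
}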

From what I can see in the Tomcat code that handles maxSessionIdleTimeout, the WsWebSocketContainer#backgroundProcess() method runs every 10 seconds by default to check for expired sessions.
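
If that 10-second sweep is the limiting factor, it can in principle be shortened; a sketch, assuming Tomcat's WsWebSocketContainer is the underlying implementation (the helper method name is mine):

import javax.servlet.ServletContext;
import javax.websocket.server.ServerContainer;
import org.apache.tomcat.websocket.WsWebSocketContainer;

// Shorten Tomcat's expired-session sweep from the default 10s to 1s.
// The attribute name is defined by JSR-356; the cast assumes Tomcat.
void shortenProcessPeriod(ServletContext servletContext) {
    ServerContainer container = (ServerContainer)
            servletContext.getAttribute("javax.websocket.server.ServerContainer");
    if (container instanceof WsWebSocketContainer) {
        ((WsWebSocketContainer) container).setProcessPeriod(1); // seconds
    }
}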

I also suspect that the pings you're sending from the server are making the session appear active, and hence are not helping with the idle session timeout configuration.

As to why Tomcat doesn't realize the client is disconnected sooner, I can't really say. In my experience, if a client closes a WebSocket connection or if I kill the browser, it's detected immediately. In any case, that's more to do with Tomcat, not Spring.

Rossen Stoyanchev
  • Tomcat sends ping messages to the client. As of Tomcat 8.5, the ping response from the client keeps the session alive. The maxSessionIdleTimeout only starts counting once the server doesn't get a response to the ping. – bsandhu Jun 06 '18 at 19:27

The approach I eventually took was to implement an application-layer ping-pong protocol.

  • The server sends a ping message with period p to the client.
  • The client responds to each ping message with a pong message.
  • If the server sends more than n ping messages without receiving a pong response, it generates a timeout event.
  • The client can also generate a timeout event if it does not receive a ping message in n*p time.
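
A minimal sketch of the server side of this logic (the class name is mine and the scheduling of sendPing is omitted; it assumes Spring's raw WebSocketHandler API, with one handler instance per connection as PerConnectionWebSocketHandler provides):

import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.web.socket.CloseStatus;
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;

public class PingPongHandler extends TextWebSocketHandler {

    private static final int MAX_MISSED_PONGS = 2;          // n in the scheme above
    private final AtomicInteger missedPongs = new AtomicInteger();

    // Invoke this every p milliseconds from a scheduler.
    public void sendPing(WebSocketSession session) throws Exception {
        if (missedPongs.incrementAndGet() > MAX_MISSED_PONGS) {
            session.close(CloseStatus.SESSION_NOT_RELIABLE); // the timeout event
            return;
        }
        session.sendMessage(new TextMessage("ping"));
    }

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) {
        if ("pong".equals(message.getPayload())) {
            missedPongs.set(0); // the client is still alive
        }
    }
}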

There should be a much simpler way of implementing this using timeouts in the underlying TCP connection.

mattm

As the question and the top-voted answer are quite old, I wanted to add the easiest way to achieve the same with the spring-websocket implementation.

Refer to section 4.4.8, "Simple Broker", of the Spring WebSocket documentation.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.simp.config.MessageBrokerRegistry;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.web.socket.config.annotation.EnableWebSocketMessageBroker;
import org.springframework.web.socket.config.annotation.WebSocketMessageBrokerConfigurer;

@Configuration
@EnableWebSocketMessageBroker
public class WebSocketConfig implements WebSocketMessageBrokerConfigurer {

    private TaskScheduler messageBrokerTaskScheduler;

    @Autowired
    public void setMessageBrokerTaskScheduler(TaskScheduler taskScheduler) {
        this.messageBrokerTaskScheduler = taskScheduler;
    }

    @Override
    public void configureMessageBroker(MessageBrokerRegistry registry) {

        // Heartbeats: the server sends one every 10 seconds and expects
        // one from the client at least every 20 seconds.
        registry.enableSimpleBroker("/queue/", "/topic/")
                .setHeartbeatValue(new long[] {10000, 20000})
                .setTaskScheduler(this.messageBrokerTaskScheduler);

        // ...
    }
}

When returning the CONNECTED frame, the websocket server sends the heart-beat parameters back to the client (a heart-beat:10000,20000 header with this configuration), which enables pings from the client side.

If you are using the SockJS implementation on the client side, then no additional code is needed to align with the PING/PONG implementation.
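
For a plain Java client (i.e., not SockJS in a browser), a sketch of setting the matching client-side heartbeats with Spring's WebSocketStompClient (the URL and the empty session handler are placeholders):

import org.springframework.messaging.simp.stomp.StompSessionHandlerAdapter;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.web.socket.client.standard.StandardWebSocketClient;
import org.springframework.web.socket.messaging.WebSocketStompClient;

// Sketch: a Java STOMP client sending heartbeats every 20s and
// expecting server heartbeats every 10s, mirroring the broker config.
void connectWithHeartbeats() {
    ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
    scheduler.initialize();

    WebSocketStompClient stompClient =
            new WebSocketStompClient(new StandardWebSocketClient());
    stompClient.setTaskScheduler(scheduler); // required for heartbeats
    stompClient.setDefaultHeartbeat(new long[] {20000, 10000});
    stompClient.connect("ws://localhost:8080/ws", new StompSessionHandlerAdapter() {});
}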