
I'm still new to Java and sockets. I have a program which listens to connections and if one comes in, it sends it to a class that handles the connection and uses ExecutorService to start the processing thread.

I want to limit the number of connections, so I found that a socket has a parameter for that. This is the main code:

public static void main(String[] args) throws IOException {
    ServerSocket server = new ServerSocket(123, 1);
    try {
        ConnectionListener listener = new ConnectionListener(server);
        listener.run();
    }
    catch (Exception exception) {
        exception.printStackTrace();
    }
}

In this example I wanted to limit the connections to 1. I tried to bombard it with a lot of parallel executions of a python script that sends data to this port. But I never get a "connection refused" or an apparent delay of the connection, as if the limit is not obeyed.

What am I doing wrong?

Federico klez Culloca
Arpton
  • Does this answer your question? [Why aren't ServerSocket connections rejected when backlog is full?](https://stackoverflow.com/questions/33189782/why-arent-serversocket-connections-rejected-when-backlog-is-full) – Volkan Albayrak Jan 15 '20 at 13:09
  • Not really. First, because I probably don't understand it; but also, it says that on Unix I should get a "connection refused". I am running Linux in a VM, while the Java program is on the Windows host. But since the sending script is on Linux, I would expect from that answer that I should get a "connection refused". – Arpton Jan 15 '20 at 13:18
  • I don't believe `backlog` prevents multiple connections, rather it creates a queue of unaccepted connections. The reason client 2 doesn't get refused could be because client 1 was already accepted by the time client 2 made a request. I haven't used the `backlog` argument myself (never had to), but based on what I've read online, that seems to be the case. Try it out: have client 1 make a request, don't accept the request for client 1 server side. Connect with client 2, see if it gets refused. – Vince Jan 15 '20 at 14:10
  • Managed to hop on a computer to test it. Seems it's right: `backlog` creates a queue of unaccepted connections. Your 2nd client doesn't get refused because your 1st client was already accepted, so the backlog was not full, thus the 2nd client was not refused. – Vince Jan 15 '20 at 14:17
  • So, this is not even meant to limit accepted connections? That would explain it. Could you write this as an answer, then I could mark it as solved. – Arpton Jan 15 '20 at 14:40
  • The backlog is not the number of concurrent connections. Your question is founded on a fallacy. – user207421 Jan 16 '20 at 01:07

2 Answers


You can simply close the ServerSocket after accepting the one connection you want to serve; any later connection attempts will then be refused.

For more insight into this problem, see the answer to [Why aren't ServerSocket connections rejected when backlog is full?](https://stackoverflow.com/questions/33189782/why-arent-serversocket-connections-rejected-when-backlog-is-full)
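A minimal sketch of that approach (the class name and port are illustrative, not from the question):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class SingleClientServer {
    public static void main(String[] args) throws IOException {
        ServerSocket server = new ServerSocket(1234); // illustrative port
        Socket client = server.accept(); // block until the first client connects
        server.close();                  // stop listening: later connects are refused
        // ... hand `client` off to your connection handler here ...
        client.close();
    }
}
```

Once the ServerSocket is closed, nothing is listening on the port any more, so the OS rejects further connection attempts outright.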

Umar Tahir
  • Well yes, I could certainly do something manual, but I thought I should rather use something that is already implemented. As to the link, that is what I meant. Since Windows issues this RST thing, I should get a connection refused, which I don't. – Arpton Jan 15 '20 at 13:36
  • I think it's already been answered in the comment above. But anyhow: in the span of time between accept() calls, incoming client connection requests are stored in a queue maintained by the operating system (in your case, Windows). Subsequent calls to accept() remove requests from the queue, or block if there are no waiting clients. The "backlog" argument controls the length of this queue. When a client requests a connection and the queue is full, the request will fail with a ConnectException. – Umar Tahir Jan 15 '20 at 15:09
  • Otherwise it will be added to the queue; in your case, each subsequent connection gets added to the queue. – Umar Tahir Jan 15 '20 at 15:12

The backlog argument does not limit the number of connections your program will have.

The backlog argument controls how many unaccepted connections your server can hold. If you have a backlog of 1 and a client connected to your server, accepting that first connection would free the slot and allow other connections in. However, if you don't accept the first connection, any future clients connecting to the server will be refused.

A backlog of 2 would support two unaccepted connections and refuse the third connection.
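The queue behavior can be seen locally with a short sketch (port 0 asks the OS for a free ephemeral port; class and variable names are illustrative):

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // Backlog of 1: the OS will queue one completed, not-yet-accepted connection.
        ServerSocket server = new ServerSocket(0, 1);
        int port = server.getLocalPort();

        // The client's connect succeeds even though accept() was never called:
        // the connection is completed by the OS and parked in the backlog queue.
        Socket queued = new Socket("127.0.0.1", port);
        System.out.println("queued connection established: " + queued.isConnected());

        // Only now does the server actually dequeue the waiting connection.
        Socket accepted = server.accept();
        accepted.close();
        queued.close();
        server.close();
    }
}
```

The client is fully connected before accept() runs, which is why bombarding the server doesn't produce "connection refused" as long as connections are accepted quickly enough to keep the queue from filling.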


This behavior may differ depending on your platform. For more information, check out the answer to [Why aren't ServerSocket connections rejected when backlog is full?](https://stackoverflow.com/questions/33189782/why-arent-serversocket-connections-rejected-when-backlog-is-full)

The user who answered that question is a professional in the domain of networking, and has more content related to Java server sockets† if you're interested in further exposure to Q&A about server sockets in Java.


† Although not all those posts are related to server socket directly (some may not be relevant at all, due to poor tagging strategies), I highly recommend checking them out on your free time - you may learn something that'll help you with future problems.

Vince
  • Or time it out, depending on the platform. And the backlog parameter is only a hint. The system can adjust it up or down: see [here](https://stackoverflow.com/a/33193011/207421). – user207421 Jan 16 '20 at 01:07