
server.pl

use strict;
use warnings;
use IO::Socket::INET;
use Socket qw(SOMAXCONN SO_KEEPALIVE);
use Errno qw(EINTR EWOULDBLOCK);

sub sock_initialize {
    my $sock  = q{};
    my $port  = q{};

    # Get a port for our server.
    $sock = IO::Socket::INET->new(
        Listen    => SOMAXCONN,    # listen queue depth
        LocalPort => 0,
        Reuse     => 1
    );

    die "Unable to bind a port: $!" if !$sock;

    $port      = $sock->sockport();
    my $ip = "";
    my $uid = (getpwuid( $> ))[2];
    my $queue = join(":", $ip, $port, $$, $uid);

    print sprintf("put started on port $port ($$), SOMAXCONN=%d\n", SOMAXCONN);
    return $sock;
} ## end sub sock_initialize

my $listen_sock = sock_initialize();
while (1) {
    my $xsock;

    # accept() can block; a production version would use a nonblocking poll (Stevens).
    while (1) {
        $! = 0;
        $xsock = $listen_sock->accept;    # ACCEPT
        last if defined $xsock;
        next if $! == EINTR;
        die "accept error: $!";
    }

    $xsock->blocking(0);                                # mark executor socket nonblocking
    $xsock->sockopt( SO_KEEPALIVE() => 1 ) or die "sockopt: $!";

    #my $rbufp = $conn->readbufref;
    #my $rdstatus = Read( $sock, $rbufp );
    my $buff = "";
    while (1) {
        my $nbytes = sysread $xsock, $buff, 32768, length($buff);    # SYSCALL

        if ( !defined $nbytes ) {                            # read error
            next if $! == EINTR;
            last if $! == EWOULDBLOCK;                   # normal: nothing left to read on a nonblocking socket
            die "read error: $!";                        # real read error
        }
        last if $nbytes == 0;                            # EOF
    }
    print "received $buff\n";
    last;
}

client.pl

use strict;
use warnings;
use IO::Socket::INET;
use Socket qw(SOCK_STREAM);

my $host = "localhost";
my $port = 37402;    # port number printed by server.pl

my $s = IO::Socket::INET->new(
    PeerAddr => $host,
    PeerPort => $port,
    Type     => SOCK_STREAM,
    Proto    => 'tcp',
    Timeout  => 1
) or die "connect to $host:$port failed: $!";

$s->blocking(0);

my $nbytes = syswrite $s, "hi from X";    # SYSCALL

First, I start server.pl:

perl test_socket_server.pl
put started on port 37402 (16974), SOMAXCONN=128

Then I put that port number into client.pl and run it:

perl test_socket_client.pl

Then, in the server.pl shell, I see

received hi from X

So it works as intended. Now I run server.pl inside a container via

docker run ubuntu perl server.pl
put started on port 38170 (1), SOMAXCONN=128

I then write the new port number into client.pl and run it, but server.pl never receives the message.

My understanding is that the container's port isn't exposed to the host, since it was never published (EXPOSE / -p).

Even if that could be fixed with EXPOSE, server.pl binds an unassigned (ephemeral) port because of LocalPort => 0, so the port number can change every time it runs inside the container. My understanding is that you have to expose the port when the container starts, but at that point you don't know which port server.pl will end up on. And I would like to keep it this way, with no designated port, because multiple instances of server.pl could run in one container (so they need to be able to use different ports). Is there a strategy to accommodate this?
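
To illustrate the problem: publishing works fine when the port is known up front (the number below is just taken from the run above), but with LocalPort => 0 the next run will almost certainly pick a different port, so this is not a usable workaround:

docker run -p 38170:38170 ubuntu perl server.pl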

Can you expose port ranges, perhaps 30000 and upwards, when starting the container? I've read other Stack Overflow questions on exposing port ranges (e.g. "Docker expose all ports or range of ports from 7000 to 8000"), but that seems to have performance issues, as a real proxy process is forked per published port(?). The ideal solution would somehow expose only the ports actually being used by the app inside the container, at runtime. Maybe that is accomplished by an orchestrator?
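
For reference, the range-publishing syntax I mean would look something like the line below (the range is an arbitrary example; note that with LocalPort => 0 the server may still pick an ephemeral port outside whatever range is published, and Docker's userland proxy traditionally starts one docker-proxy process per published port):

docker run -p 30000-30100:30000-30100 ubuntu perl server.pl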

ealeon
  • Could you link the post dealing with performance issues while using port ranges? And why exactly are you providing the source code for server and client? It is quite hard to get to the point, and I think you are not really willing to change your code anyway to work with fixed ports, so even an `nc -l -p $((RANDOM%9999))` would have proven what you are trying to achieve; I would omit these details. – Murmel Feb 13 '18 at 20:52
  • Regarding your question: you did not mention an argument against using the network option `host`, i.e. `--net=host`, so you would not need to bind/expose ports in the first place; instead, ports get bound directly when needed – Murmel Feb 13 '18 at 20:54
  • @Murmel it's legacy code that's being containerized, and we want it to work as-is in a container without changing the application. The code is versioned, and if we made infra changes (like --net=host) part of the application, we would have to apply that fix/feature to all previous versions. – ealeon Feb 13 '18 at 22:34
  • Ok, I'm not quite sure you got the point that the `--net=host` option is part of the Docker API (it was not a suggestion to add an additional parameter to your application), so I added this point to the list of possible options in my answer. But it could be that I missed your point about why this is not an option for you. – Murmel Feb 13 '18 at 23:41

1 Answer


Option 1 - Docker swarm & overlay network

Containers (to be more precise: services) within the overlay network of a Docker swarm expose all ports to each other by default. The overlay network acts more or less as a private network in this case. Hence, your client application would only be able to connect to the server container/service if it is also part of the swarm.
However, if this is not the case, you could still solve the situation by using the docker service update --publish-add API. This command makes it possible to change port exposure at runtime (which is more or less what you asked for). For further information, have a look at the "Publish ports on an overlay network" section:

Swarm services connected to the same overlay network effectively expose all ports to each other. For a port to be accessible outside of the service, that port must be published using the -p or --publish flag on docker service create or docker service update. Both the legacy colon-separated syntax and the newer comma-separated value syntax are supported. The longer syntax is preferred because it is somewhat self-documenting.

This is the best option if you want to use the Docker API to solve your problem, and it also deals perfectly with multi-host scenarios. But for the advanced part (where your client is not part of the swarm and you want to use the service update), you will have to wrap your container in a service.
Note: the --publish-add approach is not fully thought through, as docker service update works by restarting the container, so your application would very likely switch to another port; it could still be an option for a slightly different use case.
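
A rough sketch of what this could look like (the network, service, and image names here are made up for illustration, and the published port is whatever the server reports once it has started):

docker network create -d overlay my-overlay
docker service create --name putserver --network my-overlay myimage perl server.pl
# once the server has logged its port (say 38170), publish it at runtime:
docker service update --publish-add published=38170,target=38170 putserver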

Option 2 - --net=host

If all your server containers should run on one host anyway, you could easily use the --net=host feature of Docker and you don't have to explicitly expose any port.

Example:

docker run --net=host ubuntu perl server.pl
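
With host networking, the ephemeral port the server picks is bound directly in the host's network namespace, so the unmodified client.pl connects to localhost as before (the port number below is just the one from your sample output and will differ per run):

ss -ltnp | grep 38170     # the server's port shows up directly on the host
perl client.pl            # with $port set to the printed value, e.g. 38170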

Option 3 - port proxying

You could also use applications like socat or ssh to manually map the port of the server application to a predefined port for every container. See, for example, this answer to "Exposing a port on a live Docker container".
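
A minimal sketch with socat on the host, assuming the container is named mycontainer, sits on the default bridge network, and the server reported port 38170 (both the IP and the port have to be looked up per container):

CONTAINER_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' mycontainer)
# forward the fixed, predefined host port 30000 to the server's dynamic port inside the container
socat TCP-LISTEN:30000,fork,reuseaddr TCP:$CONTAINER_IP:38170

Your client.pl would then always connect to localhost:30000, regardless of which port the server picked inside the container.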

Murmel
  • thank you for this answer. I think what I need is a macvlan network instead of an overlay network, so that client.pl doesn't need to be in a container and can still connect to the server – ealeon Feb 14 '18 at 10:55