
I want to do REAL remote JMX management of a Spring Boot application running in a Docker container:

architecture sketch


I've read a lot of documentation and my understanding is that this should be the server-side configuration:

java \
    -Djava.rmi.server.hostname=10.0.2.15 \
    -Dcom.sun.management.jmxremote.port=8600 \
    -Dcom.sun.management.jmxremote.rmi.port=8601 \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.local.only=false \
    -jar my-spring-boot-app.jar 

The URL to use in JVisualVM should be service:jmx:rmi://10.0.2.15:8601/jndi/rmi://10.0.2.15:8600/jmxrmi.
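To make the URL's structure explicit (the two ports map directly to the two system properties above), here is a small shell sketch; the variable names are mine, the host and port values are the ones from my setup:

```shell
# RMI_PORT  -> com.sun.management.jmxremote.rmi.port (the RMI server)
# REG_PORT  -> com.sun.management.jmxremote.port     (the RMI registry)
JMX_HOST=10.0.2.15
REG_PORT=8600
RMI_PORT=8601
JMX_URL="service:jmx:rmi://${JMX_HOST}:${RMI_PORT}/jndi/rmi://${JMX_HOST}:${REG_PORT}/jmxrmi"
echo "$JMX_URL"
```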

BUT THIS FAILS (Failed to retrieve RMIServer stub) within JVisualVM (started on machine 1) - this is the log output:

    Caused: java.io.IOException: Failed to retrieve RMIServer stub
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:369)
        at com.sun.tools.visualvm.jmx.impl.JmxModelImpl$ProxyClient.tryConnect(JmxModelImpl.java:549)
    [catch] at com.sun.tools.visualvm.jmx.impl.JmxModelImpl$ProxyClient.connect(JmxModelImpl.java:486)
        at com.sun.tools.visualvm.jmx.impl.JmxModelImpl.connect(JmxModelImpl.java:214)

IT WORKS if I change the server application configuration to -Djava.rmi.server.hostname=172.19.0.6 (I use a BRIDGE docker network, therefore routing to 172.19.0.6 is possible). With this configuration I am able to do JMX monitoring if JVisualVM is started on the Docker host (machine 2). But this is NO REAL remote management because routing to 172.19.0.6 is usually impossible.
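The reason the hostname matters: the RMI registry hands the client a stub that embeds java.rmi.server.hostname, and the client then dials that address directly. So the value must be reachable from machine 1, not just from the Docker host. A sketch of the intended setup (the JAVA_OPTS pass-through is an assumption about how the image launches the jar):

```shell
# Publish both JMX ports and set the RMI hostname to machine 2's external
# address (the one machine 1 can dial), NOT the container-internal 172.19.0.6.
docker run -p 8600:8600 -p 8601:8601 \
    -e JAVA_OPTS="-Djava.rmi.server.hostname=10.0.2.15 \
        -Dcom.sun.management.jmxremote.port=8600 \
        -Dcom.sun.management.jmxremote.rmi.port=8601 \
        -Dcom.sun.management.jmxremote.ssl=false \
        -Dcom.sun.management.jmxremote.authenticate=false \
        -Dcom.sun.management.jmxremote.local.only=false" \
    my-spring-boot-app-image
```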


Some additional information:

Ports 8600 and 8601 are exposed and shown as LISTEN:

    pfh@workbench ~/temp/ % netstat -taupen | grep 860
    tcp6       0      0 :::8600      :::*      LISTEN      0      254349      -
    tcp6       0      0 :::8601      :::*      LISTEN      0      254334      -

and telnet 10.0.2.15 8600 from machine 1 is possible.

I get the same faulty behavior with Java 1.8.0_111 and 1.7.0_80, both in the Docker containers and on the Docker host (running JVisualVM).

BTW: this configuration works if the Spring Boot application is running on machine 2 directly (without Docker).

I know that JMX usually negotiates random ports, so I made them explicit in my configuration. There is also one additional property -Dcom.sun.aas.jconsole.server.cbport=8602 that can be set, but this did not solve the problem.
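A commonly used variant of this pinning (not what I tried above, just a sketch): set the registry port and the RMI server port to the SAME value, so only a single port needs to be published from the container:

```shell
# One port for both the RMI registry and the RMI server, so a single
# "-p 8600:8600" publish would suffice on the Docker side.
java \
    -Djava.rmi.server.hostname=10.0.2.15 \
    -Dcom.sun.management.jmxremote.port=8600 \
    -Dcom.sun.management.jmxremote.rmi.port=8600 \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Dcom.sun.management.jmxremote.local.only=false \
    -jar my-spring-boot-app.jar
```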

What is my mistake?

Pierre
  • "Port 8600, 8601 are exposed" meaning you ran both `docker run -p 8600:8600 -p 8601:8601 somedockerimage` to have those ports on your host, as well as the docker image also `EXPOSE`s those ports internally (like in http://stackoverflow.com/a/32806333/995891)? – zapl Nov 30 '16 at 08:48
  • exactly ... I don't know if it is important, but the containers are started/managed by `docker-compose` ... I will test if this changes something – Pierre Nov 30 '16 at 09:02
  • @zapl: wow ... it works when doing it with `docker run` but not via `docker-compose up ` ... this is my `docker-compose.yml`: `my-spring-boot-service: ... ports: - "8610:8610" - "8611:8611"` – Pierre Nov 30 '16 at 09:31

2 Answers


In my problem description I concealed that the docker container was started via docker-compose with this configuration:

    my-spring-boot-service:
        ...
        ports:
            - "8610:8610"
            - "8611:8611"
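Spelled out in full, the compose file looks roughly like this (version-1 compose format; the image name is a placeholder, only the service name and port mappings are taken from my actual file):

```yaml
# Sketch of the docker-compose.yml in use; image name is an assumption.
my-spring-boot-service:
  image: my-spring-boot-app-image
  ports:
    - "8610:8610"
    - "8611:8611"
```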

... and this results in open ports which seem to be bound to all interfaces as you can see via docker inspect my-spring-boot-app:

"NetworkSettings": { "Bridge": "", "SandboxID": "ac1a27e2696fd4ac2fcddf6e0935716304e348203ddbe1a0f8e31114cc6e289b", "HairpinMode": false, "LinkLocalIPv6Address": "", "LinkLocalIPv6PrefixLen": 0, "Ports": { "8610/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "8610" } ], "8611/tcp": [ { "HostIp": "0.0.0.0", "HostPort": "8611" } ],

I cannot see a problem here ... but this seems to be the problem because if I start the container via docker itself (as suggested by @zapl)

docker run -p 8610:8610 -p 8611:8611 my-spring-boot-app-image

IT WORKS - BUT NOT THE WAY I WANT - I want to use docker-compose.

There is a difference between the two deployments, visible via docker network inspect <foo>.

On the working docker network it looks this way:

    "Options": {
        "com.docker.network.bridge.default_bridge": "true",
        "com.docker.network.bridge.enable_icc": "true",
        "com.docker.network.bridge.enable_ip_masquerade": "true",
        "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
        "com.docker.network.bridge.name": "docker0",
        "com.docker.network.driver.mtu": "1500"
    },

On the non-working docker-compose network it looks this way:

    "Options": {},

Neither container configuration defines an explicit network; both use the default one.

QUESTIONS: Is there a configuration missing? Should I define a network explicitly in docker-compose?
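For illustration, defining the network explicitly in docker-compose might look like this (version-2 compose format; the network name and image name are placeholders, this is untested):

```yaml
# Sketch: declare a bridge network explicitly instead of relying on the
# implicit default network that docker-compose creates.
version: "2"
services:
  my-spring-boot-service:
    image: my-spring-boot-app-image
    ports:
      - "8610:8610"
      - "8611:8611"
    networks:
      - appnet
networks:
  appnet:
    driver: bridge
```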

docker-elk is a docker-compose-based deployment. I applied the JMX configuration described above to it and was able to do remote JMX on that machine.

My JMX configuration is exactly the same - MINE IS NOT WORKING :-(


OS/Arch: linux/amd64
docker version: 1.12.2
docker-compose versions: 1.8.0 (build f3628c7) and 1.9.0 (build 2585387)

Pierre

Maybe I should switch to JMXMP instead of JMX over RMI - https://github.com/oracle/docker-images/tree/master/OracleCoherence/docs/5.monitoring

Pierre