
I have set up a MySQL Cluster on my PC using the mysql/mysql-cluster image from Docker Hub, and it starts up fine. However, when I try to connect to the cluster from outside Docker (via the host machine) using ClusterJ, it doesn't connect.

Initially I was getting the following error: Could not alloc node id at 127.0.0.1 port 1186: No free node id found for mysqld(API)

So I created a custom mysql-cluster.cnf, very similar to the one distributed with the Docker image, but with an extra free [api] slot:

[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M


[ndb_mgmd]
NodeId=1
hostname=192.168.0.2
datadir=/var/lib/mysql

[ndbd]
NodeId=2
hostname=192.168.0.3
datadir=/var/lib/mysql

[ndbd]
NodeId=3
hostname=192.168.0.4
datadir=/var/lib/mysql

[mysqld]
NodeId=4
hostname=192.168.0.10

[api]

This is the configuration used for the ClusterJ setup:

com.mysql.clusterj.connect:
    host: 127.0.0.1:1186
    database: my_db
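
For reference, here is a minimal sketch of how these settings map onto the ClusterJ API (the class name and hard-coded values are just for illustration; the standard property keys are com.mysql.clusterj.connectstring and com.mysql.clusterj.database):

import java.util.Properties;

import com.mysql.clusterj.ClusterJHelper;
import com.mysql.clusterj.Session;
import com.mysql.clusterj.SessionFactory;

public class ClusterJConnectSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Connect string of the management server (same value as in the YAML above)
        props.setProperty("com.mysql.clusterj.connectstring", "127.0.0.1:1186");
        props.setProperty("com.mysql.clusterj.database", "my_db");

        // getSessionFactory() contacts ndb_mgmd, fetches the cluster
        // configuration and then connects to the data nodes it lists.
        SessionFactory factory = ClusterJHelper.getSessionFactory(props);
        Session session = factory.getSession();
        System.out.println("Connected: " + session);
        session.close();
        factory.close();
    }
}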

Here is the docker-compose config:

version: '3'

services:
    # Sets up the MySQL cluster ndb_mgmd process
    database-manager:
        image: mysql/mysql-cluster
        networks:
            database_net:
                ipv4_address: 192.168.0.2
        command: ndb_mgmd
        ports:
            - "1186:1186"
        volumes:
            - /c/Users/myuser/conf/mysql-cluster.cnf:/etc/mysql-cluster.cnf

    # Sets up the first MySQL cluster data node
    database-node-1:
        image: mysql/mysql-cluster
        networks:
            database_net:
                ipv4_address: 192.168.0.3
        command: ndbd
        depends_on:
            - database-manager

    # Sets up the second MySQL cluster data node
    database-node-2:
        image: mysql/mysql-cluster
        networks:
            database_net:
                ipv4_address: 192.168.0.4
        command: ndbd
        depends_on:
            - database-manager

    # Sets up the first MySQL server process
    database-server:
        image: mysql/mysql-cluster
        networks:
            database_net:
                ipv4_address: 192.168.0.10
        environment:
            - MYSQL_ALLOW_EMPTY_PASSWORD=true
            - MYSQL_DATABASE=my_db
            - MYSQL_USER=my_user
        command: mysqld

networks:
    database_net:
        ipam:
            config:
                - subnet: 192.168.0.0/16

When I try to connect to the cluster I get the following error: '127.0.0.1:1186' nodeId 0; Return code: -1 error code: 0 message: .

I can see that the app running ClusterJ registers with the cluster, but then it disconnects. Here is an excerpt from the Docker MySQL manager logs:

database-manager_1  | 2018-05-10 11:18:43 [MgmtSrvr] INFO     -- Node 3: Communication to Node 4 opened
database-manager_1  | 2018-05-10 11:22:16 [MgmtSrvr] INFO     -- Alloc node id 6 succeeded
database-manager_1  | 2018-05-10 11:22:16 [MgmtSrvr] INFO     -- Nodeid 6 allocated for API at 10.0.2.2

Any help solving this issue would be much appreciated.

1 Answer


Here is how ndb_mgmd handles the connection request from the ClusterJ application. ClusterJ connects to the MGM server on port 1186 and over this connection receives the cluster configuration, which contains the IP addresses of the data nodes. To reach the data nodes, ClusterJ will then try to connect to 192.168.0.3 and 192.168.0.4. Since ClusterJ runs outside Docker, I presume those addresses point somewhere different from the host's point of view.

The management server will also provide a dynamic port to use when connecting to the NDB data nodes. It is a lot easier to manage this by setting ServerPort explicitly for the NDB data nodes. I usually use 11860 as ServerPort; 2202 is also a popular choice.
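
For example, in the [ndbd default] section of the mysql-cluster.cnf shown in the question (a sketch; adjust the port to whatever fits your setup):

[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
ServerPort=11860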

I am not sure how you mix a Docker environment with an external environment. I assume it is possible to solve by setting up proper IP translation in the right places.
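
Docker's port publishing (which is implemented with iptables NAT rules) is one such translation. As a sketch only, with a fixed ServerPort the data node port could at least be published in the compose file (each data node would need its own host port):

    database-node-1:
        image: mysql/mysql-cluster
        networks:
            database_net:
                ipv4_address: 192.168.0.3
        command: ndbd
        ports:
            - "11860:11860"   # fixed ServerPort published to the host
        depends_on:
            - database-manager

The host would still need a way to reach 192.168.0.3 and 192.168.0.4, since those are the addresses the management server hands to ClusterJ, so this alone may not be enough.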