
When I start the first data node (10.1.1.103) of my MySQL Cluster 8.0 setup on Ubuntu 22.04 LTS, I get the following error:

# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO     -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO     -- Angel allocated nodeid: 2

When I start the second data node (10.1.1.105), I get the following error:

# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 11:10:04 [ndbd] INFO     -- Angel connected to '10.1.1.102:1186'
2023-01-02 11:10:04 [ndbd] ERROR    -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at 10.1.1.102:1186: Connection done from wrong host ip 10.1.1.105.'

The management node log file (/var/lib/mysql-cluster/ndb_1_cluster.log) reports:

2023-01-02 11:28:47 [MgmtSrvr] INFO     -- Node 2: Initial start, waiting for 3 to connect,  nodes [ all: 2 and 3 connected: 2 no-wait:  ]

What is the relevance of the message "Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory"?

Why is the data node on 10.1.1.105 unable to allocate a nodeid?

I initially installed a single Management Node on 10.1.1.102:

wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar

tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar

dpkg -i mysql-cluster-community-management-server_8.0.31-1ubuntu22.04_amd64.deb

mkdir /var/lib/mysql-cluster
vi /var/lib/mysql-cluster/config.ini

The configuration set up in config.ini:

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2  # Number of replicas

[ndb_mgmd]
# Management process options:
hostname=10.1.1.102 # Hostname of the manager
datadir=/var/lib/mysql-cluster  # Directory for the log files

[ndbd]
hostname=10.1.1.103 # Hostname/IP of the first data node
NodeId=2      # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files

[ndbd]
hostname=10.1.1.105 # Hostname/IP of the second data node
NodeId=3      # Node ID for this data node
datadir=/usr/local/mysql/data # Remote directory for the data files

[mysqld]
# SQL node options:
hostname=10.1.1.102 # In our case the MySQL server/client is on the same Droplet as the cluster manager

I then started the management server once, killed it, and created a systemd unit file for the Cluster Manager:

ndb_mgmd -f /var/lib/mysql-cluster/config.ini

pkill -f ndb_mgmd

vi /etc/systemd/system/ndb_mgmd.service

Adding the following configuration:

[Unit]
Description=MySQL NDB Cluster Management Server
After=network.target auditd.service

[Service]
Type=forking
ExecStart=/usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target

I then reloaded the systemd daemon to apply the changes, started and enabled the Cluster Manager, and checked its active status:

systemctl daemon-reload

systemctl start ndb_mgmd
systemctl enable ndb_mgmd

Here is the status of the Cluster Manager:

# systemctl status ndb_mgmd
● ndb_mgmd.service - MySQL NDB Cluster Management Server
     Loaded: loaded (/etc/systemd/system/ndb_mgmd.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2023-01-01 08:25:07 CST; 27min ago
   Main PID: 320972 (ndb_mgmd)
      Tasks: 12 (limit: 9273)
     Memory: 2.5M
        CPU: 35.467s
     CGroup: /system.slice/ndb_mgmd.service
             └─320972 /usr/sbin/ndb_mgmd -f /var/lib/mysql-cluster/config.ini

Jan 01 08:25:07 nuc systemd[1]: Starting MySQL NDB Cluster Management Server...
Jan 01 08:25:07 nuc ndb_mgmd[320971]: MySQL Cluster Management Server mysql-8.0.31 ndb-8.0.31
Jan 01 08:25:07 nuc systemd[1]: Started MySQL NDB Cluster Management Server.
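
To see which nodes the management server considers connected, the ndb_mgm management client can also be used on the management host (a generic check, assuming the NDB management client is installed there; this is not output from my setup):

ndb_mgm -e "SHOW"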

I then set up a data node on 10.1.1.103, installing dependencies, downloading the data node package, and setting up its configuration:

apt update && apt -y install libclass-methodmaker-perl

wget https://dev.mysql.com/get/Downloads/MySQL-Cluster-8.0/mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar

tar -xf mysql-cluster_8.0.31-1ubuntu22.04_amd64.deb-bundle.tar

dpkg -i mysql-cluster-community-data-node_8.0.31-1ubuntu22.04_amd64.deb

vi /etc/my.cnf

I entered the address of the Cluster Management Node in the configuration:

[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=10.1.1.102  # location of cluster manager

I then created a data directory and started the node:

mkdir -p /usr/local/mysql/data

ndbd

This is when I got the "Failed to open" error shown above on the first data node (10.1.1.103):

# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
2023-01-02 17:16:55 [ndbd] INFO     -- Angel connected to '10.1.1.102:1186'
2023-01-02 17:16:55 [ndbd] INFO     -- Angel allocated nodeid: 2

UPDATED (2023-01-02)

Thank you @MauritzSundell. I corrected the (private) IP addresses above and no longer get the following error:

# ndbd
Failed to open /sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory
ERROR: Unable to connect with connect string: nodeid=0,10.1.1.2:1186
Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5 4 3 2 1, failed.
2023-01-01 14:41:57 [ndbd] ERROR    -- Could not connect to management server, error: ''

Also @MauritzSundell, in order to use the ndbmtd process rather than the ndbd process, does any alteration need to be made to any of the configuration files (e.g. /etc/systemd/system/ndb_mgmd.service)?
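
From the comments below, my understanding is that ndbmtd works with exactly the same configuration files; to actually make use of more threads, something like the following would later be added to config.ini (a sketch only; the value 8 for MaxNoOfExecutionThreads is illustrative, not taken from my setup):

[ndbd default]
NoOfReplicas=2
# hypothetical addition so ndbmtd runs with up to 8 execution threads
MaxNoOfExecutionThreads=8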

What is the appropriate reference/tutorial documentation for MySQL Cluster 8.0? Is it the "MySQL NDB Cluster 8.0" excerpt at: https://downloads.mysql.com/docs/mysql-cluster-excerpt-8.0-en.pdf

Or is it "MySQL InnoDB Cluster" at: https://dev.mysql.com/doc/refman/8.0/en/mysql-innodb-cluster-introduction.html

Not sure I understand the difference.

  • The error says 10.1.1.2 is the expected IP address for the management server, but your config example says hostname=10.1.1.102. Can that be it? – Mauritz Sundell Jan 02 '23 at 10:46
  • A side note: the ndbd process always uses one thread for processing. One can use ndbmtd instead, which can be configured to use more threads and CPUs. – Mauritz Sundell Jan 02 '23 at 10:49
  • Thank you. I have updated my question with the IP address corrections. Greatly appreciate your response. – Steven J. Garner Jan 02 '23 at 17:48
  • The correct documentation is the "MySQL NDB Cluster 8.0" excerpt (also available in the MySQL reference manual: https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster.html). NDB data nodes form a scalable data storage cluster in themselves; the MySQL servers do not keep any table data themselves, but access the data from the NDB cluster. In InnoDB Cluster, you form a cluster of MySQL servers that each keep table data themselves using InnoDB tables, with group replication keeping the data in sync between servers; one server is the primary that handles writes. – Mauritz Sundell Jan 02 '23 at 18:39
  • The error `/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_list: No such file or directory` should not be critical; the data node should start anyway. – Mauritz Sundell Jan 02 '23 at 18:42
  • Using ndbmtd should work with exactly the same configuration (for example, [ndbd] should still be [ndbd]). To make use of more CPUs and threads, one would later add configuration for more threads; look for ndbmtd in https://dev.mysql.com/doc/refman/8.0/en/mysql-cluster-ndbd-definition.html – Mauritz Sundell Jan 02 '23 at 18:46
  • For the failure starting the second data node (NodeId=3), it may be that you need to add `--initial` when restarting `ndb_mgmd`; otherwise it will use the old configuration that it caches in files named like `ndb_1_config.bin.1`. – Mauritz Sundell Jan 02 '23 at 18:53
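
Per the last comment above, a sketch of how the management server could be restarted so that it re-reads config.ini instead of its cached binary configuration (paths as in the unit file earlier; --initial is the option mentioned in the comment):

systemctl stop ndb_mgmd
/usr/sbin/ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini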
