
I am running a MySQL Cluster with 4 data nodes, and 2 servers each running an access (API) node and a management node.

If I create a table using API node 1, it shows as not existing when I try to access it on API node 2. Can anyone explain why this is or how to correct it? The point of running 2 API nodes on 2 separate servers is redundancy.

Please see the SHOW output below (I have removed my IPs):

Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=5    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0)
id=6    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6, Nodegroup: 0, *)
id=7    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6, Nodegroup: 1)
id=8    @*.*.*.* (mysql-5.6.19 ndb-7.3.6, Nodegroup: 1)

[ndb_mgmd(MGM)] 2 node(s)
id=1    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6)
id=2    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6)

[mysqld(API)]   2 node(s)
id=3    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6)
id=4    @*.*.*.*  (mysql-5.6.19 ndb-7.3.6)

If you require more information to answer please ask and I will update my question.

1 Answer

Are you using the correct storage engine? If a table is to be "clustered" (stored on the cluster data nodes), it must be created with ENGINE=NDBCLUSTER.
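
For example, a clustered table would be created with something like the following (t1 and its columns are just placeholder names):

-- ENGINE=NDBCLUSTER stores the table on the data nodes,
-- so it is visible from every SQL/API node in the cluster
CREATE TABLE t1 (
    id   INT NOT NULL PRIMARY KEY,
    name VARCHAR(50)
) ENGINE=NDBCLUSTER;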

Tables created as InnoDB or MyISAM will be stored locally on the MySQL server they are created on and will not be accessible from the other MySQL API node.
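
To check which engine your existing tables are using, you can query information_schema on the API node where you created them, for example (mydb is a placeholder for your schema name):

-- lists each table in the schema with its storage engine;
-- anything not showing ndbcluster is local to that server
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'mydb';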

Converting an existing table to the NDB engine can be done with an ALTER TABLE (my_table below is a placeholder for your table name):

ALTER TABLE my_table ENGINE=NDBCLUSTER;
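
Once the ALTER TABLE has completed, the table should be visible from the other API node as well; running SHOW CREATE TABLE my_table; on that node should report ENGINE=ndbcluster.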
