
I've followed this tutorial and have a three-server cluster set up behind an NGINX reverse proxy.

https://www.digitalocean.com/community/tutorials/how-to-configure-a-galera-cluster-with-mariadb-on-ubuntu-18-04-servers

I can create a database or table on any of the servers in the cluster, and it replicates nicely across. I then exported all tables from our app, first as one SQL dump, and imported it into one of the nodes.
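Roughly, the dump and import looked like this (the database name appdb and connection details are placeholders, not the exact commands):

    # dump everything as one consistent snapshot
    mysqldump --single-transaction --routines --triggers appdb > appdb.sql
    # load it into a single node and let Galera replicate the writes
    mysql -u root -p appdb < appdb.sql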

Some of the larger tables (we're talking about 1 GB in total, not massive data here) were created and populated with data on the node I'm importing on, but didn't replicate across to the other two nodes.

So I dropped the database, then imported a structure-only dump, and that was fine. I then exported one file per table :/

Importing all of the small tables was fine, but the larger imports again only landed on the node I'm importing into.
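For what it's worth, a quick way to check whether each node is in sync and whether the big tables actually made it across is to compare the wsrep status and row counts on every node (the schema and table names below are placeholders):

    -- run on each of the three nodes
    SHOW STATUS LIKE 'wsrep_cluster_size';          -- should report 3 everywhere
    SHOW STATUS LIKE 'wsrep_local_state_comment';   -- should say 'Synced'
    SELECT COUNT(*) FROM appdb.big_table;           -- row counts should match across nodes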

I've set my load balancer to only send traffic to that 'master' node for now.
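The 'only one node' part is just the NGINX stream upstream with the other two servers disabled, something along these lines (IPs are placeholders for my three VMs):

    stream {
        upstream galera {
            server 10.0.0.1:3306;     # the 'master' node takes all traffic for now
            # server 10.0.0.2:3306;   # re-enable once replication is sorted
            # server 10.0.0.3:3306;
        }
        server {
            listen 3306;
            proxy_pass galera;
        }
    }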

Is there a way to force flush the data across the 3 servers?

Server setup:

  • 3 identical Ubuntu 18.04 VMs
  • Same physical host
  • 10G internal network

MrPHP

1 Answer


See this discussion about Galera's "critical read" solution: http://mysql.rjweb.org/doc.php/galera#critical_reads

This guarantees that the node serving your read has applied everything already committed elsewhere in the cluster before it answers, so the data you just wrote is there when you read it back.
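In practice that means turning on wsrep_sync_wait (called wsrep_causal_reads on older releases) for the session, or just the statement, that needs a guaranteed-fresh read. A rough sketch, with a placeholder table name:

    -- make the following SELECTs wait until this node has applied
    -- every writeset already committed anywhere in the cluster
    SET SESSION wsrep_sync_wait = 1;
    SELECT COUNT(*) FROM appdb.big_table;
    -- restore the default, non-waiting behaviour
    SET SESSION wsrep_sync_wait = 0;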

Rick James