
I am running the command ALTER TABLE beijing_xiaohongmen TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;, where ts_1 has two .dat files of 1024M each and beijing_xiaohongmen is an InnoDB table.
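For context, ts_1 was created roughly like this (a sketch; the file names, the logfile group, and the undo file size are assumptions, only the two 1024M data files are from my setup):

CREATE LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_1.log'        -- assumed name/size; a logfile group is required before a disk tablespace
    INITIAL_SIZE 128M
    ENGINE NDBCLUSTER;

CREATE TABLESPACE ts_1
    ADD DATAFILE 'ts_1_data_1.dat'   -- first 1024M data file
    USE LOGFILE GROUP lg_1
    INITIAL_SIZE 1024M
    ENGINE NDBCLUSTER;

ALTER TABLESPACE ts_1
    ADD DATAFILE 'ts_1_data_2.dat'   -- second 1024M data file
    INITIAL_SIZE 1024M
    ENGINE NDBCLUSTER;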

However, I get ERROR 1114 (HY000): The table '#sql-4d3e_a' is full. By the way, the command ALTER TABLE beijing_xiaohongmen ENGINE NDBCLUSTER; works perfectly, which means I have enough DataMemory and IndexMemory. I have checked my disk and RAM as well; both have enough space.
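(For completeness, actual data and index memory usage can also be checked from the management client with `ndb_mgm -e "ALL REPORT MEMORYUSAGE"`.)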

Here are my settings:

Some of the configuration in config.ini:

[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=1    # Number of fragment replicas
DataMemory=2048M    # How much memory to allocate for data storage
IndexMemory = 2048M  # How much memory to allocate for index storage
MaxNoOfExecutionThreads = 8
# NoOfFragmentLogFiles = 1125

and my.cnf:

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine
datadir=/usr1/DBData/mysql_cluster/mysqld/
socket=/tmp/mysql.sock
user=mysql
max_heap_table_size = 1024M
tmp_table_size = 2048M


[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=x.xxx.x.xxx   # location of management server


[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

Can anyone help? Thanks a lot!


1 Answer


Interesting: after several days of searching and struggling, I got lucky. Though I still have not solved the error itself, I found another way to do the same task. If you have the actual solution, please write it down.

First, create the target table with the same structure, using the NDB engine and the ts_1 tablespace (see the sketch below).
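For example (the column list here is a hypothetical placeholder; take the real one from SHOW CREATE TABLE beijing_xiaohongmen):

CREATE TABLE beijing_xiaohongmen_ndb (
    id INT NOT NULL,          -- placeholder columns; copy the real definitions
    payload VARCHAR(100),
    PRIMARY KEY (id)
) TABLESPACE ts_1 STORAGE DISK ENGINE NDBCLUSTER;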

Second, use INSERT INTO new_table SELECT * FROM old_table. Remember to enlarge MaxNoOfConcurrentOperations. The NDB engine is very odd here: it does not actually write the rows until it has read the whole insert statement, so a big INSERT ... SELECT counts as one huge transaction. This behavior is terrible.
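In config.ini that is a line like the one below (the value is an assumption; size it to the largest single statement you plan to run, and note that changing it requires restarting the data nodes):

[ndbd default]
# Assumed value: allow roughly one million row operations per transaction.
MaxNoOfConcurrentOperations=1000000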

Meanwhile, a data node may crash if you insert a huge table in one go (about 1,000,000 rows in my case). So you can split the copy into pieces; I inserted 500,000 rows at a time using LIMIT (see the sketch below). You may also need to use ORDER BY id ASC, because I found that SELECT does not necessarily return rows in ascending order, even though id is my primary key and the data in the old table is in ascending order.
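A minimal sketch of the batched copy, assuming the hypothetical target table name from the previous step and a 500,000-row batch size:

-- Copy rows in fixed-size batches, ordered by the primary key.
INSERT INTO beijing_xiaohongmen_ndb
    SELECT * FROM beijing_xiaohongmen ORDER BY id ASC LIMIT 0, 500000;

INSERT INTO beijing_xiaohongmen_ndb
    SELECT * FROM beijing_xiaohongmen ORDER BY id ASC LIMIT 500000, 500000;

-- ...and so on, advancing the offset by 500,000 until no more rows are inserted.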

Finally, the table ends up as an NDB engine table stored on disk in the tablespace.

In the end, I want to say it again: MySQL Cluster is awful, or I am awful (maybe the latter makes more sense, haha). It takes about 3 GB to store a table of 3.8 million rows?! As far as I can tell, I must keep adding .dat files to the TABLESPACE over and over again. Having to manage file space by hand for such a well-known database also feels terrible. Of course, MySQL Cluster still has a lot of room for improvement.

  • If you use VARCHAR (or VARBINARY) columns they will always take their maximal size on disk. That is, a VARCHAR(100) CHARSET=utf8mb4 column will always use 400 bytes on disk. Is that the case? (The same applies in memory if the column is part of a PRIMARY KEY or UNIQUE KEY.) – Mauritz Sundell Sep 15 '22 at 09:16
  • Big transactions are typically only a problem when doing initial loads of data; in these cases the transaction guarantees are often not needed and can be turned off using the `ndb_use_transactions` variable (ALTER TABLE ... ENGINE NDB does that implicitly). From within a mysql session: `SET ndb_use_transactions=0; ... do import statements ...; SET ndb_use_transactions=1;`. – Mauritz Sundell Sep 15 '22 at 09:24
  • `SELECT` without an `ORDER BY` clause is not guaranteed to return an ordered result. Ordering typically comes for free for InnoDB tables when the query executor chooses to scan the table. NDB spreads row data across all data nodes using a hash index on the primary key (or part of it). One can mimic the same behavior with a partitioned InnoDB table: `CREATE TABLE t (pk INT PRIMARY KEY, ...) PARTITION BY KEY(pk) PARTITIONS 8`, insert many rows, and do `SELECT * FROM t`. – Mauritz Sundell Sep 15 '22 at 09:51