
I am getting this error when copying one table to another, even though there is plenty of disk space (36 GB) and memory (9 GB).

insert into time_series_funds_mem select * from time_series_funds;
MySQL said:
#5 - Out of memory (Needed 260748 bytes) 
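
For reference, MEMORY tables are capped server-side by `max_heap_table_size` (and implicit in-memory temporary tables by `tmp_table_size`); both are standard MySQL variables and can be checked like this:

-- Server-side caps that apply to MEMORY / in-memory temp tables
SHOW VARIABLES LIKE 'max_heap_table_size';
SHOW VARIABLES LIKE 'tmp_table_size';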

The table time_series_funds has about 16 million rows and a total size of ~3.2 GB; the storage engine is InnoDB. This is my my.cnf file:

# The following options will be passed to all MySQL clients
[client]
port        = 3300
socket      = "${path}/binaries/mysql/mysql.sock"

# Here follows entries for some specific programs

# The MySQL server
[mysqld]

tmp_table_size=7096M
max_heap_table_size=7096M


# The TCP/IP Port the MySQL Server will listen on
port = 3300

#Path to installation directory. All paths are usually resolved relative to this.
basedir = "${path}/binaries/mysql/"

#Path to the database root
datadir = "${path}/binaries/mysql/data/"

# The default storage engine that will be used when creating new tables
default-storage-engine = MYISAM

bind-address    = 127.0.0.1
socket          = "${path}/binaries/mysql/mysql.sock"
log_error       = "${path}/binaries/mysql/data/mysql_error.log"
skip-external-locking
key_buffer_size = 160M
max_allowed_packet = 100M
table_open_cache = 64
net_buffer_length = 8K


skip-external-locking
key_buffer_size = 16M
max_allowed_packet = 1M
table_open_cache = 64
sort_buffer_size = 512K
net_buffer_length = 8K
read_buffer_size = 256K
read_rnd_buffer_size = 512K
myisam_sort_buffer_size = 8M


log-bin=mysql-bin

# binary logging format - mixed recommended
binlog_format=mixed
server-id   = 1


innodb_data_home_dir = "${path}/binaries/mysql/data/"
innodb_data_file_path = ibdata1:200M:autoextend
innodb_log_group_home_dir = "${path}/binaries/mysql/data/"
innodb_buffer_pool_size = 3024M
innodb_log_file_size = 512M
innodb_log_buffer_size = 200M
innodb_flush_log_at_trx_commit = 1
#innodb_flush_method = normal

innodb_lock_wait_timeout = 50

innodb_write_io_threads = 16
[mysqldump]
quick
max_allowed_packet = 16M

[mysql]
no-auto-rehash
# Remove the next comment character if you are not familiar with SQL
#safe-updates

[isamchk]
key_buffer = 20M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer_size = 5000M
sort_buffer_size = 20M
read_buffer = 2M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout

This is the structure of the table:

CREATE TABLE `time_series_funds_mem` (
 `id` int(11) NOT NULL AUTO_INCREMENT,
 `tickerid` int(10) unsigned NOT NULL,
 `ticker` varchar(6) COLLATE utf32_bin DEFAULT NULL,
 `fund_ticker` varchar(8) COLLATE utf32_bin NOT NULL,
 `date` date DEFAULT NULL,
 `open` decimal(13,4) unsigned DEFAULT NULL,
 `high` decimal(13,4) unsigned DEFAULT NULL,
 `low` decimal(13,4) unsigned DEFAULT NULL,
 `close` decimal(13,4) unsigned DEFAULT NULL,
 `adjusted_close` decimal(13,4) unsigned DEFAULT NULL,
 `volume` bigint(20) unsigned DEFAULT NULL,
 `dividend_amount` decimal(13,4) unsigned DEFAULT NULL,
 `split_coefficient` decimal(13,4) unsigned DEFAULT NULL,
 `return_perc` decimal(13,4) DEFAULT NULL,
 `has_return_perc` bit(1) NOT NULL DEFAULT b'0',
 `has_return_calc` bit(1) NOT NULL DEFAULT b'0',
 `return_YTD` decimal(13,6) NOT NULL,
 `return_1Y` decimal(13,6) NOT NULL,
 `return_3Y` decimal(13,6) NOT NULL,
 `return_5Y` decimal(13,6) NOT NULL,
 `return_all` decimal(13,6) NOT NULL,
 `var_95` decimal(13,6) NOT NULL,
 PRIMARY KEY (`id`),
 KEY `tickerid` (`tickerid`),
 KEY `ticker` (`ticker`)
) ENGINE=MEMORY DEFAULT CHARSET=utf32 COLLATE=utf32_bin;
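
A quick way to compare what the server itself reports for the two tables is the standard `information_schema.tables` view (a diagnostic sketch; run it in the schema that holds the tables):

-- Reported row count, average row length, and data size per table
SELECT table_name, engine, table_rows, avg_row_length,
       ROUND(data_length / 1024 / 1024) AS data_mb
FROM information_schema.tables
WHERE table_schema = DATABASE()
  AND table_name IN ('time_series_funds', 'time_series_funds_mem');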

This is the file I am trying to import: http://167.99.242.73/funds.tar.gz.

  • Your server does not have enough RAM to store Your data. It may be 3.2 GB as InnoDB, but keeping the data in memory may consume more. Try monitoring memory in a separate terminal using `htop` while You're inserting into the mem table – num8er Feb 05 '19 at 13:06
  • Try inserting half of the records to see how much it consumes. – num8er Feb 05 '19 at 13:13
  • @num8er I did that, and it didn't consume much memory (only about 2 GB). – H Aßdøµ Feb 05 '19 at 18:39
  • How about writing a small script that takes small portions of the table and inserts them, say 1000 rows at a time, until it reaches the end of the table (see the sketch after these comments)? Because as I see it, memory is consumed for the result set and then again for the MEMORY table. – num8er Feb 05 '19 at 18:48
  • Also You can make an SQL or CSV dump of the table and import it into the memory table, using the MySQL restore-from-dump procedure or `LOAD DATA` from CSV in the MySQL console – num8er Feb 05 '19 at 18:53
  • @num8er If it fails to insert, then it will also fail to import. – H Aßdøµ Feb 05 '19 at 20:03
  • It should not fail to import, because when You do INSERT ... SELECT it consumes memory to fetch the data and then again to store the inserted rows. But when You have an SQL dump, every insert runs separately, one by one, so memory is consumed only for storing the memory table – num8er Feb 05 '19 at 21:29
  • @num8er I tried to import via CSV and got the same error message. – H Aßdøµ Feb 06 '19 at 01:15
  • Can You describe the table fields and types, to calculate the real data size? I don't believe it's 3.2 GB; simple math, `16000000 rows x 1024 bytes`, makes 16 GB. So please add the table structure to Your question; maybe there is something to compact. – num8er Feb 06 '19 at 06:15
  • @num8er I edited my question and added the table structure. – H Aßdøµ Feb 06 '19 at 17:09
  • Nice. What kind of data are You storing in utf32_bin? Can You add an example of one row? I'm going to compare different encodings and use char instead of varchar so rows have a compact, predictable size for performance. So I need example data too – num8er Feb 06 '19 at 20:05
  • @num8er All the data is English text and numbers. – H Aßdøµ Feb 06 '19 at 22:18
  • So I've done the comparison and got this result: http://joxi.ru/12MVVEhl4QODAJ. Simply change `varchar` to `char`, `utf32_bin` to `latin1_bin` (or `latin1_general_ci`), and `utf32` to `latin1`, then create the table and import the data. This way each row will be 127 bytes instead of 171. I hope it will import... – num8er Feb 07 '19 at 02:12
  • @num8er I tried your suggestion and changed the types; unfortunately it still fails to import. What I find strange is that I have more than enough memory, yet it fails. – H Aßdøµ Feb 07 '19 at 10:22
  • Sad that I have to say it: enable swap, or create a swap file (~32 GB in size) and add it – num8er Feb 07 '19 at 11:22
  • As I understand it, MyISAM has table-level locks, InnoDB has slow performance for time series, and You want to keep all the "garbage" in memory. So how about picking a different database for time-series data? For example, I prefer Postgres because it has comparatively better performance (I have tested with equal table structures) and a very nice PL/pgSQL language that helps with analytical queries on big tables. – num8er Feb 07 '19 at 23:30
  • @num8er Eventually I had to increase the swap file capacity. Can you put this as an answer and I will accept it? – H Aßdøµ Mar 12 '19 at 18:44
  • You can post the answer Yourself. It's also correct to answer Your own question, since it's the result of research on the issue. – num8er Mar 12 '19 at 20:12
  • Thanks, I know, but originally you suggested it. – H Aßdøµ Mar 12 '19 at 23:10
  • I'm not hunting for reputation. Glad it somehow helped. But please post Your solution, or just delete the question. – num8er Mar 13 '19 at 01:32
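
Following up on the batching suggestion in the comments above, a minimal sketch of such a copy loop as a MySQL stored procedure might look like this. The procedure name `copy_in_batches` and the 100,000-row batch size are illustrative assumptions, and it assumes `id` values are reasonably dense:

-- Hypothetical batching procedure: copy rows in id ranges so the
-- INSERT ... SELECT never materializes the whole 16M-row result set.
-- (DELIMITER is a mysql client command, so run this from the console.)
DELIMITER //
CREATE PROCEDURE copy_in_batches()
BEGIN
  DECLARE max_id INT;
  DECLARE start_id INT DEFAULT 0;
  DECLARE batch INT DEFAULT 100000;  -- assumed batch size; tune as needed
  SELECT MAX(id) INTO max_id FROM time_series_funds;
  WHILE start_id <= max_id DO
    INSERT INTO time_series_funds_mem
      SELECT * FROM time_series_funds
      WHERE id > start_id AND id <= start_id + batch;
    SET start_id = start_id + batch;
  END WHILE;
END //
DELIMITER ;

CALL copy_in_batches();

This keeps only one batch in flight at a time, though the MEMORY table itself still has to fit within `max_heap_table_size`.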
