
I am working on a PHP/MySQL program that needs its own MySQL table, and I want to include a sample database for testing/learning purposes.

I want to use a PHP installation script to automate creating the MySQL table and inserting the sample data.

The latest versions of MySQL default the storage engine to InnoDB, and I can successfully create the MySQL table using PHP - so the table comes out as InnoDB.

The problem comes when I try to import the sample database (from a CSV file) - out of 1800 records, only 500 are imported before PHP times out.

I have come up with a possible solution.

  1. Create the MySQL table with the MyISAM engine - using CREATE TABLE $table_name ...... ENGINE=MyISAM

  2. Import the records from the CSV file into the MyISAM table - using INSERT INTO $table_name .......

  3. Finally, change the table's engine from MyISAM to InnoDB - using ALTER TABLE $table_name ENGINE=InnoDB

This three-step process works MUCH faster and completes well before the PHP script times out.
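A minimal sketch of what the installation script might look like, assuming an existing mysqli connection in $mysqli; the table name, column names, and CSV layout below are placeholders, not a real schema:

<?php
// Sketch of the three-step approach; assumes an existing mysqli
// connection in $mysqli. Table/column names and the CSV layout
// are placeholders.
$table_name = 'sample_data';

// Step 1: create the table with the MyISAM engine.
$mysqli->query("CREATE TABLE $table_name (
    id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(100),
    value INT
) ENGINE=MyISAM");

// Step 2: insert the CSV records into the MyISAM table.
if (($handle = fopen('sample.csv', 'r')) !== false) {
    $stmt = $mysqli->prepare("INSERT INTO $table_name (name, value) VALUES (?, ?)");
    while (($row = fgetcsv($handle)) !== false) {
        $stmt->bind_param('si', $row[0], $row[1]);
        $stmt->execute();
    }
    fclose($handle);
}

// Step 3: switch the finished table over to InnoDB.
$mysqli->query("ALTER TABLE $table_name ENGINE=InnoDB");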

I have checked the InnoDB table and its data using phpMyAdmin and all appears to be OK.

Can anyone find fault with this method, and if so, can you offer an easy solution?

Monty Davies
  • Are you entering all the lines in a single transaction? Starting and committing transactions is expensive, and I would guess even more so on InnoDB. Try to start the transaction before the import, and commit once completed (you might want to turn autocommit off). See http://php.net/manual/en/mysqli.begin-transaction.php – Oliver Jan 17 '16 at 11:19
  • Check your PHP setup for `max_execution_time` http://php.net/manual/en/info.configuration.php#ini.max-execution-time, and do not insert values one by one; use: `INSERT INTO tbl_name (a,b,c) VALUES(1,2,3),(4,5,6),(7,8,9);` (see the sketch after these comments) – manRo Jan 17 '16 at 11:25
  • See also http://dev.mysql.com/doc/refman/5.0/en/optimizing-innodb-bulk-data-loading.html – manRo Jan 17 '16 at 12:12
  • Suggestion: Switch to `microtime(true)`. – Rick James Jan 17 '16 at 18:27
  • No wonder [InnoDB](http://stackoverflow.com/questions/2222861/why-do-mysql-innodb-inserts-updates-on-large-tables-get-very-slow-when-there-a) [is](http://dba.stackexchange.com/questions/65753/slow-insert-update-on-innodb) [slow](https://www.quora.com/Why-would-an-INSERT-query-be-slow-in-MySQL) [while](https://www.percona.com/forums/questions-discussions/mysql-and-percona-server/11370-innodb-inserts-updates-per-second-is-too-low) [inserting](http://stackoverflow.com/questions/9819271/why-is-mysql-innodb-insert-so-slow) – Ejaz Jan 17 '16 at 18:43
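Putting the first two comments together - one transaction around the whole import, and multi-row INSERTs in batches - here is a hypothetical sketch; the connection details, table, and column names are placeholders:

<?php
// Hypothetical sketch of a batched, single-transaction import.
// Connection details, table and column names are placeholders.
$mysqli = new mysqli('localhost', 'user', 'pass', 'testdb');
$mysqli->begin_transaction();           // one transaction for the whole import

$batch = [];
if (($handle = fopen('sample.csv', 'r')) !== false) {
    while (($row = fgetcsv($handle)) !== false) {
        $batch[] = "('" . $mysqli->real_escape_string($row[0]) . "', " . (int) $row[1] . ")";
        if (count($batch) >= 100) {     // flush every 100 rows in one statement
            $mysqli->query("INSERT INTO sample_data (name, value) VALUES " . implode(',', $batch));
            $batch = [];
        }
    }
    if ($batch) {                       // flush any remainder
        $mysqli->query("INSERT INTO sample_data (name, value) VALUES " . implode(',', $batch));
    }
    fclose($handle);
}

$mysqli->commit();                      // single commit at the end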

2 Answers


This happens with any Apache/PHP/MySQL installation. You need to raise PHP's max execution time in order to let the script load large files into MySQL.

I would recommend you carefully study the php.ini file and understand how these settings are handled on the backend.
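If raising the limit is acceptable, it does not have to be done in php.ini; as a sketch (subject to the host allowing it):

<?php
// Sketch: raising the limit from the script itself rather than
// editing php.ini (hosts running PHP in restricted modes may
// ignore these calls).
set_time_limit(300);                      // allow up to 5 minutes
// ...or equivalently:
ini_set('max_execution_time', '300');
// ... run the import here ...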

unixmiah
  • Thanks. I would prefer not to change the max execution time from whatever the default value is, since I could not expect users of my program to be able to make changes to php.ini on their own servers. If this is the only alternative, then I see no reason why Method 2 could not be used, since this runs without any modification to php.ini - unless there are 'hidden problems' with tables that have been altered from MyISAM to InnoDB? – Monty Davies Jan 17 '16 at 17:05
  • As an aside - I have been able to perform Method 1 by enclosing the routine with: set_time_limit(0); .... routine ...... set_time_limit(60); but I would prefer not to go down this route since I am not sure hosting companies would approve. – Monty Davies Jan 17 '16 at 17:06

The processing would be even faster if you did not do so much work.

LOAD DATA INFILE ...

will load the entire CSV file in one step, without your having to open + read + parse + INSERT each row one by one.
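From PHP that might look like the following sketch - note that the LOCAL variant has to be enabled on both the client and the server, and the file path, table, and column names here are placeholders:

<?php
// Sketch: one-statement import via LOAD DATA LOCAL INFILE.
// LOCAL must be enabled on both client and server; the path,
// table, and columns are placeholders.
$mysqli = mysqli_init();
$mysqli->options(MYSQLI_OPT_LOCAL_INFILE, true);   // allow LOCAL INFILE
$mysqli->real_connect('localhost', 'user', 'pass', 'testdb');

$mysqli->query("
    LOAD DATA LOCAL INFILE '/path/to/sample.csv'
    INTO TABLE sample_data
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"'
    LINES TERMINATED BY '\\n'
    (name, value)
");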

If you need to manipulate any of the columns, then these steps are more general, yet still much faster than either of your methods:

CREATE TABLE tmp ... ENGINE=CSV;  -- and point to your file
INSERT INTO real_table
    SELECT ... the columns, suitably manipulated in SQL
        FROM tmp;

No loop, no OPEN, no parsing.
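One practical detail worth knowing if you try the CSV-engine route: the CSV storage engine stores the table's data as a plain text file (tmp.CSV) in the database directory, and it requires every column to be declared NOT NULL, so the temporary table's definition has to match your file accordingly.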

Rick James