I have written a script to restore a huge mysqldump file in parallel
https://gist.github.com/arunp0/4c34caee2432a6f11b91b99dfd2c5ef3
Is it okay to split the dump file into parts and restore them in parallel?
Also, are there any suggestions for improvements to reduce the restore time?
To explain how the script works:
sed -n -e '/DROP TABLE/,/--/p' "${restoreFile}" > "${tableMetaFile}"
This creates a new SQL file containing only the DROP TABLE and CREATE TABLE statements.
That file is then restored first (not in parallel), so all tables exist before any data is loaded.
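Roughly, that schema-only restore is a single mysql invocation (sketch only, using the same variable names as below; the exact call is in the gist):

# Restore the DROP TABLE / CREATE TABLE file in one pass, before any parallel work
mysql --protocol=TCP --host "$HOST" --port "$PORT" -u "${USER}" -p"${PASSWORD}" -f "${DATABASE}" < "${tableMetaFile}"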
# Creates a new file containing only the data for a single table (TableName)
grep -E "^INSERT INTO \`${TableName}\`" "${restoreFile}" > "tmp/${TableName}.sql"
# That file is split into as many chunks as there are CPUs (split -n l/N splits on line boundaries, so no INSERT statement is cut in half)
split -n l/$(nproc) --additional-suffix=.sql "tmp/${TableName}.sql" tmp/"${TableName}"/output.
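For completeness, those two steps run once per table; the loop looks roughly like this (the way the table names are obtained here is only illustrative, the gist may do it differently):

# Illustrative only: derive table names from the CREATE TABLE statements in the meta file
grep -oP 'CREATE TABLE `\K[^`]+' "${tableMetaFile}" | while read -r TableName; do
    mkdir -p "tmp/${TableName}"
    grep -E "^INSERT INTO \`${TableName}\`" "${restoreFile}" > "tmp/${TableName}.sql"
    split -n l/$(nproc) --additional-suffix=.sql "tmp/${TableName}.sql" "tmp/${TableName}/output."
done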
# This command restores each chunk (wrapped between pre.sql and post.sql) in the background
pv -f tmp/meta/pre.sql "${item}" tmp/meta/post.sql | mysql --protocol=TCP --port "$PORT" --host "$HOST" -u "${USER}" -p"${PASSWORD}" -f "${DATABASE}" &
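Putting that last step together, the parallel part is essentially a loop that backgrounds one mysql client per chunk and then waits for them all (simplified sketch; the gist has the full details):

# Launch one client per chunk of the current table, then wait for all of them
for item in tmp/"${TableName}"/output.*.sql; do
    pv -f tmp/meta/pre.sql "${item}" tmp/meta/post.sql | mysql --protocol=TCP --port "$PORT" --host "$HOST" -u "${USER}" -p"${PASSWORD}" -f "${DATABASE}" &
done
wait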