I have a website with very specific JS code associated with a large, unchanging MySQL database. I want to be able to distribute the website plus database as a single package for others to be able to run locally, so I have been investigating doing this using docker. Note that I'm not using docker for testing the running app: merely for distributing it for others to look at.
By my understanding, docker images that run a populated MySQL database usually load it up from a `.sql` file after starting the DB. However, for my database contents this results in an hour-long wait, since the `.sql` dump is many gigabytes and takes that long to import. I was therefore thinking of loading the data into the running database once, keeping the MySQL data directory (`/var/lib/mysql`) inside the image rather than in a volume, and creating a snapshot of the populated container with `docker commit`, roughly as sketched below.
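Concretely, the workflow I have in mind looks roughly like this (the image, container, and database names and the password are placeholders; it also assumes a MySQL image whose data directory is not declared as a Docker `VOLUME`, e.g. a small custom image built `FROM mysql`, since otherwise the imported data would land in an anonymous volume and would not be captured by the commit):

```bash
# 1. Start a throwaway container from a MySQL image whose data directory
#    lives in the container filesystem (no VOLUME declaration), so that
#    the imported data becomes part of the image layers when committed.
docker run -d --name seed \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=mydb \
  my-custom-mysql

# 2. Once the server is ready, import the multi-gigabyte dump.
#    This is the hour-long step, but it only ever runs once, on my machine.
docker exec -i seed mysql -uroot -psecret mydb < dump.sql

# 3. Snapshot the populated container as a new, pre-populated image.
docker commit seed myapp-db:prepopulated

# 4. Export the image as a single file that I can hand to other people.
docker save myapp-db:prepopulated | gzip > myapp-db.tar.gz
```

Recipients would then just `docker load -i myapp-db.tar.gz` and `docker run` the result, with no import step at all.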
However, this approach seems to go against many standard docker recommendations: `docker commit` is usually frowned upon, and `/var/lib/mysql` is normally stored in a separate data volume, not saved in the image itself. Nevertheless, my use-case seems different, because (a) the data in the database is not intended to change in the future, (b) it takes a long time to load from a mysql dump, and (c) the large data store (rather than just the JS app code) is one of the main things that I actually want to include in the image.
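For reference, the conventional setup I would be deviating from looks something like this (again a sketch with placeholder names), where the dump is imported automatically on first start via `/docker-entrypoint-initdb.d` and the data lives in a named volume:

```bash
# Conventional pattern: data in a named volume, dump imported on first start.
docker volume create dbdata

docker run -d --name db \
  -e MYSQL_ROOT_PASSWORD=secret \
  -e MYSQL_DATABASE=mydb \
  -v dbdata:/var/lib/mysql \
  -v "$PWD/dump.sql":/docker-entrypoint-initdb.d/dump.sql:ro \
  mysql:8.0
```

The problem is that everyone who runs this pays the hour-long import cost on their own machine, which is exactly what I want to avoid.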
So is my use-case a valid reason to break convention and use `docker commit` together with saving the MySQL files in the image itself rather than in a separate data volume? Or is there an alternative, more standard way of distributing a fully working, fully-populated web app with a large, fixed database store?