Before I begin, let me clear up a few misconceptions and define some terminology for users new and old. First off, Docker images are more or less snapshots of a container's configuration. Everything from the filesystem to the network configuration is contained within an image, and an image can be used to quickly create new instances (containers) of itself.
Containers are running instances of a particular image, and that is where all the magic happens. Docker containers can be viewed as tiny virtual machines, but unlike virtual machines, they share the host's system resources directly and have a few other features that VMs do not readily have. You can get more information about this in another Stack Overflow post.
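For example, using the stock `nginx` image purely as an illustration, a single image can back any number of containers:

    # One image, two independent containers spun up from it
    docker run -d --name web1 nginx
    docker run -d --name web2 nginx

    # Both show up as separate running instances of the same image
    docker ps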
Building an image is done either by saving a container (`docker commit <container> <repoTag>`) or by building from a Dockerfile, which is a set of automated build instructions, as if you were making the changes to a container yourself. A Dockerfile also gives end users a running "transaction" of all the commands needed to get your app running.
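For instance, assuming a running container named `mycontainer` and a hypothetical `myapp` repository tag, the two approaches look like this:

    # Option 1: snapshot an existing container's current state into an image
    docker commit mycontainer myapp:v1

    # Option 2: build an image from the Dockerfile in the current directory,
    # replaying its recorded instructions step by step
    docker build -t myapp:v1 .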
> To decrease build time ... of my Docker images
Correct me if I am wrong, but it would seem that you are trying to build your image for each new container. Docker images are only needed to spin up a container. Yes, building them does take a while, especially from Dockerfiles, but once they are built it takes a trivial amount of time to spin up a container with your desired app, which is really all you need. Again, Docker images are saved states of previous container configurations, and loading a saved state does not and should not consume a lot of time, so you really shouldn't be concerned with a Dockerfile's build time.
---
Despite this, working to decrease a Dockerfile's build time and a container's final size is still a valid question, and turning to automated dependency resolution is a common approach. In fact, I asked a similar question nearly 2 years ago, so it may possess some information that can aid in this endeavor.
However...
> To decrease build time and reduce deployment time of my Docker images, I need to get the minimum size of context sent to these images.
To which Taco, who answered my earlier question, would have replied:
> Docker isn't going to offer you painless builds. Docker doesn't know what you want.
Yes, it certainly would be less of a hassle if Docker knew what you wanted from the get-go, but the fact remains that you need to tell it exactly what you want if you are aiming for the best size and build time. That said, there is more than one way to get there:
- One blatantly obvious approach, as Andreas Wederbrand has mentioned in this very same post, is to use the app's logs from a previous run to verify what it does or doesn't need. Suppose you did build one of your project apps by dumping all possible dependencies into it. You could systematically take out all the dependencies, run the app, and store its failure logs; then add a dependency back, run the app again, and compare the new logs against the old ones. If the output is the same, the dependency changed nothing and can be removed; otherwise, keep it and repeat with the next one.
If I wrote this particular procedure in a Dockerfile, it might go a little something like this, assuming the container is built from a Linux system:
    #ASSUMING LINUX CONTAINER!
    ...
    WORKDIR path/to/place/project
    RUN mkdir dependencyTemp
    COPY path/to/project/and/dependencies/ .
    # Sketch only: "run_app" stands in for whatever hypothetical command starts
    # your app and writes its logs to the file named in $1. Assume the COPY
    # above placed the app's dependencies under ./dependencies.
    # Set every dependency aside, add them back one at a time, and drop any
    # whose presence leaves the logs unchanged.
    RUN mv dependencies/* dependencyTemp/ ; \
        run_app logs.orig ; \
        for dep in dependencyTemp/*; do \
            mv "$dep" dependencies/ ; \
            run_app logs.new ; \
            if cmp -s logs.orig logs.new ; then \
                mv "dependencies/$(basename "$dep")" dependencyTemp/ ; \
            else \
                mv logs.new logs.orig ; \
            fi ; \
        done
This, however, is terribly inefficient, as you are starting and stopping your application internally to find the dependencies needed for your app, resulting in a terribly long build time. True, it would somewhat counter the issue of finding the dependencies yourself and reduce some size, but it may not work 100% of the time, and it is probably going to take you less time to find what dependencies your app needs than to design code to bridge that gap.
- Another solution/alternative, albeit more complicated, is to link containers via networking. Networking containers has remained a challenge for me, but it is straightforward in what you would want to accomplish with it. Say you spin up 3 containers: 2 of them are projects, the other a dependency container. Through the network, each project container can reference the dependency container and obtain all needed dependencies, similar to your current setup. Unlike your current setup, however, the dependencies are not located inside the app images, which means your other apps can be built with the bare minimum size and time.
However, should the dependency container go down, the other apps would go down with it, which may not make for a stable system in the long run. Additionally, you would have to stop and restart every container each time you needed to add a new dependency or project.
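In CLI terms, such a setup might look like this (the network, container, and image names below are all hypothetical):

    # Create a user-defined network so containers can resolve each other by name
    docker network create appNet

    # depsImage serves the shared dependencies; projectOneImage and
    # projectTwoImage are the two project apps
    docker run -d --network appNet --name deps depsImage
    docker run -d --network appNet --name app1 projectOneImage
    docker run -d --network appNet --name app2 projectTwoImage

    # app1 and app2 can now reach the dependency container at the hostname "deps"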
- Lastly, if your containers are going to be kept locally, you could look into volumes. Volumes are a nifty way of mounting file systems into active containers so that applications within the containers can reference files that are not explicitly there. This translates to a more elegant docker build, as all dependencies can legitimately be "shared" without having to be explicitly included.
As an added bonus, since it is a live mount, you can add dependencies and files once and update all of the apps that need them simultaneously. However, volumes do not work very well when you look to scale your projects beyond your local system, and they are subject to local tampering.
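As a sketch (the volume name, mount path, and image names are made up), wiring a shared volume into two app containers could look like this:

    # Create a named volume to hold the shared dependencies
    docker volume create sharedDeps

    # Mount the same volume into each app container; because it is a live
    # mount, changes to the volume are visible to every container at once
    docker run -d --name app1 -v sharedDeps:/opt/app/dependencies projectOneImage
    docker run -d --name app2 -v sharedDeps:/opt/app/dependencies projectTwoImage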
---
The bottom line is that Docker cannot auto-resolve dependencies for you, and the workarounds are far too complicated and/or time-consuming to remotely consider for your desired solution, since it would be much faster to figure out and specify the dependencies yourself. If you want to go out and develop the strategy yourself, go right ahead.