Docker provides some isolation between the programs in the image and your host environment. A Docker image may package anything from a single binary to a full environment (everything but the Linux kernel).
See it as a convenience: it gives you a practical way to deploy programs that require a complex environment which may conflict with the software installed on the host. For instance, if you're trying to package a webapp (e.g. some blog software), the Docker container lets you ship your application code along with a tested version of its interpreter (PHP, Python, etc.), a compatible webserver, and maybe a database environment, all together. From the perspective of the user installing your container/app, nothing other than the container is needed to run the app. It's all self-contained, and it's simpler than setting up a virtual machine.
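As a minimal sketch of the blog example (the image name `myblog`, the file layout, and the port mapping are placeholders, not a prescribed setup):

```sh
cat > Dockerfile <<'EOF'
# Tested interpreter + webserver in one base image
FROM php:8-apache
# Ship the application code alongside it
COPY . /var/www/html/
EOF

docker build -t myblog .          # bake code + environment into one image
docker run -d -p 8080:80 myblog   # the user needs nothing but this image
```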
If your binaries in the image depend on an `ls` command, then you'd include that as well. Generally, the image contains a binary (the entry point) along with all of its dependencies.
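At the single-binary end of the spectrum, that can look like the sketch below (assuming `myapp` and the bundled `ls` are statically linked and sit in the build context; the names are hypothetical):

```sh
cat > Dockerfile <<'EOF'
# Empty base image: the container holds only what you copy in
FROM scratch
# The entry point itself (assumed statically linked)
COPY myapp /myapp
# If myapp shells out to `ls`, ship a static ls as a dependency too
COPY ls /bin/ls
ENTRYPOINT ["/myapp"]
EOF

docker build -t myapp .
```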
If you're familiar with `chroot`s, you may see a Docker image as a fancy chroot where the network and process address space are also isolated, in addition to the file system.
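You can observe each kind of isolation directly (assuming the stock `alpine` image, which bundles these busybox tools):

```sh
# Isolated process space: the container only sees its own processes,
# so `ps` itself shows up as PID 1
docker run --rm alpine ps

# Isolated file system: the container's own root, not the host's
docker run --rm alpine ls /

# Isolated network stack: the container's own interfaces
docker run --rm alpine ip addr
```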
You can think of Docker Hub as an app store.
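Continuing the analogy, "installing" from that store is just a pull and a run (using the official `nginx` image as an example):

```sh
docker pull nginx                  # fetch the published image
docker run -d -p 8080:80 nginx     # "install and launch" in one step
```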