Say I want to integrate the latest version of AwesomeTool into a Docker container (say one running CentOS), but there is no RPM for it available in any repo (or the RPM is out of date and I want to run the latest version).
So I download awesometool.tar.gz, unpack it, and run `./configure && make && make install`. Then I realize it has installed things in all sorts of locations (libraries, binaries, symlinks...) and I need to get all of that into my Docker image during a `docker build`.
Should I:
1) do the build during the Docker image build, installing all the necessary tools (gcc, make, etc.) beforehand and perhaps removing them after the install (see the Dockerfile sketch after this list), or
2) build the software outside the image, then find a way to copy or package all the install artifacts into the image (basically what you would do when making an RPM).
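To illustrate (1), here is roughly what I have in mind. The tarball name, URL, and package list are placeholders; the point is doing everything in a single RUN so the toolchain and build tree never persist in a committed layer:

```
FROM centos:7

# Install the toolchain, build, install, and clean up in ONE layer.
# Removing gcc/make in a later RUN would not shrink the image, since
# the earlier layer would still contain them.
RUN yum install -y gcc make \
    && curl -fSL -o /tmp/awesometool.tar.gz https://example.com/awesometool.tar.gz \
    && tar -xzf /tmp/awesometool.tar.gz -C /tmp \
    && cd /tmp/awesometool \
    && ./configure && make && make install \
    && cd / \
    && rm -rf /tmp/awesometool /tmp/awesometool.tar.gz \
    && yum remove -y gcc make \
    && yum clean all
```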
One way to make (2) easier would be to install into a PREFIX and copy that tree into the image during the `docker build`, but it would need to be copied to a matching location or shared objects may not resolve correctly; a sketch follows below.
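For example, assuming AwesomeTool has a standard autotools build that honors DESTDIR, I could configure it for its real prefix but stage the install into a scratch directory on the host, so every artifact lands under one root without tracking files by hand:

```
# On the host: real prefix, staged install root.
./configure --prefix=/usr/local
make
make install DESTDIR="$PWD/staging"

# Paths inside the archive match the layout the binaries expect.
tar -czf awesometool-root.tar.gz -C staging .
```

Then the Dockerfile only has to unpack the archive at / so everything ends up at the paths it was configured for:

```
FROM centos:7

# ADD auto-extracts a local tarball into the destination directory.
ADD awesometool-root.tar.gz /

# CentOS does not search /usr/local/lib by default, so register it
# and refresh the dynamic linker cache for the new shared objects.
RUN echo /usr/local/lib > /etc/ld.so.conf.d/usr-local.conf && ldconfig
```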
An advantage of (1) is that the build is encapsulated in the Dockerfile, but it may leave all those build artifacts lying around unless a very specific clean-up is done.
An advantage of (2) is that the Docker image ends up cleaner, but it requires external build steps and possibly tricky work tracking down all the artifacts that need to be copied, which may change whenever a new version of AwesomeTool is released and therefore needs ongoing maintenance.
How do others approach this problem?