
I want to build on top of a Windows Docker container by installing a couple of programs. The installer files total about 0.5 GB, and I want to keep the layers as small as possible. I was hoping I could run the setup files straight from the build context and have the build context swept away at the end, so a needless copy of the setup.exe source files doesn't end up embedded in my container layers. However, I haven't found an example where this is the case. Instead, I mostly see people run a COPY command into a temporary build folder, run their setup, then remove the folder. Won't those files still be in the container layers, since the COPY command creates a new layer when it finishes?

I don't know whether the container can see the build context directly. I was hoping for some magical folder filled with the build-context files that a script could use, but I haven't found anything.

The alternative seems to be standing up a private file server and performing a RUN that downloads the files from that server, unpacks them, runs the install, and removes them (all as one Docker step). I understand this would make the files more available to others who need to rerun the build, but I'm not convinced we'll ever need to rerun it. It's unlikely to change, since the container will build patches for a legacy application. It just seems like a lot to host files on a private, public-facing server for something that will be called once every couple of years, if ever.

So are these my two options?

  1. Make a container with needless copies of source files embedded within
  2. Host the files on a private file server and download/install/remove them

Or am I missing another option or point about how the containers work?
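For concreteness, option 2 would look roughly like this in a Dockerfile. This is only a sketch: the server URL, archive name, and installer switches are placeholders for whatever your private server and legacy installer actually use.

```dockerfile
# escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Download, install, and delete in a SINGLE RUN step, so no layer
# ever contains the installer files.
# https://files.example.internal/legacy-setup.zip is a placeholder
# for a private file server you would host yourself.
RUN Invoke-WebRequest -Uri 'https://files.example.internal/legacy-setup.zip' -OutFile C:\setup.zip; `
    Expand-Archive -Path C:\setup.zip -DestinationPath C:\setup; `
    Start-Process C:\setup\setup.exe -ArgumentList '/quiet','/norestart' -Wait; `
    Remove-Item -Recurse -Force C:\setup, C:\setup.zip
```

Because download, install, and cleanup happen inside one RUN instruction, the resulting layer records only the installed state, not the source files.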

Ben Zuill-Smith
    Have you considered a multi-stage build? https://docs.docker.com/develop/develop-images/multistage-build/ – jonrsharpe Jun 25 '21 at 16:45
  • yeah, but that appeared to be a case where you have some simple output from one stage, and copy it to another container for the next stage. In my case, I have no output to copy to the next stage. The output is a full windows container with a couple extra installed applications. This container will eventually be used to build our app. – Ben Zuill-Smith Jun 25 '21 at 17:17

3 Answers


It's a long shot, as Windows is tricky with its file system, but you could do it this way:

  • In your Dockerfile, use a COPY command, run the install, then RUN del ... to remove the installation files
  • Build your image docker build -t my-large-image:latest .
  • Run your image docker run --name my-large-container my-large-image:latest
  • Stop the container
  • Export your container filesystem docker export my-large-container > my-large-container.tar
  • Import the filesystem to a new image cat my-large-container.tar | docker import - my-small-image

The caveat is that you need to run the container once, which might not be what you want. Also, I haven't tested this with Windows containers, sorry.
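Collected into one sequence, the steps above would be (this requires a running Docker daemon, and the image/container names are just the ones used above). One thing to be aware of: as far as I know, docker import keeps only the filesystem, so image metadata such as ENTRYPOINT and ENV would need to be re-specified.

```
# Build, run once, then stop the container.
docker build -t my-large-image:latest .
docker run --name my-large-container my-large-image:latest
docker stop my-large-container

# Export the container's filesystem and re-import it as a single-layer image.
docker export my-large-container > my-large-container.tar
docker import my-large-container.tar my-small-image

# docker import can restore metadata via --change, e.g.:
#   docker import --change 'ENTRYPOINT ["app.exe"]' my-large-container.tar my-small-image
```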

Sirode

I usually do the download or copy in one step, then in the next step I do the silent installation and remove the installer.

# escape=`
FROM mcr.microsoft.com/dotnet/framework/wcf:4.8-windowsservercore-ltsc2016
SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop'; $ProgressPreference = 'SilentlyContinue';"]
ADD https://download.visualstudio.microsoft.com/download/pr/6afa582f-fa26-4a73-8cb9-194321e85f8d/ecea51ead62beb7acc73ad9799511ffdb3083ad384fe04ec50e2cbecfb426482/VS_RemoteTools.exe VS_RemoteTools_x64.exe
RUN Start-Process .\VS_RemoteTools_x64.exe -ArgumentList @('/install','/quiet','/norestart') -NoNewWindow -Wait; `
  Remove-Item -Path C:\VS_RemoteTools_x64.exe -Force;

But otherwise, I don't think you can mount a custom volume while it's being built.

Antebios
  • Yes, but to my understanding, every command creates a layer. Layers are like a git commit, so your container now has the downloaded file embedded forever. That's fine for a tiny web installer like the one in your example, but I'm looking at installing legacy apps that don't have tiny installers. I'm literally copying .iso (CD) files into the container, and I must provide ALL the source files for the install before the install can run. – Ben Zuill-Smith Jun 25 '21 at 17:08
  • Maybe something like this will help: https://vsupalov.com/cache-docker-build-dependencies-without-volume-mounting/ – Antebios Jun 26 '21 at 18:18
  • What about running the container as normal, do your installation & clean up, then tag or commit your running container as another image? – Antebios Jun 26 '21 at 18:23
  • that is certainly an option, but then there is no build context. So I have to set up a volume or a download server to host the source files. I am hoping to have a build script to share with people in an open source repo to replicate my success at setting this up easily. I wish docker had a way to host the build context thru layer mocking a web server or just a copy command that doesn't make a layer. – Ben Zuill-Smith Jun 26 '21 at 19:40

I didn't find a satisfactory answer to this. Docker seems designed only for the modern era and assumes you'll be able to download what you need via scripts and tools hitting APIs and file servers. The easiest option I found, and the one I eventually went with, was to host the files on a private file server or service (in my case, AWS S3).

I really wish there were a way to have the files hosted by the Docker daemon itself, e.g. if it acted like a temporary server you could fetch data from over HTTP instead of needing to COPY the files and create a layer. Alas, I found no such feature.

Taking this route made my container about a GB smaller.
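A local approximation of that wished-for feature is to serve the build-context files from a throwaway HTTP server on the build host for the duration of the build, and have a single RUN step fetch, install, and delete them. The sketch below only demonstrates the file-server half on the host; the port, paths, and file contents are illustrative, and the Dockerfile side is shown as a comment.

```shell
# Sketch: serve files to a build from a short-lived local HTTP server,
# instead of COPYing them into a layer. Port/paths are illustrative.
workdir=$(mktemp -d)
echo "installer-bytes" > "$workdir/setup.exe"

# Python's stdlib file server over the directory (Python 3.7+ for --directory).
python3 -m http.server 8123 --directory "$workdir" >/dev/null 2>&1 &
server_pid=$!
sleep 1

# A Dockerfile RUN step on the same host could then do everything in one layer:
#   RUN Invoke-WebRequest http://host.docker.internal:8123/setup.exe -OutFile C:\setup.exe; ...
# Here we just confirm the file is actually served:
fetched=$(curl -fsS http://127.0.0.1:8123/setup.exe)

# Tear the server down once the build finishes.
kill "$server_pid"
echo "$fetched"
```

From inside a build on Docker Desktop, host.docker.internal resolves to the host, so the RUN step can reach the temporary server without the files ever entering the build context.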

Ben Zuill-Smith